Test Report: KVM_Linux_crio 19530

6d579fb1420e6d4e07520b8ad7db429a8522bbcd:2024-08-29:35998

Failed tests (29/320)

Order  Failed test  Duration (s)
33 TestAddons/parallel/Registry 74.22
34 TestAddons/parallel/Ingress 153.03
36 TestAddons/parallel/MetricsServer 352.98
164 TestMultiControlPlane/serial/StopSecondaryNode 141.93
166 TestMultiControlPlane/serial/RestartSecondaryNode 56.58
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 394.85
171 TestMultiControlPlane/serial/StopCluster 141.76
231 TestMultiNode/serial/RestartKeepsNodes 324.72
233 TestMultiNode/serial/StopMultiNode 141.34
240 TestPreload 267.82
248 TestKubernetesUpgrade 370.04
288 TestStartStop/group/old-k8s-version/serial/FirstStart 274.72
300 TestStartStop/group/no-preload/serial/Stop 138.96
303 TestStartStop/group/embed-certs/serial/Stop 139.04
313 TestStartStop/group/old-k8s-version/serial/DeployApp 0.52
316 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 110.55
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
323 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.15
326 TestStartStop/group/old-k8s-version/serial/SecondStart 712.06
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.04
330 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.24
331 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.2
332 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.33
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 386.88
334 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 542.19
335 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 287.28
336 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 157.92
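
Each failure below can be re-run outside CI with the Go test runner. A minimal sketch, assuming a minikube source checkout and a freshly built out/minikube-linux-amd64; the exact flags this job passes may differ (see the repo's Makefile integration target):

    # re-run one failed integration test from this report
    go test ./test/integration -v -timeout 90m -run 'TestAddons/parallel/Registry'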
TestAddons/parallel/Registry (74.22s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.418397ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-dmlc6" [074412f0-2988-4497-a2bb-abd86ddc18ab] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004134567s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-x5bqm" [45f795aa-aca5-41b5-a455-89b285ce9531] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005688176s
addons_test.go:342: (dbg) Run:  kubectl --context addons-344587 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-344587 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-344587 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.082286492s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-344587 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 ip
2024/08/29 19:07:59 [DEBUG] GET http://192.168.39.172:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 addons disable registry --alsologtostderr -v=1
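Both registry pods reported healthy, so the one-minute wget timeout above points at the in-cluster Service/DNS path rather than the pods themselves. A hedged triage sketch, reusing the context and names from the log (the probe pod name dns-probe is illustrative, not part of the test):

    kubectl --context addons-344587 -n kube-system get svc,endpoints registry
    kubectl --context addons-344587 run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system.svc.cluster.local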
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-344587 -n addons-344587
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-344587 logs -n 25: (1.500116514s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| delete  | -p download-only-800504                                                                     | download-only-800504 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| start   | -o=json --download-only                                                                     | download-only-273933 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | -p download-only-273933                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| delete  | -p download-only-273933                                                                     | download-only-273933 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| delete  | -p download-only-800504                                                                     | download-only-800504 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| delete  | -p download-only-273933                                                                     | download-only-273933 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-124601 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | binary-mirror-124601                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41153                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-124601                                                                     | binary-mirror-124601 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| addons  | enable dashboard -p                                                                         | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | addons-344587                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | addons-344587                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-344587 --wait=true                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:58 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:06 UTC | 29 Aug 24 19:07 UTC |
	|         | addons-344587                                                                               |                      |         |         |                     |                     |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | -p addons-344587                                                                            |                      |         |         |                     |                     |
	| addons  | addons-344587 addons                                                                        | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-344587 addons                                                                        | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-344587 ssh cat                                                                       | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | /opt/local-path-provisioner/pvc-d653ba56-6232-4797-9e26-74b3f827dc87_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | addons-344587                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | -p addons-344587                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC |                     |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-344587 ip                                                                            | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:08 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:55:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:55:50.381982   18990 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:55:50.382091   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:50.382099   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:55:50.382103   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:50.382261   18990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 18:55:50.382847   18990 out.go:352] Setting JSON to false
	I0829 18:55:50.383602   18990 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2297,"bootTime":1724955453,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:55:50.383652   18990 start.go:139] virtualization: kvm guest
	I0829 18:55:50.385939   18990 out.go:177] * [addons-344587] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:55:50.387376   18990 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 18:55:50.387387   18990 notify.go:220] Checking for updates...
	I0829 18:55:50.389960   18990 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:55:50.391173   18990 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 18:55:50.392418   18990 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 18:55:50.393615   18990 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:55:50.394904   18990 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:55:50.396433   18990 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:55:50.428475   18990 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 18:55:50.429854   18990 start.go:297] selected driver: kvm2
	I0829 18:55:50.429864   18990 start.go:901] validating driver "kvm2" against <nil>
	I0829 18:55:50.429873   18990 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:55:50.430509   18990 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:55:50.430589   18990 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:55:50.444888   18990 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:55:50.444932   18990 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:55:50.445130   18990 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:55:50.445196   18990 cni.go:84] Creating CNI manager for ""
	I0829 18:55:50.445212   18990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:55:50.445222   18990 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:55:50.445293   18990 start.go:340] cluster config:
	{Name:addons-344587 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-344587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:55:50.445402   18990 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:55:50.447108   18990 out.go:177] * Starting "addons-344587" primary control-plane node in "addons-344587" cluster
	I0829 18:55:50.448355   18990 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:55:50.448396   18990 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:55:50.448405   18990 cache.go:56] Caching tarball of preloaded images
	I0829 18:55:50.448475   18990 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:55:50.448487   18990 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:55:50.448826   18990 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/config.json ...
	I0829 18:55:50.448852   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/config.json: {Name:mkbebd6be4c06f31a480a2816ef4d17f65638f42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:55:50.448990   18990 start.go:360] acquireMachinesLock for addons-344587: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:55:50.449049   18990 start.go:364] duration metric: took 44.089µs to acquireMachinesLock for "addons-344587"
	I0829 18:55:50.449073   18990 start.go:93] Provisioning new machine with config: &{Name:addons-344587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-344587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
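The provisioning config echoed above is the same structure that was just persisted to the profile's config.json (path from the WriteFile line earlier in the log); pretty-printing that file is often easier than reading the wrapped log line. The jq invocation is illustrative:

    jq . /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/config.json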
	I0829 18:55:50.449138   18990 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 18:55:50.450643   18990 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0829 18:55:50.450772   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:55:50.450820   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:55:50.464579   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36837
	I0829 18:55:50.464968   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:55:50.465424   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:55:50.465444   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:55:50.465798   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:55:50.465987   18990 main.go:141] libmachine: (addons-344587) Calling .GetMachineName
	I0829 18:55:50.466159   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:55:50.466300   18990 start.go:159] libmachine.API.Create for "addons-344587" (driver="kvm2")
	I0829 18:55:50.466328   18990 client.go:168] LocalClient.Create starting
	I0829 18:55:50.466375   18990 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem
	I0829 18:55:50.795899   18990 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem
	I0829 18:55:50.842743   18990 main.go:141] libmachine: Running pre-create checks...
	I0829 18:55:50.842764   18990 main.go:141] libmachine: (addons-344587) Calling .PreCreateCheck
	I0829 18:55:50.843261   18990 main.go:141] libmachine: (addons-344587) Calling .GetConfigRaw
	I0829 18:55:50.843665   18990 main.go:141] libmachine: Creating machine...
	I0829 18:55:50.843678   18990 main.go:141] libmachine: (addons-344587) Calling .Create
	I0829 18:55:50.843802   18990 main.go:141] libmachine: (addons-344587) Creating KVM machine...
	I0829 18:55:50.844841   18990 main.go:141] libmachine: (addons-344587) DBG | found existing default KVM network
	I0829 18:55:50.845576   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:50.845449   19012 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0829 18:55:50.845599   18990 main.go:141] libmachine: (addons-344587) DBG | created network xml: 
	I0829 18:55:50.845612   18990 main.go:141] libmachine: (addons-344587) DBG | <network>
	I0829 18:55:50.845626   18990 main.go:141] libmachine: (addons-344587) DBG |   <name>mk-addons-344587</name>
	I0829 18:55:50.845668   18990 main.go:141] libmachine: (addons-344587) DBG |   <dns enable='no'/>
	I0829 18:55:50.845695   18990 main.go:141] libmachine: (addons-344587) DBG |   
	I0829 18:55:50.845709   18990 main.go:141] libmachine: (addons-344587) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0829 18:55:50.845719   18990 main.go:141] libmachine: (addons-344587) DBG |     <dhcp>
	I0829 18:55:50.845731   18990 main.go:141] libmachine: (addons-344587) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0829 18:55:50.845742   18990 main.go:141] libmachine: (addons-344587) DBG |     </dhcp>
	I0829 18:55:50.845753   18990 main.go:141] libmachine: (addons-344587) DBG |   </ip>
	I0829 18:55:50.845762   18990 main.go:141] libmachine: (addons-344587) DBG |   
	I0829 18:55:50.845771   18990 main.go:141] libmachine: (addons-344587) DBG | </network>
	I0829 18:55:50.845781   18990 main.go:141] libmachine: (addons-344587) DBG | 
	I0829 18:55:50.850798   18990 main.go:141] libmachine: (addons-344587) DBG | trying to create private KVM network mk-addons-344587 192.168.39.0/24...
	I0829 18:55:50.914004   18990 main.go:141] libmachine: (addons-344587) DBG | private KVM network mk-addons-344587 192.168.39.0/24 created
	I0829 18:55:50.914032   18990 main.go:141] libmachine: (addons-344587) Setting up store path in /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587 ...
	I0829 18:55:50.914058   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:50.913976   19012 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 18:55:50.914082   18990 main.go:141] libmachine: (addons-344587) Building disk image from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0829 18:55:50.914101   18990 main.go:141] libmachine: (addons-344587) Downloading /home/jenkins/minikube-integration/19530-11185/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0829 18:55:51.165621   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:51.165525   19012 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa...
	I0829 18:55:51.361310   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:51.361174   19012 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/addons-344587.rawdisk...
	I0829 18:55:51.361334   18990 main.go:141] libmachine: (addons-344587) DBG | Writing magic tar header
	I0829 18:55:51.361345   18990 main.go:141] libmachine: (addons-344587) DBG | Writing SSH key tar header
	I0829 18:55:51.361360   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:51.361285   19012 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587 ...
	I0829 18:55:51.361376   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587
	I0829 18:55:51.361413   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587 (perms=drwx------)
	I0829 18:55:51.361435   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines (perms=drwxr-xr-x)
	I0829 18:55:51.361442   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube (perms=drwxr-xr-x)
	I0829 18:55:51.361449   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines
	I0829 18:55:51.361457   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 18:55:51.361462   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185
	I0829 18:55:51.361470   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 18:55:51.361480   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins
	I0829 18:55:51.361487   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home
	I0829 18:55:51.361492   18990 main.go:141] libmachine: (addons-344587) DBG | Skipping /home - not owner
	I0829 18:55:51.361519   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185 (perms=drwxrwxr-x)
	I0829 18:55:51.361543   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 18:55:51.361552   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 18:55:51.361557   18990 main.go:141] libmachine: (addons-344587) Creating domain...
	I0829 18:55:51.362695   18990 main.go:141] libmachine: (addons-344587) define libvirt domain using xml: 
	I0829 18:55:51.362720   18990 main.go:141] libmachine: (addons-344587) <domain type='kvm'>
	I0829 18:55:51.362728   18990 main.go:141] libmachine: (addons-344587)   <name>addons-344587</name>
	I0829 18:55:51.362733   18990 main.go:141] libmachine: (addons-344587)   <memory unit='MiB'>4000</memory>
	I0829 18:55:51.362739   18990 main.go:141] libmachine: (addons-344587)   <vcpu>2</vcpu>
	I0829 18:55:51.362743   18990 main.go:141] libmachine: (addons-344587)   <features>
	I0829 18:55:51.362748   18990 main.go:141] libmachine: (addons-344587)     <acpi/>
	I0829 18:55:51.362755   18990 main.go:141] libmachine: (addons-344587)     <apic/>
	I0829 18:55:51.362760   18990 main.go:141] libmachine: (addons-344587)     <pae/>
	I0829 18:55:51.362764   18990 main.go:141] libmachine: (addons-344587)     
	I0829 18:55:51.362770   18990 main.go:141] libmachine: (addons-344587)   </features>
	I0829 18:55:51.362775   18990 main.go:141] libmachine: (addons-344587)   <cpu mode='host-passthrough'>
	I0829 18:55:51.362780   18990 main.go:141] libmachine: (addons-344587)   
	I0829 18:55:51.362786   18990 main.go:141] libmachine: (addons-344587)   </cpu>
	I0829 18:55:51.362794   18990 main.go:141] libmachine: (addons-344587)   <os>
	I0829 18:55:51.362799   18990 main.go:141] libmachine: (addons-344587)     <type>hvm</type>
	I0829 18:55:51.362807   18990 main.go:141] libmachine: (addons-344587)     <boot dev='cdrom'/>
	I0829 18:55:51.362812   18990 main.go:141] libmachine: (addons-344587)     <boot dev='hd'/>
	I0829 18:55:51.362820   18990 main.go:141] libmachine: (addons-344587)     <bootmenu enable='no'/>
	I0829 18:55:51.362826   18990 main.go:141] libmachine: (addons-344587)   </os>
	I0829 18:55:51.362855   18990 main.go:141] libmachine: (addons-344587)   <devices>
	I0829 18:55:51.362878   18990 main.go:141] libmachine: (addons-344587)     <disk type='file' device='cdrom'>
	I0829 18:55:51.362903   18990 main.go:141] libmachine: (addons-344587)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/boot2docker.iso'/>
	I0829 18:55:51.362920   18990 main.go:141] libmachine: (addons-344587)       <target dev='hdc' bus='scsi'/>
	I0829 18:55:51.362957   18990 main.go:141] libmachine: (addons-344587)       <readonly/>
	I0829 18:55:51.362969   18990 main.go:141] libmachine: (addons-344587)     </disk>
	I0829 18:55:51.362980   18990 main.go:141] libmachine: (addons-344587)     <disk type='file' device='disk'>
	I0829 18:55:51.362995   18990 main.go:141] libmachine: (addons-344587)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 18:55:51.363011   18990 main.go:141] libmachine: (addons-344587)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/addons-344587.rawdisk'/>
	I0829 18:55:51.363023   18990 main.go:141] libmachine: (addons-344587)       <target dev='hda' bus='virtio'/>
	I0829 18:55:51.363036   18990 main.go:141] libmachine: (addons-344587)     </disk>
	I0829 18:55:51.363048   18990 main.go:141] libmachine: (addons-344587)     <interface type='network'>
	I0829 18:55:51.363068   18990 main.go:141] libmachine: (addons-344587)       <source network='mk-addons-344587'/>
	I0829 18:55:51.363090   18990 main.go:141] libmachine: (addons-344587)       <model type='virtio'/>
	I0829 18:55:51.363098   18990 main.go:141] libmachine: (addons-344587)     </interface>
	I0829 18:55:51.363103   18990 main.go:141] libmachine: (addons-344587)     <interface type='network'>
	I0829 18:55:51.363119   18990 main.go:141] libmachine: (addons-344587)       <source network='default'/>
	I0829 18:55:51.363133   18990 main.go:141] libmachine: (addons-344587)       <model type='virtio'/>
	I0829 18:55:51.363144   18990 main.go:141] libmachine: (addons-344587)     </interface>
	I0829 18:55:51.363151   18990 main.go:141] libmachine: (addons-344587)     <serial type='pty'>
	I0829 18:55:51.363157   18990 main.go:141] libmachine: (addons-344587)       <target port='0'/>
	I0829 18:55:51.363165   18990 main.go:141] libmachine: (addons-344587)     </serial>
	I0829 18:55:51.363192   18990 main.go:141] libmachine: (addons-344587)     <console type='pty'>
	I0829 18:55:51.363222   18990 main.go:141] libmachine: (addons-344587)       <target type='serial' port='0'/>
	I0829 18:55:51.363237   18990 main.go:141] libmachine: (addons-344587)     </console>
	I0829 18:55:51.363245   18990 main.go:141] libmachine: (addons-344587)     <rng model='virtio'>
	I0829 18:55:51.363258   18990 main.go:141] libmachine: (addons-344587)       <backend model='random'>/dev/random</backend>
	I0829 18:55:51.363267   18990 main.go:141] libmachine: (addons-344587)     </rng>
	I0829 18:55:51.363279   18990 main.go:141] libmachine: (addons-344587)     
	I0829 18:55:51.363289   18990 main.go:141] libmachine: (addons-344587)     
	I0829 18:55:51.363301   18990 main.go:141] libmachine: (addons-344587)   </devices>
	I0829 18:55:51.363319   18990 main.go:141] libmachine: (addons-344587) </domain>
	I0829 18:55:51.363334   18990 main.go:141] libmachine: (addons-344587) 
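At this point the driver has defined the domain from the XML above and will poll libvirt's DHCP leases for an address (the retry loop that follows). The same state can be inspected by hand with virsh, assuming the qemu:///system connection used throughout this log:

    virsh -c qemu:///system net-dumpxml mk-addons-344587   # private network created earlier
    virsh -c qemu:///system domifaddr addons-344587        # lease the wait loop is polling for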
	I0829 18:55:51.369959   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:1d:b5:8e in network default
	I0829 18:55:51.370417   18990 main.go:141] libmachine: (addons-344587) Ensuring networks are active...
	I0829 18:55:51.370435   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:51.371026   18990 main.go:141] libmachine: (addons-344587) Ensuring network default is active
	I0829 18:55:51.371287   18990 main.go:141] libmachine: (addons-344587) Ensuring network mk-addons-344587 is active
	I0829 18:55:51.372284   18990 main.go:141] libmachine: (addons-344587) Getting domain xml...
	I0829 18:55:51.372893   18990 main.go:141] libmachine: (addons-344587) Creating domain...
	I0829 18:55:52.746079   18990 main.go:141] libmachine: (addons-344587) Waiting to get IP...
	I0829 18:55:52.746802   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:52.747139   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:52.747169   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:52.747092   19012 retry.go:31] will retry after 281.547466ms: waiting for machine to come up
	I0829 18:55:53.030572   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:53.031020   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:53.031046   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:53.030987   19012 retry.go:31] will retry after 320.244389ms: waiting for machine to come up
	I0829 18:55:53.352319   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:53.352723   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:53.352751   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:53.352677   19012 retry.go:31] will retry after 475.897243ms: waiting for machine to come up
	I0829 18:55:53.830271   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:53.830799   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:53.830826   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:53.830758   19012 retry.go:31] will retry after 415.393917ms: waiting for machine to come up
	I0829 18:55:54.247242   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:54.247686   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:54.247722   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:54.247646   19012 retry.go:31] will retry after 663.283802ms: waiting for machine to come up
	I0829 18:55:54.912468   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:54.912891   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:54.912917   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:54.912861   19012 retry.go:31] will retry after 823.255008ms: waiting for machine to come up
	I0829 18:55:55.737292   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:55.737672   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:55.737702   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:55.737654   19012 retry.go:31] will retry after 924.09927ms: waiting for machine to come up
	I0829 18:55:56.663683   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:56.664092   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:56.664117   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:56.664046   19012 retry.go:31] will retry after 1.475206367s: waiting for machine to come up
	I0829 18:55:58.141547   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:58.142031   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:58.142052   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:58.142003   19012 retry.go:31] will retry after 1.352228994s: waiting for machine to come up
	I0829 18:55:59.496409   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:59.496870   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:59.496896   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:59.496821   19012 retry.go:31] will retry after 2.187164775s: waiting for machine to come up
	I0829 18:56:01.685976   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:01.686371   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:56:01.686393   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:56:01.686346   19012 retry.go:31] will retry after 2.735265922s: waiting for machine to come up
	I0829 18:56:04.422715   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:04.423157   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:56:04.423172   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:56:04.423133   19012 retry.go:31] will retry after 2.867752561s: waiting for machine to come up
	I0829 18:56:07.292218   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:07.292615   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:56:07.292641   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:56:07.292570   19012 retry.go:31] will retry after 4.389513147s: waiting for machine to come up
	I0829 18:56:11.683601   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.684092   18990 main.go:141] libmachine: (addons-344587) Found IP for machine: 192.168.39.172
	I0829 18:56:11.684118   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has current primary IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.684127   18990 main.go:141] libmachine: (addons-344587) Reserving static IP address...
	I0829 18:56:11.684501   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find host DHCP lease matching {name: "addons-344587", mac: "52:54:00:03:42:33", ip: "192.168.39.172"} in network mk-addons-344587
	I0829 18:56:11.822664   18990 main.go:141] libmachine: (addons-344587) DBG | Getting to WaitForSSH function...
	I0829 18:56:11.822759   18990 main.go:141] libmachine: (addons-344587) Reserved static IP address: 192.168.39.172
	I0829 18:56:11.822780   18990 main.go:141] libmachine: (addons-344587) Waiting for SSH to be available...
	I0829 18:56:11.825035   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.825430   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:minikube Clientid:01:52:54:00:03:42:33}
	I0829 18:56:11.825460   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.825623   18990 main.go:141] libmachine: (addons-344587) DBG | Using SSH client type: external
	I0829 18:56:11.825652   18990 main.go:141] libmachine: (addons-344587) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa (-rw-------)
	I0829 18:56:11.825693   18990 main.go:141] libmachine: (addons-344587) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 18:56:11.825713   18990 main.go:141] libmachine: (addons-344587) DBG | About to run SSH command:
	I0829 18:56:11.825728   18990 main.go:141] libmachine: (addons-344587) DBG | exit 0
	I0829 18:56:11.958392   18990 main.go:141] libmachine: (addons-344587) DBG | SSH cmd err, output: <nil>: 
	I0829 18:56:11.958658   18990 main.go:141] libmachine: (addons-344587) KVM machine creation complete!
	I0829 18:56:11.958964   18990 main.go:141] libmachine: (addons-344587) Calling .GetConfigRaw
	I0829 18:56:11.979533   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:11.979843   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:11.980024   18990 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 18:56:11.980042   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:11.981444   18990 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 18:56:11.981459   18990 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 18:56:11.981466   18990 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 18:56:11.981474   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:11.983980   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.984292   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:11.984313   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.984444   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:11.984613   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:11.984770   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:11.984916   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:11.985127   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:11.985342   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:11.985357   18990 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 18:56:12.089723   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:56:12.089742   18990 main.go:141] libmachine: Detecting the provisioner...
	I0829 18:56:12.089749   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.092754   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.093106   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.093131   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.093284   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:12.093486   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.093657   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.093787   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:12.093942   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:12.094126   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:12.094139   18990 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 18:56:12.199320   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 18:56:12.199392   18990 main.go:141] libmachine: found compatible host: buildroot
	I0829 18:56:12.199401   18990 main.go:141] libmachine: Provisioning with buildroot...
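
The provisioner choice above comes from parsing the `cat /etc/os-release` output, a simple KEY=VALUE file; the ID field (buildroot) selects the buildroot provisioner. A rough sketch of that parsing step (not libmachine's actual code):

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // parseOSRelease turns /etc/os-release contents into a map; values may
    // be quoted (PRETTY_NAME="Buildroot 2023.02.9"), so quotes are stripped.
    func parseOSRelease(contents string) map[string]string {
    	kv := make(map[string]string)
    	sc := bufio.NewScanner(strings.NewReader(contents))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || strings.HasPrefix(line, "#") {
    			continue
    		}
    		k, v, ok := strings.Cut(line, "=")
    		if !ok {
    			continue
    		}
    		kv[k] = strings.Trim(v, `"`)
    	}
    	return kv
    }

    func main() {
    	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\n"
    	fmt.Println(parseOSRelease(out)["ID"]) // "buildroot" selects the provisioner
    }
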
	I0829 18:56:12.199410   18990 main.go:141] libmachine: (addons-344587) Calling .GetMachineName
	I0829 18:56:12.199644   18990 buildroot.go:166] provisioning hostname "addons-344587"
	I0829 18:56:12.199675   18990 main.go:141] libmachine: (addons-344587) Calling .GetMachineName
	I0829 18:56:12.199823   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.202332   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.202658   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.202684   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.202849   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:12.203092   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.203227   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.203390   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:12.203529   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:12.203692   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:12.203705   18990 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-344587 && echo "addons-344587" | sudo tee /etc/hostname
	I0829 18:56:12.320497   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-344587
	
	I0829 18:56:12.320526   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.323075   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.323387   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.323411   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.323589   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:12.323786   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.323975   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.324113   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:12.324283   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:12.324480   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:12.324504   18990 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-344587' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-344587/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-344587' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:56:12.439927   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:56:12.439966   18990 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 18:56:12.440002   18990 buildroot.go:174] setting up certificates
	I0829 18:56:12.440016   18990 provision.go:84] configureAuth start
	I0829 18:56:12.440030   18990 main.go:141] libmachine: (addons-344587) Calling .GetMachineName
	I0829 18:56:12.440343   18990 main.go:141] libmachine: (addons-344587) Calling .GetIP
	I0829 18:56:12.442796   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.443174   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.443192   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.443334   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.445622   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.446147   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.446173   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.446336   18990 provision.go:143] copyHostCerts
	I0829 18:56:12.446417   18990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 18:56:12.446555   18990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 18:56:12.446655   18990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 18:56:12.446738   18990 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.addons-344587 san=[127.0.0.1 192.168.39.172 addons-344587 localhost minikube]
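
The server cert above is issued with the SAN list from the log (127.0.0.1, 192.168.39.172, addons-344587, localhost, minikube) and signed by the CA key pair. A self-contained sketch of issuing a cert with those SANs; it is self-signed here for brevity (minikube signs with its CA), and the RSA key size and validity period are assumptions:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Error handling elided for brevity.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-344587"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SAN entries from the log line above:
    		DNSNames:    []string{"addons-344587", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.172")},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
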
	I0829 18:56:12.656811   18990 provision.go:177] copyRemoteCerts
	I0829 18:56:12.656860   18990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:56:12.656881   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.659602   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.659950   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.659986   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.660127   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:12.660284   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.660452   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:12.660569   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:12.740979   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 18:56:12.764765   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 18:56:12.789751   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:56:12.814999   18990 provision.go:87] duration metric: took 374.97013ms to configureAuth
	I0829 18:56:12.815029   18990 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:56:12.815219   18990 config.go:182] Loaded profile config "addons-344587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:56:12.815307   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.817789   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.818126   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.818155   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.818312   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:12.818507   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.818700   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.818849   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:12.819046   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:12.819234   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:12.819254   18990 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:56:13.034009   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 18:56:13.034032   18990 main.go:141] libmachine: Checking connection to Docker...
	I0829 18:56:13.034040   18990 main.go:141] libmachine: (addons-344587) Calling .GetURL
	I0829 18:56:13.035499   18990 main.go:141] libmachine: (addons-344587) DBG | Using libvirt version 6000000
	I0829 18:56:13.037684   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.038017   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.038048   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.038196   18990 main.go:141] libmachine: Docker is up and running!
	I0829 18:56:13.038210   18990 main.go:141] libmachine: Reticulating splines...
	I0829 18:56:13.038219   18990 client.go:171] duration metric: took 22.571881082s to LocalClient.Create
	I0829 18:56:13.038239   18990 start.go:167] duration metric: took 22.5719417s to libmachine.API.Create "addons-344587"
	I0829 18:56:13.038262   18990 start.go:293] postStartSetup for "addons-344587" (driver="kvm2")
	I0829 18:56:13.038277   18990 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:56:13.038298   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:13.038570   18990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:56:13.038589   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:13.040755   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.041066   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.041089   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.041223   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:13.041426   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:13.041595   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:13.041734   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:13.124537   18990 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:56:13.129327   18990 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 18:56:13.129348   18990 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 18:56:13.129400   18990 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 18:56:13.129423   18990 start.go:296] duration metric: took 91.15174ms for postStartSetup
	I0829 18:56:13.129451   18990 main.go:141] libmachine: (addons-344587) Calling .GetConfigRaw
	I0829 18:56:13.130128   18990 main.go:141] libmachine: (addons-344587) Calling .GetIP
	I0829 18:56:13.132903   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.133252   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.133280   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.133484   18990 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/config.json ...
	I0829 18:56:13.133661   18990 start.go:128] duration metric: took 22.68451279s to createHost
	I0829 18:56:13.133686   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:13.135794   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.136096   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.136138   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.136227   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:13.136392   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:13.136531   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:13.136674   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:13.136811   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:13.136983   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:13.136995   18990 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 18:56:13.239138   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724957773.212403643
	
	I0829 18:56:13.239157   18990 fix.go:216] guest clock: 1724957773.212403643
	I0829 18:56:13.239164   18990 fix.go:229] Guest: 2024-08-29 18:56:13.212403643 +0000 UTC Remote: 2024-08-29 18:56:13.133675132 +0000 UTC m=+22.790316868 (delta=78.728511ms)
	I0829 18:56:13.239198   18990 fix.go:200] guest clock delta is within tolerance: 78.728511ms
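
The fix.go lines above compare the guest clock (the `date +%s.%N` output) against the host clock and skip resyncing when the delta is within tolerance. A sketch of that comparison, using the exact timestamps from the log; the one-second threshold is an assumption, only the delta computation is grounded in the output above:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestTime parses `date +%s.%N` output such as "1724957773.212403643".
    func guestTime(out string) (time.Time, error) {
    	secStr, fracStr, _ := strings.Cut(strings.TrimSpace(out), ".")
    	sec, err := strconv.ParseInt(secStr, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	// Pad/truncate the fractional part to nanoseconds (9 digits).
    	frac := (fracStr + "000000000")[:9]
    	nsec, err := strconv.ParseInt(frac, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, _ := guestTime("1724957773.212403643")
    	host := time.Unix(1724957773, 133675132) // "Remote" timestamp from the log
    	delta := guest.Sub(host)                 // 78.728511ms, matching the log
    	const tolerance = time.Second            // assumed threshold
    	fmt.Printf("delta=%v, resync needed: %v\n", delta, delta.Abs() > tolerance)
    }
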
	I0829 18:56:13.239202   18990 start.go:83] releasing machines lock for "addons-344587", held for 22.79014265s
	I0829 18:56:13.239220   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:13.239471   18990 main.go:141] libmachine: (addons-344587) Calling .GetIP
	I0829 18:56:13.241933   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.242288   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.242315   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.242500   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:13.243032   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:13.243240   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:13.243311   18990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:56:13.243361   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:13.243466   18990 ssh_runner.go:195] Run: cat /version.json
	I0829 18:56:13.243481   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:13.245923   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.246013   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.246307   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.246336   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.246367   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.246384   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.246467   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:13.246620   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:13.246682   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:13.246812   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:13.246884   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:13.246957   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:13.247020   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:13.247050   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:13.348157   18990 ssh_runner.go:195] Run: systemctl --version
	I0829 18:56:13.354123   18990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:56:13.512934   18990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 18:56:13.518830   18990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:56:13.518882   18990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:56:13.534127   18990 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 18:56:13.534157   18990 start.go:495] detecting cgroup driver to use...
	I0829 18:56:13.534210   18990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:56:13.549103   18990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:56:13.562524   18990 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:56:13.562603   18990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:56:13.575308   18990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:56:13.588019   18990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:56:13.695971   18990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:56:13.849315   18990 docker.go:233] disabling docker service ...
	I0829 18:56:13.849370   18990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:56:13.863202   18990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:56:13.876345   18990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:56:13.998451   18990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:56:14.110447   18990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:56:14.124269   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:56:14.142618   18990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:56:14.142671   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.152550   18990 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:56:14.152638   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.162565   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.172204   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.182051   18990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:56:14.191938   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.201619   18990 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.218380   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
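
Taken together, the sed edits above leave the CRI-O drop-in pointing at the 3.10 pause image, using the cgroupfs manager with conmon in the pod cgroup, and allowing unprivileged low ports. The relevant fragment of /etc/crio/crio.conf.d/02-crio.conf plausibly ends up looking roughly like this (the TOML section headers are an assumption; only the keys and values come from the commands above):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"
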
	I0829 18:56:14.228433   18990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:56:14.237357   18990 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 18:56:14.237406   18990 ssh_runner.go:195] Run: sudo modprobe br_netfilter
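
The status-255 sysctl error above is an expected negative probe: /proc/sys/net/bridge only exists once br_netfilter is loaded, so the failure triggers the modprobe that follows. The check-then-load pattern, sketched as a hypothetical helper rather than minikube's code:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureBrNetfilter loads br_netfilter only when the bridge sysctl tree
    // is missing, so bridged pod traffic becomes visible to iptables.
    func ensureBrNetfilter() error {
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err == nil {
    		return nil // module already loaded, nothing to do
    	}
    	return exec.Command("sudo", "modprobe", "br_netfilter").Run()
    }

    func main() { fmt.Println(ensureBrNetfilter()) }
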
	I0829 18:56:14.249575   18990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:56:14.259454   18990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:56:14.369394   18990 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 18:56:14.456184   18990 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:56:14.456279   18990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:56:14.460789   18990 start.go:563] Will wait 60s for crictl version
	I0829 18:56:14.460854   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:56:14.464432   18990 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:56:14.504874   18990 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 18:56:14.504990   18990 ssh_runner.go:195] Run: crio --version
	I0829 18:56:14.532672   18990 ssh_runner.go:195] Run: crio --version
	I0829 18:56:14.561543   18990 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 18:56:14.562632   18990 main.go:141] libmachine: (addons-344587) Calling .GetIP
	I0829 18:56:14.564933   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:14.565284   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:14.565303   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:14.565524   18990 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 18:56:14.569376   18990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:56:14.581262   18990 kubeadm.go:883] updating cluster {Name:addons-344587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-344587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:56:14.581356   18990 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:56:14.581398   18990 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:56:14.613224   18990 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 18:56:14.613292   18990 ssh_runner.go:195] Run: which lz4
	I0829 18:56:14.617034   18990 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 18:56:14.621198   18990 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 18:56:14.621221   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 18:56:15.914421   18990 crio.go:462] duration metric: took 1.297408054s to copy over tarball
	I0829 18:56:15.914486   18990 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 18:56:18.044985   18990 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.130478632s)
	I0829 18:56:18.045014   18990 crio.go:469] duration metric: took 2.130566777s to extract the tarball
	I0829 18:56:18.045024   18990 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 18:56:18.081642   18990 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:56:18.123715   18990 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:56:18.123734   18990 cache_images.go:84] Images are preloaded, skipping loading
	I0829 18:56:18.123741   18990 kubeadm.go:934] updating node { 192.168.39.172 8443 v1.31.0 crio true true} ...
	I0829 18:56:18.123833   18990 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-344587 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-344587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:56:18.123903   18990 ssh_runner.go:195] Run: crio config
	I0829 18:56:18.173364   18990 cni.go:84] Creating CNI manager for ""
	I0829 18:56:18.173382   18990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:56:18.173396   18990 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:56:18.173417   18990 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-344587 NodeName:addons-344587 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:56:18.173545   18990 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-344587"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 18:56:18.173599   18990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:56:18.183496   18990 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:56:18.183559   18990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 18:56:18.192837   18990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0829 18:56:18.209828   18990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:56:18.226818   18990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0829 18:56:18.243177   18990 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I0829 18:56:18.246821   18990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
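
The one-liner above is the idempotent hosts-entry pattern: filter out any existing line for the name, append the fresh mapping, and copy the temp file back over /etc/hosts. Roughly the same logic in Go (a sketch; minikube does this in shell exactly as shown, and the demo below only prints the result rather than writing it back via sudo cp):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost rewrites hosts-file content so exactly one line maps name -> ip,
    // mirroring the grep -v / echo / cp pipeline in the log.
    func upsertHost(content, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(content, "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) { // drop stale entries
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	hosts, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(upsertHost(string(hosts), "192.168.39.172", "control-plane.minikube.internal"))
    }
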
	I0829 18:56:18.258454   18990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:56:18.380809   18990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:56:18.399109   18990 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587 for IP: 192.168.39.172
	I0829 18:56:18.399130   18990 certs.go:194] generating shared ca certs ...
	I0829 18:56:18.399144   18990 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:18.399287   18990 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 18:56:18.507759   18990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt ...
	I0829 18:56:18.507786   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt: {Name:mkf2998f14816a9d649599681f5ace2bd3b15bb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:18.507943   18990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key ...
	I0829 18:56:18.507953   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key: {Name:mk0f1ef094971ea9c3f026c8290bde66a6036be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:18.508026   18990 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 18:56:18.881398   18990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt ...
	I0829 18:56:18.881427   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt: {Name:mka4d0216f76512ed90b83996ade7ed626417b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:18.881614   18990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key ...
	I0829 18:56:18.881630   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key: {Name:mka035b87075afcde930c062c2cb1875970dabb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:18.881727   18990 certs.go:256] generating profile certs ...
	I0829 18:56:18.881782   18990 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.key
	I0829 18:56:18.881793   18990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt with IP's: []
	I0829 18:56:19.191129   18990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt ...
	I0829 18:56:19.191157   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: {Name:mk595166ed3f22afaf54fdfb0b502bd573fc8143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.191339   18990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.key ...
	I0829 18:56:19.191354   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.key: {Name:mk24baca044bca79b73024c8a04b788113a0b022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.191449   18990 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key.2d54a6f6
	I0829 18:56:19.191470   18990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt.2d54a6f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172]
	I0829 18:56:19.236337   18990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt.2d54a6f6 ...
	I0829 18:56:19.236366   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt.2d54a6f6: {Name:mk40299d1f1b871b96fc8c21ef18cc9e856fbcfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.236555   18990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key.2d54a6f6 ...
	I0829 18:56:19.236572   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key.2d54a6f6: {Name:mk04263d045cce1f76651eeb698397ced0bec497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.236669   18990 certs.go:381] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt.2d54a6f6 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt
	I0829 18:56:19.236739   18990 certs.go:385] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key.2d54a6f6 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key
	I0829 18:56:19.236796   18990 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.key
	I0829 18:56:19.236809   18990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.crt with IP's: []
	I0829 18:56:19.327890   18990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.crt ...
	I0829 18:56:19.327915   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.crt: {Name:mked626427b26604c6ca53369dde755937686f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.328088   18990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.key ...
	I0829 18:56:19.328101   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.key: {Name:mk953deec79398c279f957cbebec5a918222e73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.328285   18990 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 18:56:19.328319   18990 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 18:56:19.328339   18990 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:56:19.328360   18990 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 18:56:19.328883   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:56:19.352639   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 18:56:19.375448   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:56:19.397949   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:56:19.420280   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 18:56:19.445420   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 18:56:19.469941   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:56:19.493954   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 18:56:19.517481   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:56:19.540749   18990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:56:19.557225   18990 ssh_runner.go:195] Run: openssl version
	I0829 18:56:19.563530   18990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:56:19.574899   18990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:56:19.579661   18990 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:56:19.579718   18990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:56:19.585781   18990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
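
The openssl/ln pair above installs minikubeCA.pem into the OpenSSL trust store, which looks certificates up by subject-name hash (hence the b5213941.0 link name). The same step as a Go helper (an assumed function, not minikube's code; it shells out to the same openssl subcommand):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA links a CA cert into certsDir under the <subject-hash>.0
    // name that OpenSSL uses for lookup, mirroring the log's openssl+ln steps.
    func installCA(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }
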
	I0829 18:56:19.596492   18990 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:56:19.600908   18990 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:56:19.600959   18990 kubeadm.go:392] StartCluster: {Name:addons-344587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-344587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:56:19.601045   18990 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 18:56:19.601093   18990 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 18:56:19.637569   18990 cri.go:89] found id: ""
	I0829 18:56:19.637643   18990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:56:19.647689   18990 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:56:19.657011   18990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:56:19.666328   18990 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:56:19.666343   18990 kubeadm.go:157] found existing configuration files:
	
	I0829 18:56:19.666376   18990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:56:19.675716   18990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:56:19.675775   18990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:56:19.685386   18990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:56:19.694416   18990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:56:19.694471   18990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:56:19.703922   18990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:56:19.712826   18990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:56:19.712873   18990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:56:19.722059   18990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:56:19.731001   18990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:56:19.731050   18990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 18:56:19.740296   18990 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 18:56:19.794947   18990 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:56:19.795099   18990 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:56:19.898282   18990 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:56:19.898409   18990 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:56:19.898526   18990 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:56:19.907493   18990 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:56:19.997177   18990 out.go:235]   - Generating certificates and keys ...
	I0829 18:56:19.997273   18990 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:56:19.997359   18990 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:56:19.997433   18990 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:56:20.334115   18990 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:56:20.488051   18990 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:56:20.567263   18990 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:56:20.715089   18990 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:56:20.715281   18990 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-344587 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0829 18:56:21.029598   18990 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:56:21.029765   18990 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-344587 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0829 18:56:21.106114   18990 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:56:21.317964   18990 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:56:21.407628   18990 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:56:21.407696   18990 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:56:21.629562   18990 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:56:21.754916   18990 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:56:21.931143   18990 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:56:22.124355   18990 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:56:22.279253   18990 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:56:22.279642   18990 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:56:22.282088   18990 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:56:22.284191   18990 out.go:235]   - Booting up control plane ...
	I0829 18:56:22.284310   18990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:56:22.284403   18990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:56:22.284482   18990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:56:22.304603   18990 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:56:22.312804   18990 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:56:22.312862   18990 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:56:22.435203   18990 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:56:22.435353   18990 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:56:22.936484   18990 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.995362ms
	I0829 18:56:22.936601   18990 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:56:27.436410   18990 kubeadm.go:310] [api-check] The API server is healthy after 4.501398688s
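
Both health probes named above are plain HTTP endpoints and can be exercised by hand from inside the VM. The kubelet URL is the one kubeadm prints; the API-server check below is an assumption based on the standard /livez path and the configured port 8443:

	curl -sf http://127.0.0.1:10248/healthz     # kubelet health, as polled by kubeadm
	curl -skf https://127.0.0.1:8443/livez      # API server; -k since the serving cert is cluster-signed
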
	I0829 18:56:27.454666   18990 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:56:27.477429   18990 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:56:27.508526   18990 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:56:27.508785   18990 kubeadm.go:310] [mark-control-plane] Marking the node addons-344587 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:56:27.520541   18990 kubeadm.go:310] [bootstrap-token] Using token: q9x0a1.3m9323w9pql012fx
	I0829 18:56:27.521864   18990 out.go:235]   - Configuring RBAC rules ...
	I0829 18:56:27.521972   18990 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:56:27.526125   18990 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:56:27.535702   18990 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:56:27.539676   18990 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:56:27.542568   18990 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:56:27.548387   18990 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:56:27.844139   18990 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:56:28.295458   18990 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:56:28.840023   18990 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:56:28.840956   18990 kubeadm.go:310] 
	I0829 18:56:28.841053   18990 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:56:28.841063   18990 kubeadm.go:310] 
	I0829 18:56:28.841160   18990 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:56:28.841186   18990 kubeadm.go:310] 
	I0829 18:56:28.841234   18990 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:56:28.841322   18990 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:56:28.841395   18990 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:56:28.841404   18990 kubeadm.go:310] 
	I0829 18:56:28.841484   18990 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:56:28.841493   18990 kubeadm.go:310] 
	I0829 18:56:28.841553   18990 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:56:28.841567   18990 kubeadm.go:310] 
	I0829 18:56:28.841651   18990 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:56:28.841761   18990 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:56:28.841862   18990 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:56:28.841873   18990 kubeadm.go:310] 
	I0829 18:56:28.841975   18990 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:56:28.842087   18990 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:56:28.842100   18990 kubeadm.go:310] 
	I0829 18:56:28.842176   18990 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q9x0a1.3m9323w9pql012fx \
	I0829 18:56:28.842267   18990 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 18:56:28.842293   18990 kubeadm.go:310] 	--control-plane 
	I0829 18:56:28.842300   18990 kubeadm.go:310] 
	I0829 18:56:28.842391   18990 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:56:28.842407   18990 kubeadm.go:310] 
	I0829 18:56:28.842491   18990 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q9x0a1.3m9323w9pql012fx \
	I0829 18:56:28.842651   18990 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
	I0829 18:56:28.843797   18990 kubeadm.go:310] W0829 18:56:19.773325     811 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:56:28.844133   18990 kubeadm.go:310] W0829 18:56:19.774616     811 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:56:28.844272   18990 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
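
The two v1beta3 warnings above are advisory: kubeadm names its own migration helper in the message. A minimal invocation of that helper, with an illustrative output path (only the input path appears in this run):

	kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm-migrated.yaml

The Service-Kubelet warning is likewise benign here: `systemctl enable kubelet.service` would only make the unit start on boot, which minikube's own provisioning handles.
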
	I0829 18:56:28.844299   18990 cni.go:84] Creating CNI manager for ""
	I0829 18:56:28.844312   18990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:56:28.846071   18990 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 18:56:28.847426   18990 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 18:56:28.857688   18990 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
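
The 496 bytes written to /etc/cni/net.d/1-k8s.conflist are the bridge CNI config announced a few lines up. A sketch of what such a conflist typically contains, written the same way — field values are illustrative, not the exact file this run generated:

	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
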
	I0829 18:56:28.878902   18990 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:56:28.878958   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:28.878992   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-344587 minikube.k8s.io/updated_at=2024_08_29T18_56_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=addons-344587 minikube.k8s.io/primary=true
	I0829 18:56:28.907190   18990 ops.go:34] apiserver oom_adj: -16
	I0829 18:56:29.042348   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:29.543055   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:30.042999   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:30.542653   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:31.042960   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:31.542779   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:32.042560   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:32.543114   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:33.043338   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:33.542400   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
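
The ten `kubectl get sa default` runs above are a fixed-interval poll: minikube retries roughly every 500ms until the controller-manager has created the `default` ServiceAccount, which gates the elevateKubeSystemPrivileges step reported next. The equivalent shell idiom would be approximately:

	until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # matches the ~500ms spacing of the timestamps above
	done
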
	I0829 18:56:33.657610   18990 kubeadm.go:1113] duration metric: took 4.778691649s to wait for elevateKubeSystemPrivileges
	I0829 18:56:33.657651   18990 kubeadm.go:394] duration metric: took 14.056694589s to StartCluster
	I0829 18:56:33.657673   18990 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:33.657802   18990 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 18:56:33.658294   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:33.658498   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:56:33.658563   18990 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:56:33.658614   18990 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
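
The toEnable map is the per-profile addon matrix for this test; every true entry produces one of the "Setting addon ..." passes that follow. Driven from the CLI rather than the test harness, the same toggles would look like:

	minikube -p addons-344587 addons enable registry
	minikube -p addons-344587 addons list   # shows the enabled/disabled state per addon
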
	I0829 18:56:33.658712   18990 addons.go:69] Setting yakd=true in profile "addons-344587"
	I0829 18:56:33.658734   18990 config.go:182] Loaded profile config "addons-344587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:56:33.658743   18990 addons.go:234] Setting addon yakd=true in "addons-344587"
	I0829 18:56:33.658752   18990 addons.go:69] Setting helm-tiller=true in profile "addons-344587"
	I0829 18:56:33.658761   18990 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-344587"
	I0829 18:56:33.658774   18990 addons.go:69] Setting registry=true in profile "addons-344587"
	I0829 18:56:33.658781   18990 addons.go:69] Setting gcp-auth=true in profile "addons-344587"
	I0829 18:56:33.658782   18990 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-344587"
	I0829 18:56:33.658779   18990 addons.go:69] Setting cloud-spanner=true in profile "addons-344587"
	I0829 18:56:33.658790   18990 addons.go:69] Setting volumesnapshots=true in profile "addons-344587"
	I0829 18:56:33.658799   18990 mustload.go:65] Loading cluster: addons-344587
	I0829 18:56:33.658800   18990 addons.go:234] Setting addon registry=true in "addons-344587"
	I0829 18:56:33.658807   18990 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-344587"
	I0829 18:56:33.658811   18990 addons.go:234] Setting addon cloud-spanner=true in "addons-344587"
	I0829 18:56:33.658813   18990 addons.go:234] Setting addon volumesnapshots=true in "addons-344587"
	I0829 18:56:33.658831   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658833   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658834   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658837   18990 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-344587"
	I0829 18:56:33.658846   18990 addons.go:69] Setting ingress=true in profile "addons-344587"
	I0829 18:56:33.658863   18990 addons.go:234] Setting addon ingress=true in "addons-344587"
	I0829 18:56:33.658865   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658889   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658925   18990 config.go:182] Loaded profile config "addons-344587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:56:33.658836   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658783   18990 addons.go:69] Setting volcano=true in profile "addons-344587"
	I0829 18:56:33.659252   18990 addons.go:234] Setting addon volcano=true in "addons-344587"
	I0829 18:56:33.659252   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.659267   18990 addons.go:69] Setting storage-provisioner=true in profile "addons-344587"
	I0829 18:56:33.659273   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.659282   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659289   18990 addons.go:234] Setting addon storage-provisioner=true in "addons-344587"
	I0829 18:56:33.659291   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.659309   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.659322   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.659337   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659368   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.659400   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659432   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.659479   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659549   18990 addons.go:234] Setting addon helm-tiller=true in "addons-344587"
	I0829 18:56:33.658761   18990 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-344587"
	I0829 18:56:33.659610   18990 addons.go:69] Setting inspektor-gadget=true in profile "addons-344587"
	I0829 18:56:33.659640   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.659688   18990 addons.go:234] Setting addon inspektor-gadget=true in "addons-344587"
	I0829 18:56:33.659310   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659869   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.660003   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.660033   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659615   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.660105   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.660245   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.660276   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659252   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.660606   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659251   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.661085   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.658772   18990 addons.go:69] Setting default-storageclass=true in profile "addons-344587"
	I0829 18:56:33.670709   18990 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-344587"
	I0829 18:56:33.658775   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.659589   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.670898   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659619   18990 addons.go:69] Setting ingress-dns=true in profile "addons-344587"
	I0829 18:56:33.671005   18990 addons.go:234] Setting addon ingress-dns=true in "addons-344587"
	I0829 18:56:33.659626   18990 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-344587"
	I0829 18:56:33.671326   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.671369   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.671373   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.671404   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.671440   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659631   18990 addons.go:69] Setting metrics-server=true in profile "addons-344587"
	I0829 18:56:33.671513   18990 addons.go:234] Setting addon metrics-server=true in "addons-344587"
	I0829 18:56:33.671545   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.666650   18990 out.go:177] * Verifying Kubernetes components...
	I0829 18:56:33.671400   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.671874   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.671911   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.671053   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.673380   18990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:56:33.680875   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I0829 18:56:33.681397   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.682001   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.682020   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.682079   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43679
	I0829 18:56:33.682558   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.682826   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33189
	I0829 18:56:33.683408   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.683427   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.683497   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.683572   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.684019   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.684047   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.684281   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.684297   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.684418   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.684576   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42785
	I0829 18:56:33.684658   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.685239   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.686589   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.686991   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.687043   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.687652   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.687695   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.693010   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41217
	I0829 18:56:33.693441   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.693863   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.693883   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.694222   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.695000   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.695017   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.695025   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.695047   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.695732   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.696286   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.696303   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.696680   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.697196   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.697227   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.708509   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35881
	I0829 18:56:33.709301   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.710007   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.710025   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.710503   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.711094   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.711134   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.712697   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38887
	I0829 18:56:33.713230   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.713833   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.713849   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.714249   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.714858   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.714894   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.717065   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
	I0829 18:56:33.718427   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40143
	I0829 18:56:33.719007   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.719015   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.719564   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.719572   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.719583   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.719587   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.719642   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I0829 18:56:33.719977   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.720035   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.720529   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.720541   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.720576   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.720813   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41587
	I0829 18:56:33.721306   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.721600   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.721641   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.721789   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.721802   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.722188   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.722210   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.722517   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.722694   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.722698   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.722766   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40775
	I0829 18:56:33.722933   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.723536   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.723992   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.724014   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.724373   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.724931   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.724969   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.727773   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42855
	I0829 18:56:33.728271   18990 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-344587"
	I0829 18:56:33.728315   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.728668   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.728713   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.729106   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.730100   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.730122   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.730424   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.731053   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.731091   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.733084   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34629
	I0829 18:56:33.733526   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.733981   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.734008   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.734333   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.734516   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.737054   18990 addons.go:234] Setting addon default-storageclass=true in "addons-344587"
	I0829 18:56:33.737093   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.737442   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.737488   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.738915   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0829 18:56:33.739299   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.739833   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.739858   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.740211   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.740415   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.740698   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44413
	I0829 18:56:33.742219   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.742936   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.743473   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.743489   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.743850   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.744037   18990 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0829 18:56:33.744401   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.744444   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.746700   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39535
	I0829 18:56:33.747098   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.747483   18990 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:56:33.747541   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.747555   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.747849   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.748004   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.749857   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.750156   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:33.750162   18990 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:56:33.750176   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:33.750401   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:33.750415   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:33.750424   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:33.750431   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:33.751640   18990 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:56:33.751661   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0829 18:56:33.751678   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.753152   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41841
	I0829 18:56:33.753646   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.754124   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.754140   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.754697   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:33.754792   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.754874   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:33.754882   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	W0829 18:56:33.754961   18990 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0829 18:56:33.755222   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.756506   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.757125   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.757153   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.757371   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.757578   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.757800   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.758075   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.758388   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0829 18:56:33.758562   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.758742   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.759174   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.759196   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.759539   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.759780   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.760361   18990 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0829 18:56:33.761354   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.761672   18990 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0829 18:56:33.761690   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0829 18:56:33.761708   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.762934   18990 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 18:56:33.764151   18990 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 18:56:33.764172   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 18:56:33.764189   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.765442   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.766030   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.766066   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.766227   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.766373   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.766477   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.766582   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.769791   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.770164   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0829 18:56:33.770308   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.770326   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.770497   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.770711   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.770718   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.770896   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.770957   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44035
	I0829 18:56:33.771212   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.771224   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.771288   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.771479   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.772071   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.772254   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.772754   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
	I0829 18:56:33.773389   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.773405   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.773886   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.774057   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.774157   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.774715   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.774741   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.775380   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.775427   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.775749   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45059
	I0829 18:56:33.775868   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.776433   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.776878   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 18:56:33.777144   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.777191   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.777462   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.777480   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.777870   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.778380   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.779809   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 18:56:33.780037   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.780244   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35549
	I0829 18:56:33.780742   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.781222   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.781248   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.781369   18990 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 18:56:33.781563   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.781761   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.782643   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 18:56:33.783896   18990 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 18:56:33.783960   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 18:56:33.784513   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39187
	I0829 18:56:33.784537   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44109
	I0829 18:56:33.784631   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.785647   18990 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:56:33.785668   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 18:56:33.785684   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.786509   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35783
	I0829 18:56:33.786516   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.786797   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 18:56:33.786858   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.787121   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.787314   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.787336   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.787455   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.787473   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.787695   18990 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 18:56:33.787783   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.787912   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.787932   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.788077   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.788137   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.788311   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.788468   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.788893   18990 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:56:33.788909   18990 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 18:56:33.788928   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.788930   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.788964   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.789455   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 18:56:33.790036   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.790306   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.791044   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.791435   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.791870   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.791965   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.792018   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.792283   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.792425   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 18:56:33.792449   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 18:56:33.792452   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.794946   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.794990   18990 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 18:56:33.795455   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:56:33.795469   18990 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 18:56:33.795488   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.796133   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 18:56:33.796965   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0829 18:56:33.797111   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.797138   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.797278   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:56:33.797280   18990 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 18:56:33.797524   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.797291   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 18:56:33.797627   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.798327   18990 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:56:33.798342   18990 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 18:56:33.798363   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.798941   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.798951   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.799251   18990 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:56:33.799263   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 18:56:33.799281   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.799598   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.800343   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.800449   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.800465   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.801716   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.801738   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36645
	I0829 18:56:33.801793   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I0829 18:56:33.802890   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.803204   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.803825   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.803851   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.805182   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.805201   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.805215   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39841
	I0829 18:56:33.805244   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.805256   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.805307   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.805329   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.805337   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.805789   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.805807   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.805811   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.805825   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.805840   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.805855   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.805882   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.806005   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.806215   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.806223   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.806256   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.806267   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.806359   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.806379   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.806397   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.806440   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.806668   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.806692   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.806708   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.806860   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.806874   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.806956   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.807013   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.807169   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.807174   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.807190   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.807219   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.807331   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.807354   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.807446   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.807495   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.807556   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.807727   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.808112   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.808136   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.809511   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.810029   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.811278   18990 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 18:56:33.812241   18990 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:56:33.813129   18990 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:56:33.813157   18990 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 18:56:33.813176   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.813977   18990 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:56:33.814001   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:56:33.814020   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.816395   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.816894   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.816918   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.817062   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.817195   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I0829 18:56:33.817338   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.817486   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.817534   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.817610   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.817837   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.818145   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.818173   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.818354   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.818387   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.818455   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.818655   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.818809   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.818859   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.818932   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.819357   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.820909   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	W0829 18:56:33.822691   18990 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0829 18:56:33.822718   18990 retry.go:31] will retry after 259.989848ms: ssh: handshake failed: EOF
	I0829 18:56:33.823215   18990 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0829 18:56:33.824471   18990 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:56:33.824488   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0829 18:56:33.824506   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.827030   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37461
	I0829 18:56:33.827117   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0829 18:56:33.827401   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.827435   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.827587   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.827788   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.827812   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.827906   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.827920   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.828022   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.828075   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.828084   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.828178   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.828317   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.828370   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.828385   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.828540   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.828549   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.828724   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.830078   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.830313   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.830515   18990 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:56:33.830530   18990 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:56:33.830569   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.831752   18990 out.go:177]   - Using image docker.io/busybox:stable
	I0829 18:56:33.832911   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.833222   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.833244   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.833366   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.833520   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.833767   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.833891   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.834507   18990 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	W0829 18:56:33.835031   18990 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36686->192.168.39.172:22: read: connection reset by peer
	I0829 18:56:33.835058   18990 retry.go:31] will retry after 162.890781ms: ssh: handshake failed: read tcp 192.168.39.1:36686->192.168.39.172:22: read: connection reset by peer
	I0829 18:56:33.835835   18990 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:56:33.835851   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 18:56:33.835865   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.838727   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.839112   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.839133   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.839296   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.839446   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.839562   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.839657   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	W0829 18:56:33.848450   18990 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36700->192.168.39.172:22: read: connection reset by peer
	I0829 18:56:33.848470   18990 retry.go:31] will retry after 306.282122ms: ssh: handshake failed: read tcp 192.168.39.1:36700->192.168.39.172:22: read: connection reset by peer
	W0829 18:56:33.999144   18990 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36716->192.168.39.172:22: read: connection reset by peer
	I0829 18:56:33.999169   18990 retry.go:31] will retry after 424.61405ms: ssh: handshake failed: read tcp 192.168.39.1:36716->192.168.39.172:22: read: connection reset by peer
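The handshake failures above are expected this early in provisioning: many addon installers dial the guest's sshd concurrently while it is still coming up, so minikube logs the failure and retries after a short randomized delay (the retry.go lines). A minimal shell sketch of the same keep-retrying idea, reusing the key path, user, and IP from the log (illustrative only, not minikube's actual sshutil code):

	# retry the SSH handshake until the guest's sshd accepts connections
	until ssh -o ConnectTimeout=5 \
	    -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa \
	    docker@192.168.39.172 true 2>/dev/null; do
	  sleep 0.3   # minikube uses randomized backoffs in this same ballpark
	done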
	I0829 18:56:34.150588   18990 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:56:34.150609   18990 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 18:56:34.160016   18990 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:56:34.160035   18990 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 18:56:34.209643   18990 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0829 18:56:34.209668   18990 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0829 18:56:34.212743   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:56:34.212768   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 18:56:34.219438   18990 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:56:34.219459   18990 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 18:56:34.225374   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:56:34.251243   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:56:34.341500   18990 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:56:34.341523   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 18:56:34.345542   18990 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:56:34.345561   18990 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 18:56:34.360911   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:56:34.367165   18990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:56:34.367441   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
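This one-liner edits CoreDNS in place: it dumps the coredns ConfigMap, uses sed to splice a hosts block ahead of the forward directive (and a log directive after errors), then feeds the result back through kubectl replace. Reconstructed from the sed expressions, the block injected into the Corefile is:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}

which lets pods resolve host.minikube.internal to the host-side gateway address; the "host record injected" line at 18:56:42 below confirms it took effect.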
	I0829 18:56:34.408618   18990 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:56:34.408647   18990 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0829 18:56:34.412998   18990 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:56:34.413014   18990 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 18:56:34.414485   18990 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:56:34.414505   18990 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 18:56:34.416656   18990 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 18:56:34.416674   18990 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 18:56:34.421178   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 18:56:34.441682   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:56:34.441714   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 18:56:34.537974   18990 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:56:34.537995   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 18:56:34.574614   18990 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:56:34.574648   18990 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 18:56:34.590779   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:56:34.613418   18990 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:56:34.613450   18990 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 18:56:34.621890   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:56:34.650966   18990 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:56:34.650990   18990 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 18:56:34.656296   18990 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:56:34.656330   18990 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 18:56:34.662014   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:56:34.662031   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 18:56:34.755855   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:56:34.777852   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:56:34.841238   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:56:34.841264   18990 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 18:56:34.860870   18990 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:56:34.860892   18990 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 18:56:34.865493   18990 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:56:34.865518   18990 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 18:56:34.891256   18990 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:56:34.891275   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 18:56:34.918114   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:56:34.918134   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 18:56:35.013931   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:56:35.035330   18990 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:56:35.035353   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 18:56:35.037518   18990 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:56:35.037536   18990 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 18:56:35.083605   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:56:35.084671   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:56:35.084696   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 18:56:35.109523   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:56:35.214128   18990 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:56:35.214165   18990 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 18:56:35.300619   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:56:35.300639   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 18:56:35.319430   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:56:35.382473   18990 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:56:35.382493   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 18:56:35.500157   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:56:35.500185   18990 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 18:56:35.637217   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:56:35.652837   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.401553749s)
	I0829 18:56:35.652892   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:35.652903   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:35.652993   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.427586081s)
	I0829 18:56:35.653037   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:35.653049   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:35.653235   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:35.653247   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:35.653256   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:35.653263   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:35.653306   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:35.653327   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:35.653336   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:35.653344   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:35.653512   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:35.653530   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:35.653545   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:35.653571   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:35.653594   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:35.653607   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:35.707655   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:35.707678   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:35.708069   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:35.708091   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:35.708109   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:35.793211   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:56:35.793237   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 18:56:35.962496   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:56:35.962518   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 18:56:36.268015   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:56:36.268034   18990 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 18:56:36.448977   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:56:40.901860   18990 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 18:56:40.901896   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:40.905410   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:40.905896   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:40.905938   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:40.906115   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:40.906299   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:40.906451   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:40.906626   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:41.427728   18990 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 18:56:41.620090   18990 addons.go:234] Setting addon gcp-auth=true in "addons-344587"
	I0829 18:56:41.620153   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:41.620485   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:41.620517   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:41.636187   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0829 18:56:41.636532   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:41.637102   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:41.637131   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:41.637428   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:41.638003   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:41.638024   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:41.669951   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44101
	I0829 18:56:41.670306   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:41.670883   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:41.670910   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:41.671238   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:41.671489   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:41.673184   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:41.673393   18990 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 18:56:41.673419   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:41.676410   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:41.676882   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:41.676905   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:41.677058   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:41.677211   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:41.677367   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:41.677483   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:42.669306   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.308358481s)
	I0829 18:56:42.669349   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669347   18990 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.301873517s)
	I0829 18:56:42.669360   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669375   18990 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0829 18:56:42.669424   18990 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.302234847s)
	I0829 18:56:42.669452   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.248249274s)
	I0829 18:56:42.669475   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669484   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669555   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.078738309s)
	I0829 18:56:42.669594   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669607   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669707   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.047793159s)
	I0829 18:56:42.669725   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669732   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669822   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.913945077s)
	I0829 18:56:42.669839   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669846   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669914   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.892029945s)
	I0829 18:56:42.669930   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669939   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669947   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.669962   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.669971   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669978   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670011   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.656061298s)
	I0829 18:56:42.670027   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670034   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670035   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.670044   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.670052   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670058   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669916   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.670134   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.586501236s)
	I0829 18:56:42.670151   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670158   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670232   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.560685281s)
	I0829 18:56:42.670257   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670266   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670396   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.35093677s)
	W0829 18:56:42.670434   18990 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:56:42.670454   18990 retry.go:31] will retry after 369.342261ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
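The failure and retry above are a CRD-establishment race, not a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl batch as the CRDs that define it, and the API server has not yet established snapshot.storage.k8s.io/v1 in discovery when the class is submitted, hence "resource mapping not found ... ensure CRDs are installed first". minikube simply waits ~370ms and reapplies. When scripting this by hand, a common way to avoid the race (assuming a reasonably recent kubectl) is to apply the CRDs first and wait for them to be established before applying the class:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml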
	I0829 18:56:42.670554   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.670555   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.033295535s)
	I0829 18:56:42.670551   18990 node_ready.go:35] waiting up to 6m0s for node "addons-344587" to be "Ready" ...
	I0829 18:56:42.670566   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.670575   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670576   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670583   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670595   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670666   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.670668   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.670689   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.670690   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.670697   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.670698   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.670706   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670706   18990 addons.go:475] Verifying addon ingress=true in "addons-344587"
	I0829 18:56:42.670713   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.671349   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.671369   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.671394   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.671401   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.671409   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.671418   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.671484   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.671504   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.671511   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.671518   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.671524   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.671559   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.671575   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.671582   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.671589   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.671595   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.671628   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.671644   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.671692   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.673185   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.673214   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.673219   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.673225   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.673232   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.674001   18990 out.go:177] * Verifying ingress addon...
	I0829 18:56:42.674369   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.674390   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.674394   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.674400   18990 addons.go:475] Verifying addon registry=true in "addons-344587"
	I0829 18:56:42.674749   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.674780   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.674788   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675112   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.675155   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675162   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675170   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.675177   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.675236   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.675254   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675260   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675267   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.675273   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.675309   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.675327   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675334   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675407   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.675498   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675505   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675737   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675746   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675813   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675820   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.676275   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.676288   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.676537   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.676546   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.676554   18990 addons.go:475] Verifying addon metrics-server=true in "addons-344587"
	I0829 18:56:42.677460   18990 out.go:177] * Verifying registry addon...
	I0829 18:56:42.678505   18990 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0829 18:56:42.678573   18990 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-344587 service yakd-dashboard -n yakd-dashboard
	
	I0829 18:56:42.679342   18990 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 18:56:42.721945   18990 node_ready.go:49] node "addons-344587" has status "Ready":"True"
	I0829 18:56:42.721968   18990 node_ready.go:38] duration metric: took 51.397004ms for node "addons-344587" to be "Ready" ...
	I0829 18:56:42.721979   18990 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:56:42.738146   18990 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:56:42.738171   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:42.738232   18990 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0829 18:56:42.738245   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:42.763259   18990 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:42.780817   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.780844   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.781106   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.781123   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:43.040019   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
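
The ssh_runner.go:195 lines record minikube shelling out to the kubectl binary bundled in the VM, pointing it at the in-VM kubeconfig and passing every addon manifest as a -f flag in a single invocation. A rough local stand-in for that command construction (in minikube the command actually runs over SSH inside the guest; applyManifests is an illustrative name):

	package applysketch

	import (
		"os"
		"os/exec"
	)

	// applyManifests runs `kubectl apply --force -f m1 -f m2 ...` with KUBECONFIG set.
	func applyManifests(kubectl, kubeconfig string, manifests []string) ([]byte, error) {
		args := []string{"apply", "--force"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		return cmd.CombinedOutput()
	}
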
	I0829 18:56:43.181593   18990 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-344587" context rescaled to 1 replicas
	I0829 18:56:43.200300   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:43.202392   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:43.854515   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:43.855345   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:44.201644   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:44.202443   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:44.387997   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.938966875s)
	I0829 18:56:44.388024   18990 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.714610192s)
	I0829 18:56:44.388060   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:44.388076   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:44.388398   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:44.388399   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:44.388423   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:44.388436   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:44.388446   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:44.388660   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:44.388693   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:44.388708   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:44.388728   18990 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-344587"
	I0829 18:56:44.390289   18990 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 18:56:44.390333   18990 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:56:44.391916   18990 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 18:56:44.392479   18990 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 18:56:44.393017   18990 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:56:44.393047   18990 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 18:56:44.437296   18990 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:56:44.437316   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:44.516363   18990 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:56:44.516386   18990 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 18:56:44.625475   18990 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:56:44.625495   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
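
The addons.go:431/ssh_runner.go:362 pairs show the gcp-auth install pattern: each manifest is first staged into /etc/kubernetes/addons inside the VM (either copied from a file or, as in the "scp memory -->" line, rendered in memory and written out), and only then applied in the single kubectl run that follows. A hedged local stand-in for the in-memory case (stageManifest is illustrative; the real copy goes over SSH):

	package stagesketch

	import "os"

	// stageManifest writes a manifest rendered in memory to the addons directory,
	// standing in for the "ssh_runner: scp memory --> /etc/kubernetes/addons/..." step.
	func stageManifest(path string, rendered []byte) error {
		return os.WriteFile(path, rendered, 0o644)
	}
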
	I0829 18:56:44.694404   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:56:44.710971   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:44.714864   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:44.801891   18990 pod_ready.go:103] pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:44.896516   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:45.184541   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:45.185775   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:45.398848   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:45.581703   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.541635237s)
	I0829 18:56:45.581752   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:45.581767   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:45.582058   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:45.582084   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:45.582095   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:45.582102   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:45.582151   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:45.582390   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:45.582429   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:45.582481   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:45.685365   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:45.685844   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:45.900158   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:46.159769   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.465331058s)
	I0829 18:56:46.159816   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:46.159836   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:46.160177   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:46.160214   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:46.160224   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:46.160233   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:46.160240   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:46.160479   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:46.160495   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:46.160504   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:46.161466   18990 addons.go:475] Verifying addon gcp-auth=true in "addons-344587"
	I0829 18:56:46.163120   18990 out.go:177] * Verifying gcp-auth addon...
	I0829 18:56:46.165519   18990 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0829 18:56:46.185169   18990 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:56:46.185187   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:46.226445   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:46.226499   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:46.398205   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:46.671641   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:46.687495   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:46.688298   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:46.899177   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:47.168846   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:47.269193   18990 pod_ready.go:103] pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:47.270024   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:47.270716   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:47.398276   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:47.669190   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:47.682897   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:47.683324   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:47.897509   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:48.169032   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:48.184274   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:48.184383   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:48.397220   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:48.669220   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:48.682380   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:48.683472   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:48.896581   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:49.175691   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:49.183464   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:49.184739   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:49.635636   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:49.636948   18990 pod_ready.go:103] pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:49.668411   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:49.682494   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:49.683366   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:49.896900   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:50.169141   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:50.182913   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:50.183797   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:50.397440   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:50.669246   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:50.682992   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:50.683296   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:50.766028   18990 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-fljpw" not found
	I0829 18:56:50.766050   18990 pod_ready.go:82] duration metric: took 8.00276735s for pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace to be "Ready" ...
	E0829 18:56:50.766059   18990 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-fljpw" not found
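
The "not found" error here is expected rather than a failure: the coredns deployment was rescaled to 1 replica at 18:56:43, so pod coredns-6f6b679f8f-fljpw was deleted while the waiter was watching it, and pod_ready treats a vanished pod as skippable and moves on to the surviving replica. A sketch of that skip-versus-retry distinction (podGone is an illustrative name, not minikube's actual helper):

	package readysketch

	import (
		"context"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podGone reports whether the pod has been deleted outright (skip it),
	// as opposed to a transient lookup failure (return the error and retry).
	func podGone(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
		_, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	}
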
	I0829 18:56:50.766065   18990 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t9nhw" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.771805   18990 pod_ready.go:93] pod "coredns-6f6b679f8f-t9nhw" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:50.771843   18990 pod_ready.go:82] duration metric: took 5.770841ms for pod "coredns-6f6b679f8f-t9nhw" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.771858   18990 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.778901   18990 pod_ready.go:93] pod "etcd-addons-344587" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:50.778924   18990 pod_ready.go:82] duration metric: took 7.055033ms for pod "etcd-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.778933   18990 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.787991   18990 pod_ready.go:93] pod "kube-apiserver-addons-344587" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:50.788017   18990 pod_ready.go:82] duration metric: took 9.072661ms for pod "kube-apiserver-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.788030   18990 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.795671   18990 pod_ready.go:93] pod "kube-controller-manager-addons-344587" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:50.795689   18990 pod_ready.go:82] duration metric: took 7.649451ms for pod "kube-controller-manager-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.795700   18990 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lgcxw" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.898617   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:50.968239   18990 pod_ready.go:93] pod "kube-proxy-lgcxw" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:50.968267   18990 pod_ready.go:82] duration metric: took 172.559179ms for pod "kube-proxy-lgcxw" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.968280   18990 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:51.170579   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:51.183357   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:51.183460   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:51.367505   18990 pod_ready.go:93] pod "kube-scheduler-addons-344587" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:51.367538   18990 pod_ready.go:82] duration metric: took 399.24913ms for pod "kube-scheduler-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:51.367550   18990 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:51.397192   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:51.761866   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:51.761991   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:51.762363   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:51.896480   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:52.169660   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:52.186439   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:52.186848   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:52.397676   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:52.669400   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:52.682397   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:52.682617   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:52.897411   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:53.169721   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:53.184323   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:53.184737   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:53.374149   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:53.397202   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:53.668898   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:53.682445   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:53.682704   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:53.897088   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:54.172913   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:54.182087   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:54.184118   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:54.397034   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:54.669527   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:54.683280   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:54.683838   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:54.897205   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:55.169589   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:55.183889   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:55.184209   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:55.375125   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:55.397322   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:55.668681   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:55.682670   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:55.683015   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:56.069249   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:56.168996   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:56.183093   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:56.183177   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:56.397533   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:56.670051   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:56.682495   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:56.683368   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:56.897459   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:57.169182   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:57.183116   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:57.184347   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:57.376144   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:57.397074   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:57.670186   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:57.683268   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:57.684614   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:57.897006   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:58.170523   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:58.183070   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:58.183215   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:58.396251   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:58.670888   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:58.779673   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:58.780745   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:58.902984   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:59.172390   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:59.183921   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:59.186092   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:59.396707   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:59.669214   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:59.685853   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:59.685887   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:59.873857   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:59.896170   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:00.169557   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:00.183863   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:00.184197   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:00.396380   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:00.669459   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:00.682973   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:00.684561   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:00.897013   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:01.169578   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:01.183325   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:01.183954   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:01.397283   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:01.669328   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:01.683006   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:01.683150   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:01.896786   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:02.169870   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:02.183250   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:02.184348   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:02.395075   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:02.397237   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:02.669855   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:02.686242   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:02.688366   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:02.898895   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:03.169561   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:03.184647   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:03.185289   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:03.397046   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:03.669112   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:03.682276   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:03.682683   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:03.897109   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:04.168963   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:04.182332   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:04.183973   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:04.396975   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:04.669614   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:04.683124   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:04.683347   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:04.873985   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:04.896943   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:05.169958   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:05.183528   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:05.187121   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:05.398577   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:05.669571   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:05.683191   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:05.683832   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:05.896841   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:06.169501   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:06.183520   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:06.184624   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:06.398361   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:06.668833   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:06.683988   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:06.684316   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:06.897127   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:07.169829   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:07.181726   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:07.183130   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:07.374378   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:07.397367   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:07.850766   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:07.851123   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:07.851346   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:07.897642   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:08.169471   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:08.183648   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:08.184288   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:08.396754   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:08.668753   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:08.683569   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:08.683850   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:08.896776   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:09.169838   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:09.184520   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:09.184757   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:09.561873   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:09.567669   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:09.669578   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:09.683274   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:09.683280   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:09.897002   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:10.169044   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:10.181987   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:10.182399   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:10.397928   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:10.669457   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:10.683541   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:10.683807   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:10.896493   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:11.168933   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:11.183346   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:11.184965   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:11.397290   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:11.669440   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:11.682915   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:11.684471   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:11.873117   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:11.896880   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:12.462502   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:12.462504   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:12.462546   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:12.462815   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:12.669512   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:12.682339   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:12.682757   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:12.896921   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:13.169375   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:13.182937   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:13.183252   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:13.396471   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:13.669340   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:13.682743   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:13.683149   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:13.897633   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:14.169741   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:14.183149   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:14.183611   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:14.373233   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:14.397241   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:14.669880   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:14.684196   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:14.685153   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:14.897486   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:15.168735   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:15.182856   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:15.183768   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:15.396668   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:15.669109   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:15.682395   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:15.683471   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:15.896837   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:16.168703   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:16.183373   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:16.185053   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:16.665051   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:16.676846   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:16.766323   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:16.766619   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:16.766726   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:16.901939   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:17.172656   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:17.182759   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:17.182930   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:17.396937   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:17.670486   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:17.683357   18990 kapi.go:107] duration metric: took 35.004012687s to wait for kubernetes.io/minikube-addons=registry ...
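
The registry wait closes out here: kapi.go:107 logs the elapsed time (about 35s from the start of the watch at 18:56:42) the same way the other "duration metric" lines in this log do, by timestamping the start of the wait and printing the delta on success. A trivial sketch of that timing wrapper (timeWait is an illustrative name):

	package metricsketch

	import (
		"fmt"
		"time"
	)

	// timeWait runs wait and logs how long it took, mirroring the
	// "duration metric: took ... to wait for ..." lines.
	func timeWait(label string, wait func() error) error {
		start := time.Now()
		err := wait()
		fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), label)
		return err
	}
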
	I0829 18:57:17.683499   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:17.896856   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:18.169838   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:18.181982   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:18.398377   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:18.669201   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:18.682926   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:18.873544   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:18.897487   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:19.169391   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:19.182271   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:19.397321   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:19.671702   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:19.683428   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:19.896931   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:20.169472   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:20.184524   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:20.396986   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:20.669107   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:20.682955   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:20.874152   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:20.897581   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:21.169129   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:21.183119   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:21.397077   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:21.670237   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:21.682417   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:21.896731   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:22.169136   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:22.183327   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:22.399358   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:22.669025   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:22.682684   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:22.897209   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:23.168554   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:23.183317   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:23.376737   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:23.398862   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:23.669638   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:23.684290   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:23.896788   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:24.168867   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:24.182641   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:24.397360   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:24.669632   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:24.682814   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:24.896660   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:25.169124   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:25.182227   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:25.397300   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:25.669095   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:25.682571   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:25.877447   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:25.897875   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:26.171514   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:26.182786   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:26.399838   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:26.671735   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:26.683905   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:26.899821   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:27.169555   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:27.182826   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:27.397336   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:27.740885   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:27.742666   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:27.880815   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:27.897328   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:28.168851   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:28.182865   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:28.396062   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:28.669492   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:28.683095   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:28.897589   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:29.169448   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:29.183980   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:29.397702   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:29.670608   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:29.773227   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:29.897807   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:30.169552   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:30.182629   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:30.373870   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:30.396379   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:30.669796   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:30.683989   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:30.897362   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:31.174101   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:31.183759   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:31.396966   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:31.753369   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:31.770022   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:31.897431   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:32.169668   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:32.182893   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:32.374522   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:32.397648   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:32.670261   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:32.685779   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:32.901809   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:33.169762   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:33.184838   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:33.397320   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:33.669836   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:33.681530   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:33.896850   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:34.169041   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:34.182892   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:34.396541   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:34.669142   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:34.683175   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:34.878877   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:34.898182   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:35.169747   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:35.183483   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:35.398901   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:35.670294   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:35.685029   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:35.902939   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:36.171155   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:36.183010   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:36.398954   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:36.669195   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:36.682801   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:36.897585   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:37.168576   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:37.182987   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:37.374713   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:37.397360   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:37.668592   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:37.683096   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:37.896428   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:38.169266   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:38.183407   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:38.396482   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:38.670980   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:38.690197   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:38.896964   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:39.170158   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:39.183345   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:39.407490   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:39.669563   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:39.682556   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:39.874581   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:39.897965   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:40.169903   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:40.183693   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:40.397585   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:40.669944   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:40.698528   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:41.329496   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:41.330614   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:41.331037   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:41.398345   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:41.669869   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:41.682009   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:41.876524   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:41.900286   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:42.169633   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:42.183277   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:42.397178   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:42.669713   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:42.683223   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:42.897441   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:43.169982   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:43.182572   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:43.398170   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:43.670150   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:43.682336   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:43.896982   18990 kapi.go:107] duration metric: took 59.504499728s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0829 18:57:44.169788   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:44.181970   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:44.374399   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:44.670424   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:44.683646   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:45.169286   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:45.182897   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:45.669754   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:45.683182   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:46.170200   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:46.182590   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:46.669378   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:46.682597   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:46.873706   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:47.169378   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:47.183205   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:47.669917   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:47.681862   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:48.170226   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:48.182041   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:48.668676   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:48.682964   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:48.875193   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:49.179977   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:49.188747   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:49.669429   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:49.682463   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:50.169368   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:50.183100   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:50.669811   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:50.683376   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:51.169326   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:51.182850   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:51.373942   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:52.006081   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:52.006844   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:52.170628   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:52.181892   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:52.669274   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:52.682776   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:53.169297   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:53.183257   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:53.374184   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:53.670600   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:53.682938   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:54.170077   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:54.182248   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:54.670362   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:54.682906   18990 kapi.go:107] duration metric: took 1m12.004398431s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0829 18:57:55.191112   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:55.678304   18990 kapi.go:107] duration metric: took 1m9.512783124s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 18:57:55.680462   18990 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-344587 cluster.
	I0829 18:57:55.681796   18990 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 18:57:55.683065   18990 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0829 18:57:55.684301   18990 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, ingress-dns, inspektor-gadget, helm-tiller, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0829 18:57:55.685410   18990 addons.go:510] duration metric: took 1m22.026796458s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner ingress-dns inspektor-gadget helm-tiller storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
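For context on the long run of `kapi.go:96` lines above: minikube enables each addon and then polls that addon's pods by label selector until they leave `Pending`. The following is a minimal client-go sketch of that polling pattern, not minikube's actual implementation; the kubeconfig path and the `gcp-auth` selector are taken from the log above, the namespace and poll interval are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls pods matching selector in ns until every matching
// pod reports phase Running, or the timeout elapses. Illustrative only.
func waitForPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				ready = false
			}
		}
		if ready {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // the log suggests a similar sub-second poll interval
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

func main() {
	// Kubeconfig path as used by the "describe nodes" commands later in this log.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPodsRunning(cs, "kube-system",
		"kubernetes.io/minikube-addons=gcp-auth", 2*time.Minute); err != nil {
		panic(err)
	}
}
```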
	I0829 18:57:55.873642   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:57.873758   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:00.374030   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:02.410643   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:04.873737   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:07.374926   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:09.873926   18990 pod_ready.go:93] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"True"
	I0829 18:58:09.873948   18990 pod_ready.go:82] duration metric: took 1m18.506392284s for pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:09.873961   18990 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-z559z" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:09.879351   18990 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-z559z" in "kube-system" namespace has status "Ready":"True"
	I0829 18:58:09.879368   18990 pod_ready.go:82] duration metric: took 5.400164ms for pod "nvidia-device-plugin-daemonset-z559z" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:09.879384   18990 pod_ready.go:39] duration metric: took 1m27.157397179s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:58:09.879399   18990 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:58:09.879429   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:58:09.879478   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:58:09.930035   18990 cri.go:89] found id: "ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:09.930059   18990 cri.go:89] found id: ""
	I0829 18:58:09.930070   18990 logs.go:276] 1 containers: [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24]
	I0829 18:58:09.930131   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:09.934705   18990 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:58:09.934774   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:58:09.974110   18990 cri.go:89] found id: "3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:09.974133   18990 cri.go:89] found id: ""
	I0829 18:58:09.974142   18990 logs.go:276] 1 containers: [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459]
	I0829 18:58:09.974198   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:09.978660   18990 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:58:09.978721   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:58:10.017468   18990 cri.go:89] found id: "edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:10.017489   18990 cri.go:89] found id: ""
	I0829 18:58:10.017499   18990 logs.go:276] 1 containers: [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1]
	I0829 18:58:10.017546   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:10.022568   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:58:10.022633   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:58:10.066173   18990 cri.go:89] found id: "46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:10.066193   18990 cri.go:89] found id: ""
	I0829 18:58:10.066200   18990 logs.go:276] 1 containers: [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef]
	I0829 18:58:10.066254   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:10.071876   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:58:10.071927   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:58:10.113139   18990 cri.go:89] found id: "e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:10.113158   18990 cri.go:89] found id: ""
	I0829 18:58:10.113164   18990 logs.go:276] 1 containers: [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565]
	I0829 18:58:10.113210   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:10.117643   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:58:10.117707   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:58:10.173282   18990 cri.go:89] found id: "79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:10.173301   18990 cri.go:89] found id: ""
	I0829 18:58:10.173308   18990 logs.go:276] 1 containers: [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24]
	I0829 18:58:10.173350   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:10.177760   18990 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:58:10.177826   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:58:10.219010   18990 cri.go:89] found id: ""
	I0829 18:58:10.219040   18990 logs.go:276] 0 containers: []
	W0829 18:58:10.219050   18990 logs.go:278] No container was found matching "kindnet"
	I0829 18:58:10.219062   18990 logs.go:123] Gathering logs for kube-apiserver [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24] ...
	I0829 18:58:10.219078   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:10.277241   18990 logs.go:123] Gathering logs for kube-proxy [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565] ...
	I0829 18:58:10.277270   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:10.323859   18990 logs.go:123] Gathering logs for kube-controller-manager [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24] ...
	I0829 18:58:10.323886   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:10.385553   18990 logs.go:123] Gathering logs for container status ...
	I0829 18:58:10.385580   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:58:10.435083   18990 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:58:10.435110   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:58:11.402687   18990 logs.go:123] Gathering logs for kubelet ...
	I0829 18:58:11.402729   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 18:58:11.453024   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:11.453296   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:11.488836   18990 logs.go:123] Gathering logs for dmesg ...
	I0829 18:58:11.488870   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:58:11.504148   18990 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:58:11.504172   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:58:11.643790   18990 logs.go:123] Gathering logs for etcd [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459] ...
	I0829 18:58:11.643818   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:11.726389   18990 logs.go:123] Gathering logs for coredns [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1] ...
	I0829 18:58:11.726425   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:11.766070   18990 logs.go:123] Gathering logs for kube-scheduler [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef] ...
	I0829 18:58:11.766094   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:11.811796   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:11.811817   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 18:58:11.811865   18990 out.go:270] X Problems detected in kubelet:
	W0829 18:58:11.811879   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:11.811890   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:11.811902   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:11.811911   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
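Each "Gathering logs for ..." step in the pass above shells out over SSH and tails the last 400 lines of a container's output via `crictl logs`. A hedged local equivalent of one of those commands, using the kube-apiserver container ID found above, might look like this:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the ssh_runner command in the log:
	//   sudo /usr/bin/crictl logs --tail 400 <container-id>
	id := "ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	out, err := exec.Command("sudo", "/usr/bin/crictl",
		"logs", "--tail", "400", id).CombinedOutput()
	if err != nil {
		fmt.Printf("crictl failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("%s", out)
}
```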
	I0829 18:58:21.813233   18990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:58:21.832761   18990 api_server.go:72] duration metric: took 1m48.174160591s to wait for apiserver process to appear ...
	I0829 18:58:21.832788   18990 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:58:21.832817   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:58:21.832862   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:58:21.873058   18990 cri.go:89] found id: "ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:21.873083   18990 cri.go:89] found id: ""
	I0829 18:58:21.873093   18990 logs.go:276] 1 containers: [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24]
	I0829 18:58:21.873154   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:21.877320   18990 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:58:21.877374   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:58:21.916655   18990 cri.go:89] found id: "3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:21.916684   18990 cri.go:89] found id: ""
	I0829 18:58:21.916692   18990 logs.go:276] 1 containers: [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459]
	I0829 18:58:21.916736   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:21.920999   18990 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:58:21.921045   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:58:21.965578   18990 cri.go:89] found id: "edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:21.965606   18990 cri.go:89] found id: ""
	I0829 18:58:21.965615   18990 logs.go:276] 1 containers: [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1]
	I0829 18:58:21.965669   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:21.969756   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:58:21.969822   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:58:22.017458   18990 cri.go:89] found id: "46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:22.017480   18990 cri.go:89] found id: ""
	I0829 18:58:22.017491   18990 logs.go:276] 1 containers: [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef]
	I0829 18:58:22.017549   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:22.021887   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:58:22.021956   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:58:22.059660   18990 cri.go:89] found id: "e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:22.059684   18990 cri.go:89] found id: ""
	I0829 18:58:22.059693   18990 logs.go:276] 1 containers: [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565]
	I0829 18:58:22.059748   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:22.063706   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:58:22.063759   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:58:22.099570   18990 cri.go:89] found id: "79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:22.099596   18990 cri.go:89] found id: ""
	I0829 18:58:22.099606   18990 logs.go:276] 1 containers: [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24]
	I0829 18:58:22.099660   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:22.103920   18990 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:58:22.103979   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:58:22.140807   18990 cri.go:89] found id: ""
	I0829 18:58:22.140837   18990 logs.go:276] 0 containers: []
	W0829 18:58:22.140849   18990 logs.go:278] No container was found matching "kindnet"
	I0829 18:58:22.140860   18990 logs.go:123] Gathering logs for kube-controller-manager [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24] ...
	I0829 18:58:22.140874   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:22.204452   18990 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:58:22.204483   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:58:23.279114   18990 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:58:23.279161   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:58:23.396916   18990 logs.go:123] Gathering logs for kube-apiserver [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24] ...
	I0829 18:58:23.396950   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:23.445310   18990 logs.go:123] Gathering logs for etcd [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459] ...
	I0829 18:58:23.445352   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:23.513636   18990 logs.go:123] Gathering logs for coredns [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1] ...
	I0829 18:58:23.513664   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:23.554990   18990 logs.go:123] Gathering logs for kube-scheduler [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef] ...
	I0829 18:58:23.555020   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:23.601432   18990 logs.go:123] Gathering logs for kube-proxy [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565] ...
	I0829 18:58:23.601464   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:23.639619   18990 logs.go:123] Gathering logs for kubelet ...
	I0829 18:58:23.639647   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 18:58:23.690102   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:23.690271   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:23.728666   18990 logs.go:123] Gathering logs for dmesg ...
	I0829 18:58:23.728701   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:58:23.743456   18990 logs.go:123] Gathering logs for container status ...
	I0829 18:58:23.743482   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:58:23.796892   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:23.796919   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 18:58:23.796981   18990 out.go:270] X Problems detected in kubelet:
	W0829 18:58:23.796994   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:23.797004   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:23.797016   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:23.797026   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:58:33.797922   18990 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0829 18:58:33.802928   18990 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I0829 18:58:33.803830   18990 api_server.go:141] control plane version: v1.31.0
	I0829 18:58:33.803850   18990 api_server.go:131] duration metric: took 11.971056831s to wait for apiserver health ...
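The healthz probe at `api_server.go:253` above is a plain HTTPS GET against the apiserver that expects a 200 response with an `ok` body. A minimal Go equivalent, assuming the same endpoint as the log and skipping TLS verification for brevity (the real check would trust the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint taken from the log above; InsecureSkipVerify stands in for
	// the cluster CA bundle a real client would use.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.39.172:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
```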
	I0829 18:58:33.803858   18990 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:58:33.803876   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:58:33.803917   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:58:33.854225   18990 cri.go:89] found id: "ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:33.854244   18990 cri.go:89] found id: ""
	I0829 18:58:33.854250   18990 logs.go:276] 1 containers: [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24]
	I0829 18:58:33.854290   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:33.858238   18990 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:58:33.858286   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:58:33.900025   18990 cri.go:89] found id: "3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:33.900045   18990 cri.go:89] found id: ""
	I0829 18:58:33.900054   18990 logs.go:276] 1 containers: [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459]
	I0829 18:58:33.900094   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:33.904590   18990 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:58:33.904641   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:58:33.942867   18990 cri.go:89] found id: "edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:33.942888   18990 cri.go:89] found id: ""
	I0829 18:58:33.942895   18990 logs.go:276] 1 containers: [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1]
	I0829 18:58:33.942953   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:33.947338   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:58:33.947388   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:58:33.991266   18990 cri.go:89] found id: "46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:33.991285   18990 cri.go:89] found id: ""
	I0829 18:58:33.991292   18990 logs.go:276] 1 containers: [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef]
	I0829 18:58:33.991334   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:33.995550   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:58:33.995601   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:58:34.034277   18990 cri.go:89] found id: "e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:34.034294   18990 cri.go:89] found id: ""
	I0829 18:58:34.034302   18990 logs.go:276] 1 containers: [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565]
	I0829 18:58:34.034341   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:34.038466   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:58:34.038546   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:58:34.078562   18990 cri.go:89] found id: "79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:34.078579   18990 cri.go:89] found id: ""
	I0829 18:58:34.078586   18990 logs.go:276] 1 containers: [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24]
	I0829 18:58:34.078630   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:34.083366   18990 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:58:34.083423   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:58:34.145061   18990 cri.go:89] found id: ""
	I0829 18:58:34.145090   18990 logs.go:276] 0 containers: []
	W0829 18:58:34.145099   18990 logs.go:278] No container was found matching "kindnet"
	I0829 18:58:34.145106   18990 logs.go:123] Gathering logs for kubelet ...
	I0829 18:58:34.145117   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 18:58:34.193492   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:34.193696   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:34.230073   18990 logs.go:123] Gathering logs for kube-scheduler [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef] ...
	I0829 18:58:34.230109   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:34.281725   18990 logs.go:123] Gathering logs for kube-proxy [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565] ...
	I0829 18:58:34.281758   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:34.325201   18990 logs.go:123] Gathering logs for container status ...
	I0829 18:58:34.325228   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:58:34.371370   18990 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:58:34.371400   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:58:35.159659   18990 logs.go:123] Gathering logs for dmesg ...
	I0829 18:58:35.159722   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:58:35.175376   18990 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:58:35.175403   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:58:35.302779   18990 logs.go:123] Gathering logs for kube-apiserver [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24] ...
	I0829 18:58:35.302810   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:35.362682   18990 logs.go:123] Gathering logs for etcd [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459] ...
	I0829 18:58:35.362711   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:35.435174   18990 logs.go:123] Gathering logs for coredns [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1] ...
	I0829 18:58:35.435207   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:35.475282   18990 logs.go:123] Gathering logs for kube-controller-manager [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24] ...
	I0829 18:58:35.475310   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:35.539640   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:35.539666   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 18:58:35.539716   18990 out.go:270] X Problems detected in kubelet:
	W0829 18:58:35.539724   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:35.539735   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:35.539748   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:35.539754   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:58:45.550232   18990 system_pods.go:59] 18 kube-system pods found
	I0829 18:58:45.550261   18990 system_pods.go:61] "coredns-6f6b679f8f-t9nhw" [01782eed-98db-4768-8ab6-bd429fe58305] Running
	I0829 18:58:45.550266   18990 system_pods.go:61] "csi-hostpath-attacher-0" [318ff00f-e5be-4029-b58b-30185cb48a7f] Running
	I0829 18:58:45.550269   18990 system_pods.go:61] "csi-hostpath-resizer-0" [ba8fc44d-cd38-469f-8d42-7aedd5d81a06] Running
	I0829 18:58:45.550272   18990 system_pods.go:61] "csi-hostpathplugin-96vz6" [207fbe26-1d1e-48c7-8bfd-4621264e0739] Running
	I0829 18:58:45.550275   18990 system_pods.go:61] "etcd-addons-344587" [332f8ecf-d239-4d45-b8c2-e023c3849b2b] Running
	I0829 18:58:45.550278   18990 system_pods.go:61] "kube-apiserver-addons-344587" [cec380f4-ded8-4496-b6c5-54ebeeecb720] Running
	I0829 18:58:45.550281   18990 system_pods.go:61] "kube-controller-manager-addons-344587" [4812d16d-522f-44e2-b353-798732857218] Running
	I0829 18:58:45.550284   18990 system_pods.go:61] "kube-ingress-dns-minikube" [2aeaeabc-ac3f-4f8a-88ee-84fe5d623dd6] Running
	I0829 18:58:45.550286   18990 system_pods.go:61] "kube-proxy-lgcxw" [0be1dddc-793d-471e-aa16-9752951fb72a] Running
	I0829 18:58:45.550289   18990 system_pods.go:61] "kube-scheduler-addons-344587" [c36a46ec-4466-46f5-ba95-40110040eb06] Running
	I0829 18:58:45.550291   18990 system_pods.go:61] "metrics-server-8988944d9-9tplt" [427d61c8-9ff3-4718-9faf-896d20af6cdc] Running
	I0829 18:58:45.550295   18990 system_pods.go:61] "nvidia-device-plugin-daemonset-z559z" [f30c9660-ea3d-40c2-9842-bcf8bb18c0b6] Running
	I0829 18:58:45.550297   18990 system_pods.go:61] "registry-6fb4cdfc84-dmlc6" [074412f0-2988-4497-a2bb-abd86ddc18ab] Running
	I0829 18:58:45.550300   18990 system_pods.go:61] "registry-proxy-x5bqm" [45f795aa-aca5-41b5-a455-89b285ce9531] Running
	I0829 18:58:45.550303   18990 system_pods.go:61] "snapshot-controller-56fcc65765-8fbbn" [ed961d54-d7a4-485f-bb8e-e7195ed4e80e] Running
	I0829 18:58:45.550307   18990 system_pods.go:61] "snapshot-controller-56fcc65765-gn5lq" [bf5c7495-59fd-4151-abce-7cf6072e995e] Running
	I0829 18:58:45.550309   18990 system_pods.go:61] "storage-provisioner" [14e72aaf-6cd6-4740-a9d5-e4a739fed914] Running
	I0829 18:58:45.550312   18990 system_pods.go:61] "tiller-deploy-b48cc5f79-bxws5" [d2380d68-348a-4dc1-8c40-1a4e9fa6ab04] Running
	I0829 18:58:45.550318   18990 system_pods.go:74] duration metric: took 11.746455029s to wait for pod list to return data ...
	I0829 18:58:45.550328   18990 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:58:45.553072   18990 default_sa.go:45] found service account: "default"
	I0829 18:58:45.553088   18990 default_sa.go:55] duration metric: took 2.755882ms for default service account to be created ...
	I0829 18:58:45.553095   18990 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:58:45.559715   18990 system_pods.go:86] 18 kube-system pods found
	I0829 18:58:45.559734   18990 system_pods.go:89] "coredns-6f6b679f8f-t9nhw" [01782eed-98db-4768-8ab6-bd429fe58305] Running
	I0829 18:58:45.559740   18990 system_pods.go:89] "csi-hostpath-attacher-0" [318ff00f-e5be-4029-b58b-30185cb48a7f] Running
	I0829 18:58:45.559744   18990 system_pods.go:89] "csi-hostpath-resizer-0" [ba8fc44d-cd38-469f-8d42-7aedd5d81a06] Running
	I0829 18:58:45.559748   18990 system_pods.go:89] "csi-hostpathplugin-96vz6" [207fbe26-1d1e-48c7-8bfd-4621264e0739] Running
	I0829 18:58:45.559751   18990 system_pods.go:89] "etcd-addons-344587" [332f8ecf-d239-4d45-b8c2-e023c3849b2b] Running
	I0829 18:58:45.559756   18990 system_pods.go:89] "kube-apiserver-addons-344587" [cec380f4-ded8-4496-b6c5-54ebeeecb720] Running
	I0829 18:58:45.559760   18990 system_pods.go:89] "kube-controller-manager-addons-344587" [4812d16d-522f-44e2-b353-798732857218] Running
	I0829 18:58:45.559764   18990 system_pods.go:89] "kube-ingress-dns-minikube" [2aeaeabc-ac3f-4f8a-88ee-84fe5d623dd6] Running
	I0829 18:58:45.559767   18990 system_pods.go:89] "kube-proxy-lgcxw" [0be1dddc-793d-471e-aa16-9752951fb72a] Running
	I0829 18:58:45.559771   18990 system_pods.go:89] "kube-scheduler-addons-344587" [c36a46ec-4466-46f5-ba95-40110040eb06] Running
	I0829 18:58:45.559774   18990 system_pods.go:89] "metrics-server-8988944d9-9tplt" [427d61c8-9ff3-4718-9faf-896d20af6cdc] Running
	I0829 18:58:45.559778   18990 system_pods.go:89] "nvidia-device-plugin-daemonset-z559z" [f30c9660-ea3d-40c2-9842-bcf8bb18c0b6] Running
	I0829 18:58:45.559781   18990 system_pods.go:89] "registry-6fb4cdfc84-dmlc6" [074412f0-2988-4497-a2bb-abd86ddc18ab] Running
	I0829 18:58:45.559785   18990 system_pods.go:89] "registry-proxy-x5bqm" [45f795aa-aca5-41b5-a455-89b285ce9531] Running
	I0829 18:58:45.559791   18990 system_pods.go:89] "snapshot-controller-56fcc65765-8fbbn" [ed961d54-d7a4-485f-bb8e-e7195ed4e80e] Running
	I0829 18:58:45.559794   18990 system_pods.go:89] "snapshot-controller-56fcc65765-gn5lq" [bf5c7495-59fd-4151-abce-7cf6072e995e] Running
	I0829 18:58:45.559797   18990 system_pods.go:89] "storage-provisioner" [14e72aaf-6cd6-4740-a9d5-e4a739fed914] Running
	I0829 18:58:45.559801   18990 system_pods.go:89] "tiller-deploy-b48cc5f79-bxws5" [d2380d68-348a-4dc1-8c40-1a4e9fa6ab04] Running
	I0829 18:58:45.559806   18990 system_pods.go:126] duration metric: took 6.706766ms to wait for k8s-apps to be running ...
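The k8s-apps check above amounts to a filtered pod list. A minimal equivalent by hand, assuming kubectl still points at this cluster:

    # List kube-system pods whose phase is Running; the 18 pods above
    # should all appear while the cluster is healthy.
    kubectl --context addons-344587 get pods --namespace kube-system \
      --field-selector status.phase=Running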
	I0829 18:58:45.559815   18990 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:58:45.559853   18990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:58:45.577199   18990 system_svc.go:56] duration metric: took 17.376357ms WaitForService to wait for kubelet
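systemctl is-active --quiet prints nothing and reports purely through its exit status, which is why the runner only checks the return code of the command above. A minimal sketch of the same probe, run on the minikube VM:

    # Exit status 0 means the unit is active; anything else means it is not.
    sudo systemctl is-active --quiet kubelet \
      && echo "kubelet is running" \
      || echo "kubelet is not running"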
	I0829 18:58:45.577228   18990 kubeadm.go:582] duration metric: took 2m11.91863045s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:58:45.577249   18990 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:58:45.580335   18990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:58:45.580362   18990 node_conditions.go:123] node cpu capacity is 2
	I0829 18:58:45.580377   18990 node_conditions.go:105] duration metric: took 3.122527ms to run NodePressure ...
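The capacity figures above come from the node object's .status.capacity and can be read back directly; a sketch, assuming the same kubeconfig:

    # Print the node's recorded capacity map; expect cpu:2 and
    # ephemeral-storage:17734596Ki, matching the two log lines above.
    kubectl --context addons-344587 get node addons-344587 \
      -o jsonpath='{.status.capacity}'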
	I0829 18:58:45.580391   18990 start.go:241] waiting for startup goroutines ...
	I0829 18:58:45.580403   18990 start.go:246] waiting for cluster config update ...
	I0829 18:58:45.580427   18990 start.go:255] writing updated cluster config ...
	I0829 18:58:45.580716   18990 ssh_runner.go:195] Run: rm -f paused
	I0829 18:58:45.628072   18990 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:58:45.630291   18990 out.go:177] * Done! kubectl is now configured to use "addons-344587" cluster and "default" namespace by default
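"minor skew: 0" above means the kubectl client (1.31.0) and the cluster (1.31.0) agree on the minor version, so no skew warning is printed. The configured context can be confirmed after start; a sketch, assuming the default kubeconfig minikube just wrote:

    # Verify kubectl was switched to the new cluster and the node is Ready.
    kubectl config current-context   # expected: addons-344587
    kubectl get nodes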
	
	
	==> CRI-O <==
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.372630919Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1416baf-d595-425f-967b-a62e7d928dff name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.373058058Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd5a1f6058ea5299e6bb925b62052c1d7775f01a90be4f4f42a7822ed4cac722,PodSandboxId:3f4cd5577a9f02ad3c7d4804dc99e8cceafe2489d2f850a1c61d968cf22623ae,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1724958459920996417,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-d653ba56-6232-4797-9e26-74b3f827dc87,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 56c85c12-3489-4fdb-99ed-671844801335,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf623e64e19a9fb8fd141adbc0d4a9ead11ce1e2e3a4e36260c0b94bce7b6036,PodSandboxId:9bb29d202e81078fef51804bb9150247946cb42a1c9d50409e9dc471c7f2c0d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:50aa4698fa6262977cff89181b2664b99d8a56dbca847bf62f2ef04854597cf8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65ad0d468eb1c558bf7f4e64e790f586e9eda649ee9f130cd0e835b292bbc5ac,State:CONTAINER_EXITED,CreatedAt:1724958455964314289,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2aee8f3-91eb-4f76-8271-9f9c888a8ccf,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179799c8ef39ffd099f99f96772889dd6e9bc1ed9d8b2578f91f7f9ca2a71f4b,PodSandboxId:f93ac7e6adfbe2776fe4ba9d06095c726766cb26c740a4e3167e4dfdedd974e7,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1724958449957845311,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-d653ba56-6232-4797-9e26-74b3f827dc87,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d8511798-b146-4f1b-bd45-2848f8c4dbad,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c8e7bcbfebef995192df5d64f603c9d08c91b1a799ccb037398d00858d33ba,PodSandboxId:fe391f299e153d6c0972dbdf457c92e1ad975a4e3574b15f1ba653832b929f55,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724957874945557865,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-m8795,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 3289f49d-61c4-4693-818b-5a8e73d95410,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f92ee66e147ab00440c7ce62942db9b0cb067796c3643b3313a45071577e8e,PodSandboxId:23df866ad30759a145dbb923cb1bfcd112787f0af740ca3dfd174e68f818a8aa,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1724957873467950059,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-sxjv5,io.kubernetes.pod.namespace: ingress-ngin
x,io.kubernetes.pod.uid: e6ea94c8-ff1c-47d8-9e9c-6df136e34608,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:51c4b8b338abf63fdf30f1d9ec94d133498fcba3b2a128daf3e6ddc3ea17561f,PodSandboxId:e680fdadb31c415453ba43dcf470ea96ef746193b88c557213800d8afaad820a,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724957850473483849,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8sjph,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5fdd788a-a4b3-4d16-b6e2-4ccfe541fb0a,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286779fd868c769dcf2e26f927dd181738b4a91e58bec5a8ee5d995db0b917b9,PodSandboxId:7da21d0210ad6cd0ffa6967f2ac3b65b89c32b1f45e2a0250d44952e911ac498,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Ann
otations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724957849713960242,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xkfkw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 26eb4b3d-75b0-44e5-8385-aece13b52992,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b26ab16b48a3f769f2f106bda6289d008bff63fc49defb022216c5474478ff20,PodSandboxId:eef37c081b69856e96050641ba6815b6193a174ac8634c6289ffc3c2ea88b1fa,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce7
5217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724957841287984594,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-4cmx8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 4917acbf-a341-45c8-acfc-11340fa9a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54164e21bd9a6b72fe31085cb8cbe33df2bbc8670c01312f597d476f78b5d1c,PodSandboxId:05bebeb94a32a8d55a95ed5362201cbf7e480ead93a4fbbbeeee251aac433a08,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8
s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724957825221794572,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9tplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427d61c8-9ff3-4718-9faf-896d20af6cdc,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8b2ab20b747ce79ca67305e50785bc96f6ff948c77d73873122f96ac5595f1,PodSandboxId:f8ec6947dca5bd9d40f
58e822f5ce9f7ebb1f41cea06391721e1aff052c6b893,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1724957818569269326,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2aeaeabc-ac3f-4f8a-88ee-84fe5d623dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:15a0a245e481d700614dae7bfc6d7d4bbe9f074615c88e1373f761abb63e08cc,PodSandboxId:caa68615ea5869f55238af371cf096755b420e33837f56a5391355cc5270a453,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724957800386945011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e72aaf-6cd6-4740-a9d5-e4a739fed914,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1,PodSandboxId:78300c884569d8c916441667591ef1a6cdfdc769a616ccf684cd3aeb6ee173a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724957797914552214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t9nhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01782eed-98db-4768-8ab6-bd429fe58305,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565,PodSandboxId:2f0f516b497dee578fa502336c368d6a64299cb73064fc27e94ba33dcd0c9623,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724957793494579868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lgcxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be1dddc-793d-471e-aa16-9752951fb72a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef,PodSandboxId:bc4dfc643a4f95574b0c4d436e0af5e463ff66fcda536cf28cb9fd9980b55ae3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724957783507566907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433e7620b7da2027dd73dc3bddb2f997,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24,PodSandboxId:f9feaafb78b8d9b6c588088d4158b1b980da872d4034937746daf0b1ac5998c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724957783501034123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df35d1b995ba56a5ea532995ddbeb880,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459,PodSandboxId:5b90ade16a1ec3848984ac4ccc079dbc33282af9bb7be159d955ed993979dd7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724957783509496209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3105eec1aeed59eaefbe2b389301917,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24,PodSandboxId:948b38ffb05be778e3450b635de34556972385a53111e037a04d34383f52c377,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724957783431784942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba40dc401b574376e18cc2969d87be7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1416baf-d595-425f-967b-a62e7d928dff name=/runtime.v1.RuntimeService/ListContainers
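The unfiltered ListContainers response above enumerates every container CRI-O knows about, running or exited. The same RPC can be exercised through crictl on the node; a sketch, assuming crictl is pointed at the CRI-O socket (minikube's default):

    # Issue an unfiltered RuntimeService/ListContainers call, like the
    # request logged above; -a includes CONTAINER_EXITED entries.
    sudo crictl ps -a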
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.385295832Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=430fda29-f8db-4314-857f-721dfcf2e3a4 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.385760514Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=430fda29-f8db-4314-857f-721dfcf2e3a4 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.438060791Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3974282f-6b80-4733-ae35-41c267bf70e7 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.438153104Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3974282f-6b80-4733-ae35-41c267bf70e7 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.439324122Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eefb3270-84a1-43f4-b49a-8b8935306af5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.440496608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958481440468031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:536404,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eefb3270-84a1-43f4-b49a-8b8935306af5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.441515115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81ce72d1-97e6-41b8-8f56-458ca09982d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.441604390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81ce72d1-97e6-41b8-8f56-458ca09982d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.442058821Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd5a1f6058ea5299e6bb925b62052c1d7775f01a90be4f4f42a7822ed4cac722,PodSandboxId:3f4cd5577a9f02ad3c7d4804dc99e8cceafe2489d2f850a1c61d968cf22623ae,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1724958459920996417,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-d653ba56-6232-4797-9e26-74b3f827dc87,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 56c85c12-3489-4fdb-99ed-671844801335,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf623e64e19a9fb8fd141adbc0d4a9ead11ce1e2e3a4e36260c0b94bce7b6036,PodSandboxId:9bb29d202e81078fef51804bb9150247946cb42a1c9d50409e9dc471c7f2c0d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:50aa4698fa6262977cff89181b2664b99d8a56dbca847bf62f2ef04854597cf8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65ad0d468eb1c558bf7f4e64e790f586e9eda649ee9f130cd0e835b292bbc5ac,State:CONTAINER_EXITED,CreatedAt:1724958455964314289,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2aee8f3-91eb-4f76-8271-9f9c888a8ccf,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179799c8ef39ffd099f99f96772889dd6e9bc1ed9d8b2578f91f7f9ca2a71f4b,PodSandboxId:f93ac7e6adfbe2776fe4ba9d06095c726766cb26c740a4e3167e4dfdedd974e7,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1724958449957845311,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-d653ba56-6232-4797-9e26-74b3f827dc87,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d8511798-b146-4f1b-bd45-2848f8c4dbad,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c8e7bcbfebef995192df5d64f603c9d08c91b1a799ccb037398d00858d33ba,PodSandboxId:fe391f299e153d6c0972dbdf457c92e1ad975a4e3574b15f1ba653832b929f55,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724957874945557865,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-m8795,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 3289f49d-61c4-4693-818b-5a8e73d95410,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f92ee66e147ab00440c7ce62942db9b0cb067796c3643b3313a45071577e8e,PodSandboxId:23df866ad30759a145dbb923cb1bfcd112787f0af740ca3dfd174e68f818a8aa,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1724957873467950059,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-sxjv5,io.kubernetes.pod.namespace: ingress-ngin
x,io.kubernetes.pod.uid: e6ea94c8-ff1c-47d8-9e9c-6df136e34608,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:51c4b8b338abf63fdf30f1d9ec94d133498fcba3b2a128daf3e6ddc3ea17561f,PodSandboxId:e680fdadb31c415453ba43dcf470ea96ef746193b88c557213800d8afaad820a,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724957850473483849,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8sjph,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5fdd788a-a4b3-4d16-b6e2-4ccfe541fb0a,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286779fd868c769dcf2e26f927dd181738b4a91e58bec5a8ee5d995db0b917b9,PodSandboxId:7da21d0210ad6cd0ffa6967f2ac3b65b89c32b1f45e2a0250d44952e911ac498,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Ann
otations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724957849713960242,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xkfkw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 26eb4b3d-75b0-44e5-8385-aece13b52992,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b26ab16b48a3f769f2f106bda6289d008bff63fc49defb022216c5474478ff20,PodSandboxId:eef37c081b69856e96050641ba6815b6193a174ac8634c6289ffc3c2ea88b1fa,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce7
5217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724957841287984594,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-4cmx8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 4917acbf-a341-45c8-acfc-11340fa9a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54164e21bd9a6b72fe31085cb8cbe33df2bbc8670c01312f597d476f78b5d1c,PodSandboxId:05bebeb94a32a8d55a95ed5362201cbf7e480ead93a4fbbbeeee251aac433a08,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8
s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724957825221794572,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9tplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427d61c8-9ff3-4718-9faf-896d20af6cdc,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8b2ab20b747ce79ca67305e50785bc96f6ff948c77d73873122f96ac5595f1,PodSandboxId:f8ec6947dca5bd9d40f
58e822f5ce9f7ebb1f41cea06391721e1aff052c6b893,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1724957818569269326,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2aeaeabc-ac3f-4f8a-88ee-84fe5d623dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:15a0a245e481d700614dae7bfc6d7d4bbe9f074615c88e1373f761abb63e08cc,PodSandboxId:caa68615ea5869f55238af371cf096755b420e33837f56a5391355cc5270a453,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724957800386945011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e72aaf-6cd6-4740-a9d5-e4a739fed914,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1,PodSandboxId:78300c884569d8c916441667591ef1a6cdfdc769a616ccf684cd3aeb6ee173a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724957797914552214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t9nhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01782eed-98db-4768-8ab6-bd429fe58305,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565,PodSandboxId:2f0f516b497dee578fa502336c368d6a64299cb73064fc27e94ba33dcd0c9623,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724957793494579868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lgcxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be1dddc-793d-471e-aa16-9752951fb72a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef,PodSandboxId:bc4dfc643a4f95574b0c4d436e0af5e463ff66fcda536cf28cb9fd9980b55ae3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724957783507566907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433e7620b7da2027dd73dc3bddb2f997,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24,PodSandboxId:f9feaafb78b8d9b6c588088d4158b1b980da872d4034937746daf0b1ac5998c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724957783501034123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df35d1b995ba56a5ea532995ddbeb880,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459,PodSandboxId:5b90ade16a1ec3848984ac4ccc079dbc33282af9bb7be159d955ed993979dd7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724957783509496209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3105eec1aeed59eaefbe2b389301917,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24,PodSandboxId:948b38ffb05be778e3450b635de34556972385a53111e037a04d34383f52c377,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724957783431784942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba40dc401b574376e18cc2969d87be7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81ce72d1-97e6-41b8-8f56-458ca09982d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.483565161Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6058a9d9-aa9e-4b68-ba6b-20859b30ebba name=/runtime.v1.RuntimeService/Version
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.483657107Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6058a9d9-aa9e-4b68-ba6b-20859b30ebba name=/runtime.v1.RuntimeService/Version
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.485488308Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7bdf7bde-f02c-4b49-bb31-05ceb78882a3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.487114701Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958481487022853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:536404,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7bdf7bde-f02c-4b49-bb31-05ceb78882a3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.488102713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9dac7233-a552-4e87-a059-6c780b7d8249 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.488282102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9dac7233-a552-4e87-a059-6c780b7d8249 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.489083043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd5a1f6058ea5299e6bb925b62052c1d7775f01a90be4f4f42a7822ed4cac722,PodSandboxId:3f4cd5577a9f02ad3c7d4804dc99e8cceafe2489d2f850a1c61d968cf22623ae,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1724958459920996417,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-d653ba56-6232-4797-9e26-74b3f827dc87,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 56c85c12-3489-4fdb-99ed-671844801335,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf623e64e19a9fb8fd141adbc0d4a9ead11ce1e2e3a4e36260c0b94bce7b6036,PodSandboxId:9bb29d202e81078fef51804bb9150247946cb42a1c9d50409e9dc471c7f2c0d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:50aa4698fa6262977cff89181b2664b99d8a56dbca847bf62f2ef04854597cf8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65ad0d468eb1c558bf7f4e64e790f586e9eda649ee9f130cd0e835b292bbc5ac,State:CONTAINER_EXITED,CreatedAt:1724958455964314289,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2aee8f3-91eb-4f76-8271-9f9c888a8ccf,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179799c8ef39ffd099f99f96772889dd6e9bc1ed9d8b2578f91f7f9ca2a71f4b,PodSandboxId:f93ac7e6adfbe2776fe4ba9d06095c726766cb26c740a4e3167e4dfdedd974e7,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1724958449957845311,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-d653ba56-6232-4797-9e26-74b3f827dc87,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d8511798-b146-4f1b-bd45-2848f8c4dbad,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c8e7bcbfebef995192df5d64f603c9d08c91b1a799ccb037398d00858d33ba,PodSandboxId:fe391f299e153d6c0972dbdf457c92e1ad975a4e3574b15f1ba653832b929f55,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724957874945557865,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-m8795,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 3289f49d-61c4-4693-818b-5a8e73d95410,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f92ee66e147ab00440c7ce62942db9b0cb067796c3643b3313a45071577e8e,PodSandboxId:23df866ad30759a145dbb923cb1bfcd112787f0af740ca3dfd174e68f818a8aa,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1724957873467950059,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-sxjv5,io.kubernetes.pod.namespace: ingress-ngin
x,io.kubernetes.pod.uid: e6ea94c8-ff1c-47d8-9e9c-6df136e34608,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:51c4b8b338abf63fdf30f1d9ec94d133498fcba3b2a128daf3e6ddc3ea17561f,PodSandboxId:e680fdadb31c415453ba43dcf470ea96ef746193b88c557213800d8afaad820a,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724957850473483849,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8sjph,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5fdd788a-a4b3-4d16-b6e2-4ccfe541fb0a,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286779fd868c769dcf2e26f927dd181738b4a91e58bec5a8ee5d995db0b917b9,PodSandboxId:7da21d0210ad6cd0ffa6967f2ac3b65b89c32b1f45e2a0250d44952e911ac498,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Ann
otations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724957849713960242,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xkfkw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 26eb4b3d-75b0-44e5-8385-aece13b52992,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b26ab16b48a3f769f2f106bda6289d008bff63fc49defb022216c5474478ff20,PodSandboxId:eef37c081b69856e96050641ba6815b6193a174ac8634c6289ffc3c2ea88b1fa,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce7
5217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724957841287984594,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-4cmx8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 4917acbf-a341-45c8-acfc-11340fa9a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54164e21bd9a6b72fe31085cb8cbe33df2bbc8670c01312f597d476f78b5d1c,PodSandboxId:05bebeb94a32a8d55a95ed5362201cbf7e480ead93a4fbbbeeee251aac433a08,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8
s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724957825221794572,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9tplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427d61c8-9ff3-4718-9faf-896d20af6cdc,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8b2ab20b747ce79ca67305e50785bc96f6ff948c77d73873122f96ac5595f1,PodSandboxId:f8ec6947dca5bd9d40f
58e822f5ce9f7ebb1f41cea06391721e1aff052c6b893,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1724957818569269326,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2aeaeabc-ac3f-4f8a-88ee-84fe5d623dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:15a0a245e481d700614dae7bfc6d7d4bbe9f074615c88e1373f761abb63e08cc,PodSandboxId:caa68615ea5869f55238af371cf096755b420e33837f56a5391355cc5270a453,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724957800386945011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e72aaf-6cd6-4740-a9d5-e4a739fed914,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1,PodSandboxId:78300c884569d8c916441667591ef1a6cdfdc769a616ccf684cd3aeb6ee173a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724957797914552214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t9nhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01782eed-98db-4768-8ab6-bd429fe58305,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565,PodSandboxId:2f0f516b497dee578fa502336c368d6a64299cb73064fc27e94ba33dcd0c9623,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724957793494579868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lgcxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be1dddc-793d-471e-aa16-9752951fb72a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef,PodSandboxId:bc4dfc643a4f95574b0c4d436e0af5e463ff66fcda536cf28cb9fd9980b55ae3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724957783507566907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433e7620b7da2027dd73dc3bddb2f997,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24,PodSandboxId:f9feaafb78b8d9b6c588088d4158b1b980da872d4034937746daf0b1ac5998c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724957783501034123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df35d1b995ba56a5ea532995ddbeb880,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459,PodSandboxId:5b90ade16a1ec3848984ac4ccc079dbc33282af9bb7be159d955ed993979dd7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724957783509496209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3105eec1aeed59eaefbe2b389301917,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24,PodSandboxId:948b38ffb05be778e3450b635de34556972385a53111e037a04d34383f52c377,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724957783431784942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba40dc401b574376e18cc2969d87be7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9dac7233-a552-4e87-a059-6c780b7d8249 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.527533969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0096e77a-b7b2-4915-8137-11e60fc51499 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.527628198Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0096e77a-b7b2-4915-8137-11e60fc51499 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.528895590Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7724dc26-32f8-4444-9902-9d9e56d6366e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.529997662Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958481529969959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:536404,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7724dc26-32f8-4444-9902-9d9e56d6366e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.530556830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a199fba-bc74-4a00-baf5-cbbbddc0ecc7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.530618282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a199fba-bc74-4a00-baf5-cbbbddc0ecc7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:08:01 addons-344587 crio[658]: time="2024-08-29 19:08:01.531289261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd5a1f6058ea5299e6bb925b62052c1d7775f01a90be4f4f42a7822ed4cac722,PodSandboxId:3f4cd5577a9f02ad3c7d4804dc99e8cceafe2489d2f850a1c61d968cf22623ae,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1724958459920996417,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-d653ba56-6232-4797-9e26-74b3f827dc87,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 56c85c12-3489-4fdb-99ed-671844801335,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf623e64e19a9fb8fd141adbc0d4a9ead11ce1e2e3a4e36260c0b94bce7b6036,PodSandboxId:9bb29d202e81078fef51804bb9150247946cb42a1c9d50409e9dc471c7f2c0d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:50aa4698fa6262977cff89181b2664b99d8a56dbca847bf62f2ef04854597cf8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65ad0d468eb1c558bf7f4e64e790f586e9eda649ee9f130cd0e835b292bbc5ac,State:CONTAINER_EXITED,CreatedAt:1724958455964314289,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2aee8f3-91eb-4f76-8271-9f9c888a8ccf,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179799c8ef39ffd099f99f96772889dd6e9bc1ed9d8b2578f91f7f9ca2a71f4b,PodSandboxId:f93ac7e6adfbe2776fe4ba9d06095c726766cb26c740a4e3167e4dfdedd974e7,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1724958449957845311,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-d653ba56-6232-4797-9e26-74b3f827dc87,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d8511798-b146-4f1b-bd45-2848f8c4dbad,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c8e7bcbfebef995192df5d64f603c9d08c91b1a799ccb037398d00858d33ba,PodSandboxId:fe391f299e153d6c0972dbdf457c92e1ad975a4e3574b15f1ba653832b929f55,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724957874945557865,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-m8795,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 3289f49d-61c4-4693-818b-5a8e73d95410,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f92ee66e147ab00440c7ce62942db9b0cb067796c3643b3313a45071577e8e,PodSandboxId:23df866ad30759a145dbb923cb1bfcd112787f0af740ca3dfd174e68f818a8aa,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1724957873467950059,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-sxjv5,io.kubernetes.pod.namespace: ingress-ngin
x,io.kubernetes.pod.uid: e6ea94c8-ff1c-47d8-9e9c-6df136e34608,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:51c4b8b338abf63fdf30f1d9ec94d133498fcba3b2a128daf3e6ddc3ea17561f,PodSandboxId:e680fdadb31c415453ba43dcf470ea96ef746193b88c557213800d8afaad820a,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724957850473483849,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8sjph,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5fdd788a-a4b3-4d16-b6e2-4ccfe541fb0a,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286779fd868c769dcf2e26f927dd181738b4a91e58bec5a8ee5d995db0b917b9,PodSandboxId:7da21d0210ad6cd0ffa6967f2ac3b65b89c32b1f45e2a0250d44952e911ac498,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Ann
otations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724957849713960242,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xkfkw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 26eb4b3d-75b0-44e5-8385-aece13b52992,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b26ab16b48a3f769f2f106bda6289d008bff63fc49defb022216c5474478ff20,PodSandboxId:eef37c081b69856e96050641ba6815b6193a174ac8634c6289ffc3c2ea88b1fa,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce7
5217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724957841287984594,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-4cmx8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 4917acbf-a341-45c8-acfc-11340fa9a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54164e21bd9a6b72fe31085cb8cbe33df2bbc8670c01312f597d476f78b5d1c,PodSandboxId:05bebeb94a32a8d55a95ed5362201cbf7e480ead93a4fbbbeeee251aac433a08,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8
s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724957825221794572,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9tplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427d61c8-9ff3-4718-9faf-896d20af6cdc,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8b2ab20b747ce79ca67305e50785bc96f6ff948c77d73873122f96ac5595f1,PodSandboxId:f8ec6947dca5bd9d40f
58e822f5ce9f7ebb1f41cea06391721e1aff052c6b893,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1724957818569269326,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2aeaeabc-ac3f-4f8a-88ee-84fe5d623dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:15a0a245e481d700614dae7bfc6d7d4bbe9f074615c88e1373f761abb63e08cc,PodSandboxId:caa68615ea5869f55238af371cf096755b420e33837f56a5391355cc5270a453,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724957800386945011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e72aaf-6cd6-4740-a9d5-e4a739fed914,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1,PodSandboxId:78300c884569d8c916441667591ef1a6cdfdc769a616ccf684cd3aeb6ee173a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724957797914552214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t9nhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01782eed-98db-4768-8ab6-bd429fe58305,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565,PodSandboxId:2f0f516b497dee578fa502336c368d6a64299cb73064fc27e94ba33dcd0c9623,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724957793494579868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lgcxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be1dddc-793d-471e-aa16-9752951fb72a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef,PodSandboxId:bc4dfc643a4f95574b0c4d436e0af5e463ff66fcda536cf28cb9fd9980b55ae3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724957783507566907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433e7620b7da2027dd73dc3bddb2f997,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24,PodSandboxId:f9feaafb78b8d9b6c588088d4158b1b980da872d4034937746daf0b1ac5998c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724957783501034123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df35d1b995ba56a5ea532995ddbeb880,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459,PodSandboxId:5b90ade16a1ec3848984ac4ccc079dbc33282af9bb7be159d955ed993979dd7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724957783509496209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3105eec1aeed59eaefbe2b389301917,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24,PodSandboxId:948b38ffb05be778e3450b635de34556972385a53111e037a04d34383f52c377,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724957783431784942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba40dc401b574376e18cc2969d87be7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a199fba-bc74-4a00-baf5-cbbbddc0ecc7 name=/runtime.v1.RuntimeService/ListContainers
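Note: the repeated Version / ImageFsInfo / ListContainers entries above are the kubelet's periodic CRI polling of cri-o over its unix socket; an empty ContainerFilter is what triggers the "No filters were applied, returning full container list" debug line. Below is a minimal Go sketch of the same RPC, assuming the socket path from the node's cri-socket annotation (unix:///var/run/crio/crio.sock); it is illustrative only, not part of the test harness.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the cri-o socket the kubelet uses (per the node's cri-socket
	// annotation); no TLS on a local unix socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter is what produces cri-o's "No filters were applied,
	// returning full container list" debug line seen above.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Ids are 64-char hex strings; print the short form like crictl.
		fmt.Printf("%s  %-25s %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}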
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bd5a1f6058ea5       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                             21 seconds ago      Exited              helper-pod                0                   3f4cd5577a9f0       helper-pod-delete-pvc-d653ba56-6232-4797-9e26-74b3f827dc87
	bf623e64e19a9       docker.io/library/busybox@sha256:50aa4698fa6262977cff89181b2664b99d8a56dbca847bf62f2ef04854597cf8                            25 seconds ago      Exited              busybox                   0                   9bb29d202e810       test-local-path
	179799c8ef39f       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                            31 seconds ago      Exited              helper-pod                0                   f93ac7e6adfbe       helper-pod-create-pvc-d653ba56-6232-4797-9e26-74b3f827dc87
	e9c8e7bcbfebe       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 10 minutes ago      Running             gcp-auth                  0                   fe391f299e153       gcp-auth-89d5ffd79-m8795
	a5f92ee66e147       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             10 minutes ago      Running             controller                0                   23df866ad3075       ingress-nginx-controller-bc57996ff-sxjv5
	51c4b8b338abf       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             10 minutes ago      Exited              patch                     1                   e680fdadb31c4       ingress-nginx-admission-patch-8sjph
	286779fd868c7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   10 minutes ago      Exited              create                    0                   7da21d0210ad6       ingress-nginx-admission-create-xkfkw
	b26ab16b48a3f       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             10 minutes ago      Running             local-path-provisioner    0                   eef37c081b698       local-path-provisioner-86d989889c-4cmx8
	d54164e21bd9a       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        10 minutes ago      Running             metrics-server            0                   05bebeb94a32a       metrics-server-8988944d9-9tplt
	5e8b2ab20b747       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             11 minutes ago      Running             minikube-ingress-dns      0                   f8ec6947dca5b       kube-ingress-dns-minikube
	15a0a245e481d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             11 minutes ago      Running             storage-provisioner       0                   caa68615ea586       storage-provisioner
	edffa46b48365       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             11 minutes ago      Running             coredns                   0                   78300c884569d       coredns-6f6b679f8f-t9nhw
	e6b94afd2073c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             11 minutes ago      Running             kube-proxy                0                   2f0f516b497de       kube-proxy-lgcxw
	3a9bf9036a456       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             11 minutes ago      Running             etcd                      0                   5b90ade16a1ec       etcd-addons-344587
	46ea401f11d33       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             11 minutes ago      Running             kube-scheduler            0                   bc4dfc643a4f9       kube-scheduler-addons-344587
	79990e8cc7f54       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             11 minutes ago      Running             kube-controller-manager   0                   f9feaafb78b8d       kube-controller-manager-addons-344587
	ca9198782e10b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             11 minutes ago      Running             kube-apiserver            0                   948b38ffb05be       kube-apiserver-addons-344587
	
	
	==> coredns [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1] <==
	[INFO] 10.244.0.7:39145 - 42946 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00005377s
	[INFO] 10.244.0.22:42016 - 45991 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000368592s
	[INFO] 10.244.0.22:34827 - 63066 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000120289s
	[INFO] 10.244.0.22:43077 - 9805 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000099235s
	[INFO] 10.244.0.22:42369 - 39774 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079385s
	[INFO] 10.244.0.22:60024 - 29907 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114811s
	[INFO] 10.244.0.22:45308 - 1618 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000060509s
	[INFO] 10.244.0.22:58816 - 5970 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001084367s
	[INFO] 10.244.0.22:42307 - 58779 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.0009186s
	[INFO] 10.244.0.7:44744 - 64553 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000299877s
	[INFO] 10.244.0.7:44744 - 5421 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000076334s
	[INFO] 10.244.0.7:46191 - 55261 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114362s
	[INFO] 10.244.0.7:46191 - 4319 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000078476s
	[INFO] 10.244.0.7:37623 - 4000 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088295s
	[INFO] 10.244.0.7:37623 - 54189 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000065651s
	[INFO] 10.244.0.7:37785 - 7471 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114015s
	[INFO] 10.244.0.7:37785 - 24365 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110504s
	[INFO] 10.244.0.7:60734 - 39177 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000153016s
	[INFO] 10.244.0.7:60734 - 36925 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00004885s
	[INFO] 10.244.0.7:56476 - 49913 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091243s
	[INFO] 10.244.0.7:56476 - 39675 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038412s
	[INFO] 10.244.0.7:52181 - 34800 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004683s
	[INFO] 10.244.0.7:52181 - 48114 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035557s
	[INFO] 10.244.0.7:44052 - 60911 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000130226s
	[INFO] 10.244.0.7:44052 - 20460 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000063327s
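Note: the NXDOMAIN/NOERROR pairs above are the standard ndots:5 search-path walk for a pod resolving registry.kube-system.svc.cluster.local: the kube-system.svc.cluster.local, svc.cluster.local, and cluster.local suffixes each return NXDOMAIN before the fully qualified name answers NOERROR. A minimal Go sketch that reproduces the final A/AAAA lookups when run inside a pod on this cluster (the name comes from the log; everything else is illustrative):

package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	// The trailing dot marks the name fully qualified, so the resolver
	// skips the search-path expansion that generates the NXDOMAIN lines.
	addrs, err := net.DefaultResolver.LookupIPAddr(ctx,
		"registry.kube-system.svc.cluster.local.")
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range addrs {
		fmt.Println(a.IP)
	}
}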
	
	
	==> describe nodes <==
	Name:               addons-344587
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-344587
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=addons-344587
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_56_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-344587
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:56:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-344587
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:08:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:08:01 +0000   Thu, 29 Aug 2024 18:56:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:08:01 +0000   Thu, 29 Aug 2024 18:56:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:08:01 +0000   Thu, 29 Aug 2024 18:56:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:08:01 +0000   Thu, 29 Aug 2024 18:56:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    addons-344587
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 260355e6785f4e7bb1e92498cafe0432
	  System UUID:                260355e6-785f-4e7b-b1e9-2498cafe0432
	  Boot ID:                    63059b99-f440-429e-a6ac-c800d57acda3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  gcp-auth                    gcp-auth-89d5ffd79-m8795                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-sxjv5    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-t9nhw                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-addons-344587                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-addons-344587                250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-344587       200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-lgcxw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-344587                100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-8988944d9-9tplt              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-4cmx8     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 11m   kube-proxy       
	  Normal  Starting                 11m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m   kubelet          Node addons-344587 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m   kubelet          Node addons-344587 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m   kubelet          Node addons-344587 status is now: NodeHasSufficientPID
	  Normal  NodeReady                11m   kubelet          Node addons-344587 status is now: NodeReady
	  Normal  RegisteredNode           11m   node-controller  Node addons-344587 event: Registered Node addons-344587 in Controller
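Note: the condition block above (MemoryPressure / DiskPressure / PIDPressure / Ready) can also be read programmatically. A hedged client-go sketch that fetches the same conditions for addons-344587, assuming a standard kubeconfig at the default location:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"addons-344587", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Same Type/Status/Reason triplets as the Conditions table above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}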
	
	
	==> dmesg <==
	[  +9.697502] kauditd_printk_skb: 5 callbacks suppressed
	[Aug29 18:57] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.094582] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.090259] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.165082] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.848747] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.168731] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.203781] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.023105] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.195417] kauditd_printk_skb: 3 callbacks suppressed
	[Aug29 18:58] kauditd_printk_skb: 49 callbacks suppressed
	[ +39.477849] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 18:59] kauditd_printk_skb: 2 callbacks suppressed
	[Aug29 19:00] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 19:03] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 19:06] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 19:07] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.472194] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.550952] kauditd_printk_skb: 23 callbacks suppressed
	[  +7.540533] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.552049] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.997754] kauditd_printk_skb: 58 callbacks suppressed
	[  +6.004601] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.537055] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.328650] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459] <==
	{"level":"info","ts":"2024-08-29T18:57:41.313246Z","caller":"traceutil/trace.go:171","msg":"trace[2101895182] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1103; }","duration":"143.80492ms","start":"2024-08-29T18:57:41.169437Z","end":"2024-08-29T18:57:41.313242Z","steps":["trace[2101895182] 'agreement among raft nodes before linearized reading'  (duration: 141.872364ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:57:41.311333Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.784612ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:57:41.313345Z","caller":"traceutil/trace.go:171","msg":"trace[1914043573] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1103; }","duration":"166.776213ms","start":"2024-08-29T18:57:41.146545Z","end":"2024-08-29T18:57:41.313321Z","steps":["trace[1914043573] 'agreement among raft nodes before linearized reading'  (duration: 164.779946ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:57:49.529587Z","caller":"traceutil/trace.go:171","msg":"trace[1099537540] transaction","detail":"{read_only:false; response_revision:1125; number_of_response:1; }","duration":"158.349534ms","start":"2024-08-29T18:57:49.371035Z","end":"2024-08-29T18:57:49.529384Z","steps":["trace[1099537540] 'process raft request'  (duration: 158.202737ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:57:51.987950Z","caller":"traceutil/trace.go:171","msg":"trace[877493432] linearizableReadLoop","detail":"{readStateIndex:1161; appliedIndex:1160; }","duration":"331.031224ms","start":"2024-08-29T18:57:51.656906Z","end":"2024-08-29T18:57:51.987937Z","steps":["trace[877493432] 'read index received'  (duration: 330.742237ms)","trace[877493432] 'applied index is now lower than readState.Index'  (duration: 288.514µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:57:51.988189Z","caller":"traceutil/trace.go:171","msg":"trace[174649903] transaction","detail":"{read_only:false; response_revision:1128; number_of_response:1; }","duration":"450.012484ms","start":"2024-08-29T18:57:51.538165Z","end":"2024-08-29T18:57:51.988178Z","steps":["trace[174649903] 'process raft request'  (duration: 449.679178ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:57:51.988293Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:57:51.538147Z","time spent":"450.082879ms","remote":"127.0.0.1:36120","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1125 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-29T18:57:51.988443Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.528183ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:57:51.988481Z","caller":"traceutil/trace.go:171","msg":"trace[1935103251] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1128; }","duration":"331.569979ms","start":"2024-08-29T18:57:51.656903Z","end":"2024-08-29T18:57:51.988473Z","steps":["trace[1935103251] 'agreement among raft nodes before linearized reading'  (duration: 331.509051ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:57:51.988540Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:57:51.656873Z","time spent":"331.661655ms","remote":"127.0.0.1:36130","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-29T18:57:51.988641Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"318.4247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:57:51.988771Z","caller":"traceutil/trace.go:171","msg":"trace[676173896] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1128; }","duration":"318.553242ms","start":"2024-08-29T18:57:51.670211Z","end":"2024-08-29T18:57:51.988764Z","steps":["trace[676173896] 'agreement among raft nodes before linearized reading'  (duration: 318.41108ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:57:51.988815Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:57:51.670179Z","time spent":"318.622729ms","remote":"127.0.0.1:36130","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-29T18:57:51.988972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.900574ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-8988944d9-9tplt\" ","response":"range_response_count:1 size:4561"}
	{"level":"info","ts":"2024-08-29T18:57:51.989006Z","caller":"traceutil/trace.go:171","msg":"trace[1013009811] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-8988944d9-9tplt; range_end:; response_count:1; response_revision:1128; }","duration":"129.932706ms","start":"2024-08-29T18:57:51.859067Z","end":"2024-08-29T18:57:51.989000Z","steps":["trace[1013009811] 'agreement among raft nodes before linearized reading'  (duration: 129.851129ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:58:02.393148Z","caller":"traceutil/trace.go:171","msg":"trace[728866711] linearizableReadLoop","detail":"{readStateIndex:1216; appliedIndex:1215; }","duration":"245.819709ms","start":"2024-08-29T18:58:02.147307Z","end":"2024-08-29T18:58:02.393126Z","steps":["trace[728866711] 'read index received'  (duration: 245.635911ms)","trace[728866711] 'applied index is now lower than readState.Index'  (duration: 183.347µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:58:02.393518Z","caller":"traceutil/trace.go:171","msg":"trace[873289508] transaction","detail":"{read_only:false; response_revision:1181; number_of_response:1; }","duration":"341.313574ms","start":"2024-08-29T18:58:02.052193Z","end":"2024-08-29T18:58:02.393507Z","steps":["trace[873289508] 'process raft request'  (duration: 340.79613ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:58:02.393725Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:58:02.052166Z","time spent":"341.432789ms","remote":"127.0.0.1:36120","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1176 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-29T18:58:02.393897Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.589501ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:58:02.394190Z","caller":"traceutil/trace.go:171","msg":"trace[902699692] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1181; }","duration":"246.879248ms","start":"2024-08-29T18:58:02.147300Z","end":"2024-08-29T18:58:02.394179Z","steps":["trace[902699692] 'agreement among raft nodes before linearized reading'  (duration: 246.576816ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:58:02.394293Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.664177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:58:02.395198Z","caller":"traceutil/trace.go:171","msg":"trace[135783897] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1181; }","duration":"151.567211ms","start":"2024-08-29T18:58:02.243617Z","end":"2024-08-29T18:58:02.395184Z","steps":["trace[135783897] 'agreement among raft nodes before linearized reading'  (duration: 150.639245ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T19:06:24.607992Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1539}
	{"level":"info","ts":"2024-08-29T19:06:24.642194Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1539,"took":"33.69634ms","hash":279284985,"current-db-size-bytes":6299648,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3297280,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-08-29T19:06:24.642264Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":279284985,"revision":1539,"compact-revision":-1}
	
	
	==> gcp-auth [e9c8e7bcbfebef995192df5d64f603c9d08c91b1a799ccb037398d00858d33ba] <==
	2024/08/29 18:58:45 Ready to write response ...
	2024/08/29 18:58:45 Ready to marshal response ...
	2024/08/29 18:58:45 Ready to write response ...
	2024/08/29 18:58:45 Ready to marshal response ...
	2024/08/29 18:58:45 Ready to write response ...
	2024/08/29 19:06:55 Ready to marshal response ...
	2024/08/29 19:06:55 Ready to write response ...
	2024/08/29 19:06:59 Ready to marshal response ...
	2024/08/29 19:06:59 Ready to write response ...
	2024/08/29 19:07:05 Ready to marshal response ...
	2024/08/29 19:07:05 Ready to write response ...
	2024/08/29 19:07:23 Ready to marshal response ...
	2024/08/29 19:07:23 Ready to write response ...
	2024/08/29 19:07:27 Ready to marshal response ...
	2024/08/29 19:07:27 Ready to write response ...
	2024/08/29 19:07:27 Ready to marshal response ...
	2024/08/29 19:07:27 Ready to write response ...
	2024/08/29 19:07:39 Ready to marshal response ...
	2024/08/29 19:07:39 Ready to write response ...
	2024/08/29 19:07:45 Ready to marshal response ...
	2024/08/29 19:07:45 Ready to write response ...
	2024/08/29 19:07:45 Ready to marshal response ...
	2024/08/29 19:07:45 Ready to write response ...
	2024/08/29 19:07:45 Ready to marshal response ...
	2024/08/29 19:07:45 Ready to write response ...
	
	
	==> kernel <==
	 19:08:01 up 12 min,  0 users,  load average: 2.14, 1.04, 0.61
	Linux addons-344587 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24] <==
	E0829 18:58:14.620591       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.126.79:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.126.79:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.126.79:443: i/o timeout" logger="UnhandledError"
	E0829 18:58:14.621003       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0829 18:58:14.652298       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0829 18:58:14.653137       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0829 19:06:55.073349       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0829 19:06:56.105761       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0829 19:07:09.910156       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0829 19:07:38.524327       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:07:38.524397       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:07:38.583728       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:07:38.583790       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:07:38.603042       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:07:38.603098       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:07:38.701294       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:07:38.701595       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:07:38.708794       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:07:38.708838       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0829 19:07:39.703951       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0829 19:07:39.709278       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0829 19:07:39.726588       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0829 19:07:45.314768       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.44.227"}
	E0829 19:07:55.404266       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24] <==
	E0829 19:07:45.371401       1 replica_set.go:560] "Unhandled Error" err="sync \"headlamp/headlamp-57fb76fcdb\" failed with pods \"headlamp-57fb76fcdb-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found" logger="UnhandledError"
	I0829 19:07:45.416572       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="43.213545ms"
	I0829 19:07:45.427039       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="10.240594ms"
	I0829 19:07:45.427184       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="99.828µs"
	I0829 19:07:45.432412       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="59.348µs"
	W0829 19:07:47.657838       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:07:47.657872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:07:49.091274       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:07:49.091311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:07:49.181657       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:07:49.181750       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 19:07:52.178477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="72.701µs"
	I0829 19:07:52.211133       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="16.724286ms"
	I0829 19:07:52.211221       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="39.559µs"
	W0829 19:07:54.654250       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:07:54.654424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:07:55.522356       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:07:55.522451       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:07:58.644326       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:07:58.644374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 19:07:58.898530       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="65.331µs"
	I0829 19:08:00.273256       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="10.652µs"
	I0829 19:08:01.397854       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-344587"
	W0829 19:08:01.420394       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:08:01.420435       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 18:56:34.273624       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 18:56:34.316749       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.172"]
	E0829 18:56:34.319592       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:56:34.449783       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:56:34.449822       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:56:34.449854       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:56:34.453213       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:56:34.453462       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:56:34.453493       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:56:34.455243       1 config.go:197] "Starting service config controller"
	I0829 18:56:34.455281       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:56:34.455307       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:56:34.455311       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:56:34.455326       1 config.go:326] "Starting node config controller"
	I0829 18:56:34.455330       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:56:34.555742       1 shared_informer.go:320] Caches are synced for node config
	I0829 18:56:34.555769       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:56:34.555789       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef] <==
	W0829 18:56:25.839581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:56:25.839612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:25.839779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:56:25.839809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:25.839854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:25.839956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:25.839906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:25.840087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:25.844097       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 18:56:25.844135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:26.723397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 18:56:26.723437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:26.726980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:26.727065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:26.764739       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:26.764937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:26.775798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:26.775919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:26.923490       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:56:26.923522       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0829 18:56:26.981178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:56:26.981308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:27.115445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:27.115543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0829 18:56:29.434203       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 19:07:59 addons-344587 kubelet[1210]: I0829 19:07:59.299312    1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5858e660-705d-4449-9523-2c3b39c58625-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5858e660-705d-4449-9523-2c3b39c58625" (UID: "5858e660-705d-4449-9523-2c3b39c58625"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 29 19:07:59 addons-344587 kubelet[1210]: I0829 19:07:59.316243    1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5858e660-705d-4449-9523-2c3b39c58625-kube-api-access-vhfls" (OuterVolumeSpecName: "kube-api-access-vhfls") pod "5858e660-705d-4449-9523-2c3b39c58625" (UID: "5858e660-705d-4449-9523-2c3b39c58625"). InnerVolumeSpecName "kube-api-access-vhfls". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 19:07:59 addons-344587 kubelet[1210]: I0829 19:07:59.400097    1210 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5858e660-705d-4449-9523-2c3b39c58625-gcp-creds\") on node \"addons-344587\" DevicePath \"\""
	Aug 29 19:07:59 addons-344587 kubelet[1210]: I0829 19:07:59.400135    1210 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vhfls\" (UniqueName: \"kubernetes.io/projected/5858e660-705d-4449-9523-2c3b39c58625-kube-api-access-vhfls\") on node \"addons-344587\" DevicePath \"\""
	Aug 29 19:08:00 addons-344587 kubelet[1210]: I0829 19:08:00.224100    1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5858e660-705d-4449-9523-2c3b39c58625" path="/var/lib/kubelet/pods/5858e660-705d-4449-9523-2c3b39c58625/volumes"
	Aug 29 19:08:00 addons-344587 kubelet[1210]: I0829 19:08:00.606113    1210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bf81d777-8ccf-470f-b289-7e3ae7e12be4-gcp-creds\") pod \"bf81d777-8ccf-470f-b289-7e3ae7e12be4\" (UID: \"bf81d777-8ccf-470f-b289-7e3ae7e12be4\") "
	Aug 29 19:08:00 addons-344587 kubelet[1210]: I0829 19:08:00.606182    1210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krdng\" (UniqueName: \"kubernetes.io/projected/bf81d777-8ccf-470f-b289-7e3ae7e12be4-kube-api-access-krdng\") pod \"bf81d777-8ccf-470f-b289-7e3ae7e12be4\" (UID: \"bf81d777-8ccf-470f-b289-7e3ae7e12be4\") "
	Aug 29 19:08:00 addons-344587 kubelet[1210]: I0829 19:08:00.606536    1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf81d777-8ccf-470f-b289-7e3ae7e12be4-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "bf81d777-8ccf-470f-b289-7e3ae7e12be4" (UID: "bf81d777-8ccf-470f-b289-7e3ae7e12be4"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 29 19:08:00 addons-344587 kubelet[1210]: I0829 19:08:00.612044    1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf81d777-8ccf-470f-b289-7e3ae7e12be4-kube-api-access-krdng" (OuterVolumeSpecName: "kube-api-access-krdng") pod "bf81d777-8ccf-470f-b289-7e3ae7e12be4" (UID: "bf81d777-8ccf-470f-b289-7e3ae7e12be4"). InnerVolumeSpecName "kube-api-access-krdng". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 19:08:00 addons-344587 kubelet[1210]: I0829 19:08:00.706955    1210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw964\" (UniqueName: \"kubernetes.io/projected/074412f0-2988-4497-a2bb-abd86ddc18ab-kube-api-access-qw964\") pod \"074412f0-2988-4497-a2bb-abd86ddc18ab\" (UID: \"074412f0-2988-4497-a2bb-abd86ddc18ab\") "
	Aug 29 19:08:00 addons-344587 kubelet[1210]: I0829 19:08:00.707038    1210 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bf81d777-8ccf-470f-b289-7e3ae7e12be4-gcp-creds\") on node \"addons-344587\" DevicePath \"\""
	Aug 29 19:08:00 addons-344587 kubelet[1210]: I0829 19:08:00.707050    1210 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-krdng\" (UniqueName: \"kubernetes.io/projected/bf81d777-8ccf-470f-b289-7e3ae7e12be4-kube-api-access-krdng\") on node \"addons-344587\" DevicePath \"\""
	Aug 29 19:08:00 addons-344587 kubelet[1210]: I0829 19:08:00.712228    1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/074412f0-2988-4497-a2bb-abd86ddc18ab-kube-api-access-qw964" (OuterVolumeSpecName: "kube-api-access-qw964") pod "074412f0-2988-4497-a2bb-abd86ddc18ab" (UID: "074412f0-2988-4497-a2bb-abd86ddc18ab"). InnerVolumeSpecName "kube-api-access-qw964". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 19:08:00 addons-344587 kubelet[1210]: I0829 19:08:00.807940    1210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-299cn\" (UniqueName: \"kubernetes.io/projected/45f795aa-aca5-41b5-a455-89b285ce9531-kube-api-access-299cn\") pod \"45f795aa-aca5-41b5-a455-89b285ce9531\" (UID: \"45f795aa-aca5-41b5-a455-89b285ce9531\") "
	Aug 29 19:08:00 addons-344587 kubelet[1210]: I0829 19:08:00.808030    1210 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qw964\" (UniqueName: \"kubernetes.io/projected/074412f0-2988-4497-a2bb-abd86ddc18ab-kube-api-access-qw964\") on node \"addons-344587\" DevicePath \"\""
	Aug 29 19:08:00 addons-344587 kubelet[1210]: I0829 19:08:00.811969    1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f795aa-aca5-41b5-a455-89b285ce9531-kube-api-access-299cn" (OuterVolumeSpecName: "kube-api-access-299cn") pod "45f795aa-aca5-41b5-a455-89b285ce9531" (UID: "45f795aa-aca5-41b5-a455-89b285ce9531"). InnerVolumeSpecName "kube-api-access-299cn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 19:08:00 addons-344587 kubelet[1210]: I0829 19:08:00.909277    1210 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-299cn\" (UniqueName: \"kubernetes.io/projected/45f795aa-aca5-41b5-a455-89b285ce9531-kube-api-access-299cn\") on node \"addons-344587\" DevicePath \"\""
	Aug 29 19:08:01 addons-344587 kubelet[1210]: I0829 19:08:01.222500    1210 scope.go:117] "RemoveContainer" containerID="8fabb2435690616b9b427709618315f5226242c377e7b1353b637dfec156cfc0"
	Aug 29 19:08:01 addons-344587 kubelet[1210]: I0829 19:08:01.280157    1210 scope.go:117] "RemoveContainer" containerID="8fabb2435690616b9b427709618315f5226242c377e7b1353b637dfec156cfc0"
	Aug 29 19:08:01 addons-344587 kubelet[1210]: E0829 19:08:01.281280    1210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fabb2435690616b9b427709618315f5226242c377e7b1353b637dfec156cfc0\": container with ID starting with 8fabb2435690616b9b427709618315f5226242c377e7b1353b637dfec156cfc0 not found: ID does not exist" containerID="8fabb2435690616b9b427709618315f5226242c377e7b1353b637dfec156cfc0"
	Aug 29 19:08:01 addons-344587 kubelet[1210]: I0829 19:08:01.281313    1210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fabb2435690616b9b427709618315f5226242c377e7b1353b637dfec156cfc0"} err="failed to get container status \"8fabb2435690616b9b427709618315f5226242c377e7b1353b637dfec156cfc0\": rpc error: code = NotFound desc = could not find container \"8fabb2435690616b9b427709618315f5226242c377e7b1353b637dfec156cfc0\": container with ID starting with 8fabb2435690616b9b427709618315f5226242c377e7b1353b637dfec156cfc0 not found: ID does not exist"
	Aug 29 19:08:01 addons-344587 kubelet[1210]: I0829 19:08:01.281336    1210 scope.go:117] "RemoveContainer" containerID="a73e7afd2423fa4c8e14303e620a94e6cf90ad2d3aa01c34ecd038e34b3ed24a"
	Aug 29 19:08:01 addons-344587 kubelet[1210]: I0829 19:08:01.326920    1210 scope.go:117] "RemoveContainer" containerID="a73e7afd2423fa4c8e14303e620a94e6cf90ad2d3aa01c34ecd038e34b3ed24a"
	Aug 29 19:08:01 addons-344587 kubelet[1210]: E0829 19:08:01.327767    1210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a73e7afd2423fa4c8e14303e620a94e6cf90ad2d3aa01c34ecd038e34b3ed24a\": container with ID starting with a73e7afd2423fa4c8e14303e620a94e6cf90ad2d3aa01c34ecd038e34b3ed24a not found: ID does not exist" containerID="a73e7afd2423fa4c8e14303e620a94e6cf90ad2d3aa01c34ecd038e34b3ed24a"
	Aug 29 19:08:01 addons-344587 kubelet[1210]: I0829 19:08:01.327862    1210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73e7afd2423fa4c8e14303e620a94e6cf90ad2d3aa01c34ecd038e34b3ed24a"} err="failed to get container status \"a73e7afd2423fa4c8e14303e620a94e6cf90ad2d3aa01c34ecd038e34b3ed24a\": rpc error: code = NotFound desc = could not find container \"a73e7afd2423fa4c8e14303e620a94e6cf90ad2d3aa01c34ecd038e34b3ed24a\": container with ID starting with a73e7afd2423fa4c8e14303e620a94e6cf90ad2d3aa01c34ecd038e34b3ed24a not found: ID does not exist"
	
	
	==> storage-provisioner [15a0a245e481d700614dae7bfc6d7d4bbe9f074615c88e1373f761abb63e08cc] <==
	I0829 18:56:42.218258       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:56:42.884144       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:56:42.967214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:56:43.042082       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:56:43.043164       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aa6f651c-dee9-4c5c-bb08-efe5aaec9d98", APIVersion:"v1", ResourceVersion:"717", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-344587_b654bfc4-1e7a-4b37-abe2-9c326f1dacc1 became leader
	I0829 18:56:43.043203       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-344587_b654bfc4-1e7a-4b37-abe2-9c326f1dacc1!
	I0829 18:56:43.212968       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-344587_b654bfc4-1e7a-4b37-abe2-9c326f1dacc1!
	

-- /stdout --
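One item worth flagging from the kube-proxy log in the dump above: the "Error cleaning up nftables rules ... Operation not supported" messages. On startup kube-proxy tries to remove any leftover nftables state, and on a guest kernel without nf_tables support that cleanup fails noisily; it appears harmless here, since the subsequent "Using iptables Proxier" line shows the proxy fell back to iptables and its caches synced. A quick, hedged way to confirm the kernel side by hand (profile name taken from this run; assumes lsmod is available in the minikube guest, as it is in the standard ISO):

	# Check whether the guest kernel has the nf_tables module at all
	minikube -p addons-344587 ssh "lsmod | grep nf_tables || echo 'nf_tables not loaded'"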
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-344587 -n addons-344587
helpers_test.go:261: (dbg) Run:  kubectl --context addons-344587 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-xkfkw ingress-nginx-admission-patch-8sjph
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-344587 describe pod busybox ingress-nginx-admission-create-xkfkw ingress-nginx-admission-patch-8sjph
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-344587 describe pod busybox ingress-nginx-admission-create-xkfkw ingress-nginx-admission-patch-8sjph: exit status 1 (67.144094ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-344587/192.168.39.172
	Start Time:       Thu, 29 Aug 2024 18:58:45 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bb56t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bb56t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-344587
	  Normal   Pulling    7m59s (x4 over 9m16s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m59s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m59s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m31s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m13s (x20 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xkfkw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8sjph" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-344587 describe pod busybox ingress-nginx-admission-create-xkfkw ingress-nginx-admission-patch-8sjph: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.22s)
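Reading the describe output above, the registry smoke test could not complete because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc failed with "unable to retrieve auth token: invalid username/password", leaving the busybox pod in ImagePullBackOff for the entire run. A minimal sketch for checking this by hand against the same cluster, using only read-only kubectl commands (context name taken from this run):

	# Show the pod's events, including the ErrImagePull / ImagePullBackOff reasons
	kubectl --context addons-344587 describe pod busybox -n default

	# Or list just the events for that pod, newest last
	kubectl --context addons-344587 get events -n default \
	  --field-selector involvedObject.name=busybox --sort-by=.lastTimestamp

One plausible culprit, given the this_is_fake project values injected into the pod's environment above, is the gcp-auth addon attaching fake GCP credentials to the pull, turning an otherwise anonymous pull of a public image into a failing authenticated one.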

TestAddons/parallel/Ingress (153.03s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-344587 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-344587 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-344587 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [315329e4-6bc2-4164-a37f-2d9d7857eba1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [315329e4-6bc2-4164-a37f-2d9d7857eba1] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.007290003s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-344587 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.037008929s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
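Exit status 28 from the ssh'd command is curl's "operation timed out" code, i.e. nothing answered on 127.0.0.1:80 inside the VM before the timeout. A hedged manual reproduction, using only standard curl/kubectl/minikube flags and the profile name from this run:

	# Re-run the probe with verbose output and an explicit short timeout
	minikube -p addons-344587 ssh "curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"

	# Confirm the controller, ingress object, and backend pod are all present
	kubectl --context addons-344587 -n ingress-nginx get pods,svc
	kubectl --context addons-344587 get ingress -A
	kubectl --context addons-344587 get pods -l run=nginx

If the controller pod is Running but curl still times out, the next thing to check is whether the controller is actually bound to the node's port 80.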
addons_test.go:288: (dbg) Run:  kubectl --context addons-344587 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.172
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-344587 addons disable ingress-dns --alsologtostderr -v=1: (1.154919509s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-344587 addons disable ingress --alsologtostderr -v=1: (7.756103334s)
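A side note on the ingress-dns steps above: the addon works by serving DNS from the node itself, so the nslookup at addons_test.go:299 deliberately queries the node IP (192.168.39.172) as the DNS server instead of relying on /etc/resolv.conf. A minimal manual equivalent, assuming the same profile:

	# Resolve the test hostname against the ingress-dns server on the node
	nslookup hello-john.test "$(minikube -p addons-344587 ip)"

Since that lookup succeeded while the in-VM curl timed out, name resolution and the Ingress resource look fine; the failure is isolated to the HTTP path through the controller.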
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-344587 -n addons-344587
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-344587 logs -n 25: (1.281740962s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-273933                                                                     | download-only-273933 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| delete  | -p download-only-800504                                                                     | download-only-800504 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| delete  | -p download-only-273933                                                                     | download-only-273933 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-124601 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | binary-mirror-124601                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41153                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-124601                                                                     | binary-mirror-124601 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| addons  | enable dashboard -p                                                                         | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | addons-344587                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | addons-344587                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-344587 --wait=true                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:58 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:06 UTC | 29 Aug 24 19:07 UTC |
	|         | addons-344587                                                                               |                      |         |         |                     |                     |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | -p addons-344587                                                                            |                      |         |         |                     |                     |
	| addons  | addons-344587 addons                                                                        | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-344587 addons                                                                        | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-344587 ssh cat                                                                       | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | /opt/local-path-provisioner/pvc-d653ba56-6232-4797-9e26-74b3f827dc87_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | addons-344587                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | -p addons-344587                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:08 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-344587 ip                                                                            | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:08 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-344587 ssh curl -s                                                                   | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-344587 ip                                                                            | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:10 UTC | 29 Aug 24 19:10 UTC |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:10 UTC | 29 Aug 24 19:10 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:10 UTC | 29 Aug 24 19:10 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:55:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:55:50.381982   18990 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:55:50.382091   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:50.382099   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:55:50.382103   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:50.382261   18990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 18:55:50.382847   18990 out.go:352] Setting JSON to false
	I0829 18:55:50.383602   18990 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2297,"bootTime":1724955453,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:55:50.383652   18990 start.go:139] virtualization: kvm guest
	I0829 18:55:50.385939   18990 out.go:177] * [addons-344587] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:55:50.387376   18990 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 18:55:50.387387   18990 notify.go:220] Checking for updates...
	I0829 18:55:50.389960   18990 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:55:50.391173   18990 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 18:55:50.392418   18990 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 18:55:50.393615   18990 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:55:50.394904   18990 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:55:50.396433   18990 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:55:50.428475   18990 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 18:55:50.429854   18990 start.go:297] selected driver: kvm2
	I0829 18:55:50.429864   18990 start.go:901] validating driver "kvm2" against <nil>
	I0829 18:55:50.429873   18990 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:55:50.430509   18990 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:55:50.430589   18990 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:55:50.444888   18990 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:55:50.444932   18990 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:55:50.445130   18990 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:55:50.445196   18990 cni.go:84] Creating CNI manager for ""
	I0829 18:55:50.445212   18990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:55:50.445222   18990 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:55:50.445293   18990 start.go:340] cluster config:
	{Name:addons-344587 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-344587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:55:50.445402   18990 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:55:50.447108   18990 out.go:177] * Starting "addons-344587" primary control-plane node in "addons-344587" cluster
	I0829 18:55:50.448355   18990 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:55:50.448396   18990 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:55:50.448405   18990 cache.go:56] Caching tarball of preloaded images
	I0829 18:55:50.448475   18990 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:55:50.448487   18990 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:55:50.448826   18990 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/config.json ...
	I0829 18:55:50.448852   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/config.json: {Name:mkbebd6be4c06f31a480a2816ef4d17f65638f42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
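
	The profile config written here is plain JSON. As a spot-check sketch, assuming jq is available on the host and the JSON field names mirror the Go struct dumped above:

	    # Spot-check the persisted profile (paths as logged above)
	    jq '.Driver, .KubernetesConfig.KubernetesVersion' \
	      /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/config.json
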
	I0829 18:55:50.448990   18990 start.go:360] acquireMachinesLock for addons-344587: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:55:50.449049   18990 start.go:364] duration metric: took 44.089µs to acquireMachinesLock for "addons-344587"
	I0829 18:55:50.449073   18990 start.go:93] Provisioning new machine with config: &{Name:addons-344587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-344587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:55:50.449138   18990 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 18:55:50.450643   18990 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0829 18:55:50.450772   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:55:50.450820   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:55:50.464579   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36837
	I0829 18:55:50.464968   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:55:50.465424   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:55:50.465444   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:55:50.465798   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:55:50.465987   18990 main.go:141] libmachine: (addons-344587) Calling .GetMachineName
	I0829 18:55:50.466159   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:55:50.466300   18990 start.go:159] libmachine.API.Create for "addons-344587" (driver="kvm2")
	I0829 18:55:50.466328   18990 client.go:168] LocalClient.Create starting
	I0829 18:55:50.466375   18990 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem
	I0829 18:55:50.795899   18990 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem
	I0829 18:55:50.842743   18990 main.go:141] libmachine: Running pre-create checks...
	I0829 18:55:50.842764   18990 main.go:141] libmachine: (addons-344587) Calling .PreCreateCheck
	I0829 18:55:50.843261   18990 main.go:141] libmachine: (addons-344587) Calling .GetConfigRaw
	I0829 18:55:50.843665   18990 main.go:141] libmachine: Creating machine...
	I0829 18:55:50.843678   18990 main.go:141] libmachine: (addons-344587) Calling .Create
	I0829 18:55:50.843802   18990 main.go:141] libmachine: (addons-344587) Creating KVM machine...
	I0829 18:55:50.844841   18990 main.go:141] libmachine: (addons-344587) DBG | found existing default KVM network
	I0829 18:55:50.845576   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:50.845449   19012 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0829 18:55:50.845599   18990 main.go:141] libmachine: (addons-344587) DBG | created network xml: 
	I0829 18:55:50.845612   18990 main.go:141] libmachine: (addons-344587) DBG | <network>
	I0829 18:55:50.845626   18990 main.go:141] libmachine: (addons-344587) DBG |   <name>mk-addons-344587</name>
	I0829 18:55:50.845668   18990 main.go:141] libmachine: (addons-344587) DBG |   <dns enable='no'/>
	I0829 18:55:50.845695   18990 main.go:141] libmachine: (addons-344587) DBG |   
	I0829 18:55:50.845709   18990 main.go:141] libmachine: (addons-344587) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0829 18:55:50.845719   18990 main.go:141] libmachine: (addons-344587) DBG |     <dhcp>
	I0829 18:55:50.845731   18990 main.go:141] libmachine: (addons-344587) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0829 18:55:50.845742   18990 main.go:141] libmachine: (addons-344587) DBG |     </dhcp>
	I0829 18:55:50.845753   18990 main.go:141] libmachine: (addons-344587) DBG |   </ip>
	I0829 18:55:50.845762   18990 main.go:141] libmachine: (addons-344587) DBG |   
	I0829 18:55:50.845771   18990 main.go:141] libmachine: (addons-344587) DBG | </network>
	I0829 18:55:50.845781   18990 main.go:141] libmachine: (addons-344587) DBG | 
	I0829 18:55:50.850798   18990 main.go:141] libmachine: (addons-344587) DBG | trying to create private KVM network mk-addons-344587 192.168.39.0/24...
	I0829 18:55:50.914004   18990 main.go:141] libmachine: (addons-344587) DBG | private KVM network mk-addons-344587 192.168.39.0/24 created
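
	The network XML dumped above is ordinary libvirt network XML. As a sketch of the equivalent manual steps, assuming the XML is saved to mk-addons-344587.xml on the host:

	    # Define and start the private network, then confirm it is active
	    virsh --connect qemu:///system net-define mk-addons-344587.xml
	    virsh --connect qemu:///system net-start mk-addons-344587
	    virsh --connect qemu:///system net-list --all
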
	I0829 18:55:50.914032   18990 main.go:141] libmachine: (addons-344587) Setting up store path in /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587 ...
	I0829 18:55:50.914058   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:50.913976   19012 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 18:55:50.914082   18990 main.go:141] libmachine: (addons-344587) Building disk image from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0829 18:55:50.914101   18990 main.go:141] libmachine: (addons-344587) Downloading /home/jenkins/minikube-integration/19530-11185/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0829 18:55:51.165621   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:51.165525   19012 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa...
	I0829 18:55:51.361310   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:51.361174   19012 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/addons-344587.rawdisk...
	I0829 18:55:51.361334   18990 main.go:141] libmachine: (addons-344587) DBG | Writing magic tar header
	I0829 18:55:51.361345   18990 main.go:141] libmachine: (addons-344587) DBG | Writing SSH key tar header
	I0829 18:55:51.361360   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:51.361285   19012 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587 ...
	I0829 18:55:51.361376   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587
	I0829 18:55:51.361413   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587 (perms=drwx------)
	I0829 18:55:51.361435   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines (perms=drwxr-xr-x)
	I0829 18:55:51.361442   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube (perms=drwxr-xr-x)
	I0829 18:55:51.361449   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines
	I0829 18:55:51.361457   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 18:55:51.361462   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185
	I0829 18:55:51.361470   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 18:55:51.361480   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins
	I0829 18:55:51.361487   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home
	I0829 18:55:51.361492   18990 main.go:141] libmachine: (addons-344587) DBG | Skipping /home - not owner
	I0829 18:55:51.361519   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185 (perms=drwxrwxr-x)
	I0829 18:55:51.361543   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 18:55:51.361552   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 18:55:51.361557   18990 main.go:141] libmachine: (addons-344587) Creating domain...
	I0829 18:55:51.362695   18990 main.go:141] libmachine: (addons-344587) define libvirt domain using xml: 
	I0829 18:55:51.362720   18990 main.go:141] libmachine: (addons-344587) <domain type='kvm'>
	I0829 18:55:51.362728   18990 main.go:141] libmachine: (addons-344587)   <name>addons-344587</name>
	I0829 18:55:51.362733   18990 main.go:141] libmachine: (addons-344587)   <memory unit='MiB'>4000</memory>
	I0829 18:55:51.362739   18990 main.go:141] libmachine: (addons-344587)   <vcpu>2</vcpu>
	I0829 18:55:51.362743   18990 main.go:141] libmachine: (addons-344587)   <features>
	I0829 18:55:51.362748   18990 main.go:141] libmachine: (addons-344587)     <acpi/>
	I0829 18:55:51.362755   18990 main.go:141] libmachine: (addons-344587)     <apic/>
	I0829 18:55:51.362760   18990 main.go:141] libmachine: (addons-344587)     <pae/>
	I0829 18:55:51.362764   18990 main.go:141] libmachine: (addons-344587)     
	I0829 18:55:51.362770   18990 main.go:141] libmachine: (addons-344587)   </features>
	I0829 18:55:51.362775   18990 main.go:141] libmachine: (addons-344587)   <cpu mode='host-passthrough'>
	I0829 18:55:51.362780   18990 main.go:141] libmachine: (addons-344587)   
	I0829 18:55:51.362786   18990 main.go:141] libmachine: (addons-344587)   </cpu>
	I0829 18:55:51.362794   18990 main.go:141] libmachine: (addons-344587)   <os>
	I0829 18:55:51.362799   18990 main.go:141] libmachine: (addons-344587)     <type>hvm</type>
	I0829 18:55:51.362807   18990 main.go:141] libmachine: (addons-344587)     <boot dev='cdrom'/>
	I0829 18:55:51.362812   18990 main.go:141] libmachine: (addons-344587)     <boot dev='hd'/>
	I0829 18:55:51.362820   18990 main.go:141] libmachine: (addons-344587)     <bootmenu enable='no'/>
	I0829 18:55:51.362826   18990 main.go:141] libmachine: (addons-344587)   </os>
	I0829 18:55:51.362855   18990 main.go:141] libmachine: (addons-344587)   <devices>
	I0829 18:55:51.362878   18990 main.go:141] libmachine: (addons-344587)     <disk type='file' device='cdrom'>
	I0829 18:55:51.362903   18990 main.go:141] libmachine: (addons-344587)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/boot2docker.iso'/>
	I0829 18:55:51.362920   18990 main.go:141] libmachine: (addons-344587)       <target dev='hdc' bus='scsi'/>
	I0829 18:55:51.362957   18990 main.go:141] libmachine: (addons-344587)       <readonly/>
	I0829 18:55:51.362969   18990 main.go:141] libmachine: (addons-344587)     </disk>
	I0829 18:55:51.362980   18990 main.go:141] libmachine: (addons-344587)     <disk type='file' device='disk'>
	I0829 18:55:51.362995   18990 main.go:141] libmachine: (addons-344587)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 18:55:51.363011   18990 main.go:141] libmachine: (addons-344587)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/addons-344587.rawdisk'/>
	I0829 18:55:51.363023   18990 main.go:141] libmachine: (addons-344587)       <target dev='hda' bus='virtio'/>
	I0829 18:55:51.363036   18990 main.go:141] libmachine: (addons-344587)     </disk>
	I0829 18:55:51.363048   18990 main.go:141] libmachine: (addons-344587)     <interface type='network'>
	I0829 18:55:51.363068   18990 main.go:141] libmachine: (addons-344587)       <source network='mk-addons-344587'/>
	I0829 18:55:51.363090   18990 main.go:141] libmachine: (addons-344587)       <model type='virtio'/>
	I0829 18:55:51.363098   18990 main.go:141] libmachine: (addons-344587)     </interface>
	I0829 18:55:51.363103   18990 main.go:141] libmachine: (addons-344587)     <interface type='network'>
	I0829 18:55:51.363119   18990 main.go:141] libmachine: (addons-344587)       <source network='default'/>
	I0829 18:55:51.363133   18990 main.go:141] libmachine: (addons-344587)       <model type='virtio'/>
	I0829 18:55:51.363144   18990 main.go:141] libmachine: (addons-344587)     </interface>
	I0829 18:55:51.363151   18990 main.go:141] libmachine: (addons-344587)     <serial type='pty'>
	I0829 18:55:51.363157   18990 main.go:141] libmachine: (addons-344587)       <target port='0'/>
	I0829 18:55:51.363165   18990 main.go:141] libmachine: (addons-344587)     </serial>
	I0829 18:55:51.363192   18990 main.go:141] libmachine: (addons-344587)     <console type='pty'>
	I0829 18:55:51.363222   18990 main.go:141] libmachine: (addons-344587)       <target type='serial' port='0'/>
	I0829 18:55:51.363237   18990 main.go:141] libmachine: (addons-344587)     </console>
	I0829 18:55:51.363245   18990 main.go:141] libmachine: (addons-344587)     <rng model='virtio'>
	I0829 18:55:51.363258   18990 main.go:141] libmachine: (addons-344587)       <backend model='random'>/dev/random</backend>
	I0829 18:55:51.363267   18990 main.go:141] libmachine: (addons-344587)     </rng>
	I0829 18:55:51.363279   18990 main.go:141] libmachine: (addons-344587)     
	I0829 18:55:51.363289   18990 main.go:141] libmachine: (addons-344587)     
	I0829 18:55:51.363301   18990 main.go:141] libmachine: (addons-344587)   </devices>
	I0829 18:55:51.363319   18990 main.go:141] libmachine: (addons-344587) </domain>
	I0829 18:55:51.363334   18990 main.go:141] libmachine: (addons-344587) 
	I0829 18:55:51.369959   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:1d:b5:8e in network default
	I0829 18:55:51.370417   18990 main.go:141] libmachine: (addons-344587) Ensuring networks are active...
	I0829 18:55:51.370435   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:51.371026   18990 main.go:141] libmachine: (addons-344587) Ensuring network default is active
	I0829 18:55:51.371287   18990 main.go:141] libmachine: (addons-344587) Ensuring network mk-addons-344587 is active
	I0829 18:55:51.372284   18990 main.go:141] libmachine: (addons-344587) Getting domain xml...
	I0829 18:55:51.372893   18990 main.go:141] libmachine: (addons-344587) Creating domain...
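
	Likewise, the domain XML above can be fed to libvirt directly. A minimal sketch, assuming it is saved to addons-344587.xml:

	    # Define the VM from the logged XML and boot it from the attached ISO
	    virsh --connect qemu:///system define addons-344587.xml
	    virsh --connect qemu:///system start addons-344587
	    virsh --connect qemu:///system domstate addons-344587
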
	I0829 18:55:52.746079   18990 main.go:141] libmachine: (addons-344587) Waiting to get IP...
	I0829 18:55:52.746802   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:52.747139   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:52.747169   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:52.747092   19012 retry.go:31] will retry after 281.547466ms: waiting for machine to come up
	I0829 18:55:53.030572   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:53.031020   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:53.031046   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:53.030987   19012 retry.go:31] will retry after 320.244389ms: waiting for machine to come up
	I0829 18:55:53.352319   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:53.352723   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:53.352751   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:53.352677   19012 retry.go:31] will retry after 475.897243ms: waiting for machine to come up
	I0829 18:55:53.830271   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:53.830799   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:53.830826   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:53.830758   19012 retry.go:31] will retry after 415.393917ms: waiting for machine to come up
	I0829 18:55:54.247242   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:54.247686   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:54.247722   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:54.247646   19012 retry.go:31] will retry after 663.283802ms: waiting for machine to come up
	I0829 18:55:54.912468   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:54.912891   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:54.912917   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:54.912861   19012 retry.go:31] will retry after 823.255008ms: waiting for machine to come up
	I0829 18:55:55.737292   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:55.737672   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:55.737702   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:55.737654   19012 retry.go:31] will retry after 924.09927ms: waiting for machine to come up
	I0829 18:55:56.663683   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:56.664092   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:56.664117   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:56.664046   19012 retry.go:31] will retry after 1.475206367s: waiting for machine to come up
	I0829 18:55:58.141547   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:58.142031   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:58.142052   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:58.142003   19012 retry.go:31] will retry after 1.352228994s: waiting for machine to come up
	I0829 18:55:59.496409   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:59.496870   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:59.496896   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:59.496821   19012 retry.go:31] will retry after 2.187164775s: waiting for machine to come up
	I0829 18:56:01.685976   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:01.686371   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:56:01.686393   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:56:01.686346   19012 retry.go:31] will retry after 2.735265922s: waiting for machine to come up
	I0829 18:56:04.422715   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:04.423157   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:56:04.423172   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:56:04.423133   19012 retry.go:31] will retry after 2.867752561s: waiting for machine to come up
	I0829 18:56:07.292218   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:07.292615   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:56:07.292641   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:56:07.292570   19012 retry.go:31] will retry after 4.389513147s: waiting for machine to come up
	I0829 18:56:11.683601   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.684092   18990 main.go:141] libmachine: (addons-344587) Found IP for machine: 192.168.39.172
	I0829 18:56:11.684118   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has current primary IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.684127   18990 main.go:141] libmachine: (addons-344587) Reserving static IP address...
	I0829 18:56:11.684501   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find host DHCP lease matching {name: "addons-344587", mac: "52:54:00:03:42:33", ip: "192.168.39.172"} in network mk-addons-344587
	I0829 18:56:11.822664   18990 main.go:141] libmachine: (addons-344587) DBG | Getting to WaitForSSH function...
	I0829 18:56:11.822759   18990 main.go:141] libmachine: (addons-344587) Reserved static IP address: 192.168.39.172
	I0829 18:56:11.822780   18990 main.go:141] libmachine: (addons-344587) Waiting for SSH to be available...
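
	The retry loop above is polling libvirt for a DHCP lease on the guest MAC. The same lookup can be done by hand (supported in the libvirt 6.0.0 logged below):

	    # The 52:54:00:03:42:33 entry should resolve to 192.168.39.172
	    virsh --connect qemu:///system net-dhcp-leases mk-addons-344587
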
	I0829 18:56:11.825035   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.825430   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:minikube Clientid:01:52:54:00:03:42:33}
	I0829 18:56:11.825460   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.825623   18990 main.go:141] libmachine: (addons-344587) DBG | Using SSH client type: external
	I0829 18:56:11.825652   18990 main.go:141] libmachine: (addons-344587) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa (-rw-------)
	I0829 18:56:11.825693   18990 main.go:141] libmachine: (addons-344587) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 18:56:11.825713   18990 main.go:141] libmachine: (addons-344587) DBG | About to run SSH command:
	I0829 18:56:11.825728   18990 main.go:141] libmachine: (addons-344587) DBG | exit 0
	I0829 18:56:11.958392   18990 main.go:141] libmachine: (addons-344587) DBG | SSH cmd err, output: <nil>: 
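
	WaitForSSH amounts to retrying "exit 0" over SSH with the options logged above. A shell equivalent, assuming the same key path and guest address:

	    KEY=/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa
	    # Loop until sshd in the guest accepts the key-based login
	    until ssh -i "$KEY" -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	          -o ConnectTimeout=10 docker@192.168.39.172 'exit 0'; do
	      sleep 1
	    done
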
	I0829 18:56:11.958658   18990 main.go:141] libmachine: (addons-344587) KVM machine creation complete!
	I0829 18:56:11.958964   18990 main.go:141] libmachine: (addons-344587) Calling .GetConfigRaw
	I0829 18:56:11.979533   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:11.979843   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:11.980024   18990 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 18:56:11.980042   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:11.981444   18990 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 18:56:11.981459   18990 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 18:56:11.981466   18990 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 18:56:11.981474   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:11.983980   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.984292   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:11.984313   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.984444   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:11.984613   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:11.984770   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:11.984916   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:11.985127   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:11.985342   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:11.985357   18990 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 18:56:12.089723   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:56:12.089742   18990 main.go:141] libmachine: Detecting the provisioner...
	I0829 18:56:12.089749   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.092754   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.093106   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.093131   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.093284   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:12.093486   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.093657   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.093787   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:12.093942   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:12.094126   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:12.094139   18990 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 18:56:12.199320   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 18:56:12.199392   18990 main.go:141] libmachine: found compatible host: buildroot
	I0829 18:56:12.199401   18990 main.go:141] libmachine: Provisioning with buildroot...
	I0829 18:56:12.199410   18990 main.go:141] libmachine: (addons-344587) Calling .GetMachineName
	I0829 18:56:12.199644   18990 buildroot.go:166] provisioning hostname "addons-344587"
	I0829 18:56:12.199675   18990 main.go:141] libmachine: (addons-344587) Calling .GetMachineName
	I0829 18:56:12.199823   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.202332   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.202658   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.202684   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.202849   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:12.203092   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.203227   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.203390   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:12.203529   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:12.203692   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:12.203705   18990 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-344587 && echo "addons-344587" | sudo tee /etc/hostname
	I0829 18:56:12.320497   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-344587
	
	I0829 18:56:12.320526   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.323075   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.323387   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.323411   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.323589   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:12.323786   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.323975   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.324113   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:12.324283   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:12.324480   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:12.324504   18990 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-344587' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-344587/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-344587' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:56:12.439927   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
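
	The hosts script above pins 127.0.1.1 to the new hostname; its effect can be confirmed from inside the guest:

	    # Expect a 127.0.1.1 line carrying the new hostname
	    grep addons-344587 /etc/hosts
	    hostname
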
	I0829 18:56:12.439966   18990 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 18:56:12.440002   18990 buildroot.go:174] setting up certificates
	I0829 18:56:12.440016   18990 provision.go:84] configureAuth start
	I0829 18:56:12.440030   18990 main.go:141] libmachine: (addons-344587) Calling .GetMachineName
	I0829 18:56:12.440343   18990 main.go:141] libmachine: (addons-344587) Calling .GetIP
	I0829 18:56:12.442796   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.443174   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.443192   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.443334   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.445622   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.446147   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.446173   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.446336   18990 provision.go:143] copyHostCerts
	I0829 18:56:12.446417   18990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 18:56:12.446555   18990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 18:56:12.446655   18990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 18:56:12.446738   18990 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.addons-344587 san=[127.0.0.1 192.168.39.172 addons-344587 localhost minikube]
	I0829 18:56:12.656811   18990 provision.go:177] copyRemoteCerts
	I0829 18:56:12.656860   18990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:56:12.656881   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.659602   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.659950   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.659986   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.660127   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:12.660284   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.660452   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:12.660569   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:12.740979   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 18:56:12.764765   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 18:56:12.789751   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
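
	The server certificate just copied should carry the SANs generated above (127.0.0.1, 192.168.39.172, addons-344587, localhost, minikube). Assuming the guest's openssl is new enough for the -ext flag (1.1.1+), this can be checked with:

	    # Print subject and SAN list of the provisioned server certificate
	    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
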
	I0829 18:56:12.814999   18990 provision.go:87] duration metric: took 374.97013ms to configureAuth
	I0829 18:56:12.815029   18990 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:56:12.815219   18990 config.go:182] Loaded profile config "addons-344587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:56:12.815307   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.817789   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.818126   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.818155   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.818312   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:12.818507   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.818700   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.818849   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:12.819046   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:12.819234   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:12.819254   18990 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:56:13.034009   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
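
	As a quick check, the drop-in written above and the restarted service can be inspected inside the guest:

	    # The insecure-registry flag should round-trip into the sysconfig drop-in
	    cat /etc/sysconfig/crio.minikube
	    systemctl is-active crio
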
	I0829 18:56:13.034032   18990 main.go:141] libmachine: Checking connection to Docker...
	I0829 18:56:13.034040   18990 main.go:141] libmachine: (addons-344587) Calling .GetURL
	I0829 18:56:13.035499   18990 main.go:141] libmachine: (addons-344587) DBG | Using libvirt version 6000000
	I0829 18:56:13.037684   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.038017   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.038048   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.038196   18990 main.go:141] libmachine: Docker is up and running!
	I0829 18:56:13.038210   18990 main.go:141] libmachine: Reticulating splines...
	I0829 18:56:13.038219   18990 client.go:171] duration metric: took 22.571881082s to LocalClient.Create
	I0829 18:56:13.038239   18990 start.go:167] duration metric: took 22.5719417s to libmachine.API.Create "addons-344587"
	I0829 18:56:13.038262   18990 start.go:293] postStartSetup for "addons-344587" (driver="kvm2")
	I0829 18:56:13.038277   18990 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:56:13.038298   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:13.038570   18990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:56:13.038589   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:13.040755   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.041066   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.041089   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.041223   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:13.041426   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:13.041595   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:13.041734   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:13.124537   18990 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:56:13.129327   18990 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 18:56:13.129348   18990 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 18:56:13.129400   18990 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 18:56:13.129423   18990 start.go:296] duration metric: took 91.15174ms for postStartSetup
	I0829 18:56:13.129451   18990 main.go:141] libmachine: (addons-344587) Calling .GetConfigRaw
	I0829 18:56:13.130128   18990 main.go:141] libmachine: (addons-344587) Calling .GetIP
	I0829 18:56:13.132903   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.133252   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.133280   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.133484   18990 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/config.json ...
	I0829 18:56:13.133661   18990 start.go:128] duration metric: took 22.68451279s to createHost
	I0829 18:56:13.133686   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:13.135794   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.136096   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.136138   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.136227   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:13.136392   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:13.136531   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:13.136674   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:13.136811   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:13.136983   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:13.136995   18990 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 18:56:13.239138   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724957773.212403643
	
	I0829 18:56:13.239157   18990 fix.go:216] guest clock: 1724957773.212403643
	I0829 18:56:13.239164   18990 fix.go:229] Guest: 2024-08-29 18:56:13.212403643 +0000 UTC Remote: 2024-08-29 18:56:13.133675132 +0000 UTC m=+22.790316868 (delta=78.728511ms)
	I0829 18:56:13.239198   18990 fix.go:200] guest clock delta is within tolerance: 78.728511ms
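
	The skew check compares host time against "date +%s.%N" run in the guest. A manual equivalent, reusing KEY from the SSH sketch above:

	    # Print host and guest clocks back to back; drift should stay sub-second
	    date +%s.%N
	    ssh -i "$KEY" -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        docker@192.168.39.172 'date +%s.%N'
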
	I0829 18:56:13.239202   18990 start.go:83] releasing machines lock for "addons-344587", held for 22.79014265s
	I0829 18:56:13.239220   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:13.239471   18990 main.go:141] libmachine: (addons-344587) Calling .GetIP
	I0829 18:56:13.241933   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.242288   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.242315   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.242500   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:13.243032   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:13.243240   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:13.243311   18990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:56:13.243361   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:13.243466   18990 ssh_runner.go:195] Run: cat /version.json
	I0829 18:56:13.243481   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:13.245923   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.246013   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.246307   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.246336   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.246367   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.246384   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.246467   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:13.246620   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:13.246682   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:13.246812   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:13.246884   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:13.246957   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:13.247020   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:13.247050   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:13.348157   18990 ssh_runner.go:195] Run: systemctl --version
	I0829 18:56:13.354123   18990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:56:13.512934   18990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 18:56:13.518830   18990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:56:13.518882   18990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:56:13.534127   18990 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 18:56:13.534157   18990 start.go:495] detecting cgroup driver to use...
	I0829 18:56:13.534210   18990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:56:13.549103   18990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:56:13.562524   18990 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:56:13.562603   18990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:56:13.575308   18990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:56:13.588019   18990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:56:13.695971   18990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:56:13.849315   18990 docker.go:233] disabling docker service ...
	I0829 18:56:13.849370   18990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:56:13.863202   18990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:56:13.876345   18990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:56:13.998451   18990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:56:14.110447   18990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:56:14.124269   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:56:14.142618   18990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:56:14.142671   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.152550   18990 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:56:14.152638   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.162565   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.172204   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.182051   18990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:56:14.191938   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.201619   18990 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.218380   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
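
The sed edits above are cumulative; by this point the drop-in should read roughly as follows (an illustrative reconstruction from the commands themselves, since the log never prints the file):

	sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]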
	I0829 18:56:14.228433   18990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:56:14.237357   18990 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 18:56:14.237406   18990 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 18:56:14.249575   18990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
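
The sysctl probe two steps back exited 255 because /proc/sys/net/bridge/ only exists once br_netfilter is loaded, which is exactly what the modprobe fixes. To reproduce the check by hand:

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # typically prints 1 once the module is loaded
	cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above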
	I0829 18:56:14.259454   18990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:56:14.369394   18990 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 18:56:14.456184   18990 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:56:14.456279   18990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:56:14.460789   18990 start.go:563] Will wait 60s for crictl version
	I0829 18:56:14.460854   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:56:14.464432   18990 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:56:14.504874   18990 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 18:56:14.504990   18990 ssh_runner.go:195] Run: crio --version
	I0829 18:56:14.532672   18990 ssh_runner.go:195] Run: crio --version
	I0829 18:56:14.561543   18990 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 18:56:14.562632   18990 main.go:141] libmachine: (addons-344587) Calling .GetIP
	I0829 18:56:14.564933   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:14.565284   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:14.565303   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:14.565524   18990 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 18:56:14.569376   18990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:56:14.581262   18990 kubeadm.go:883] updating cluster {Name:addons-344587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-344587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:56:14.581356   18990 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:56:14.581398   18990 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:56:14.613224   18990 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 18:56:14.613292   18990 ssh_runner.go:195] Run: which lz4
	I0829 18:56:14.617034   18990 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 18:56:14.621198   18990 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 18:56:14.621221   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 18:56:15.914421   18990 crio.go:462] duration metric: took 1.297408054s to copy over tarball
	I0829 18:56:15.914486   18990 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 18:56:18.044985   18990 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.130478632s)
	I0829 18:56:18.045014   18990 crio.go:469] duration metric: took 2.130566777s to extract the tarball
	I0829 18:56:18.045024   18990 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 18:56:18.081642   18990 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:56:18.123715   18990 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:56:18.123734   18990 cache_images.go:84] Images are preloaded, skipping loading
	I0829 18:56:18.123741   18990 kubeadm.go:934] updating node { 192.168.39.172 8443 v1.31.0 crio true true} ...
	I0829 18:56:18.123833   18990 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-344587 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-344587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:56:18.123903   18990 ssh_runner.go:195] Run: crio config
	I0829 18:56:18.173364   18990 cni.go:84] Creating CNI manager for ""
	I0829 18:56:18.173382   18990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:56:18.173396   18990 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:56:18.173417   18990 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-344587 NodeName:addons-344587 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:56:18.173545   18990 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-344587"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 18:56:18.173599   18990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:56:18.183496   18990 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:56:18.183559   18990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 18:56:18.192837   18990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0829 18:56:18.209828   18990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:56:18.226818   18990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
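
The rendered kubeadm config is now staged at /var/tmp/minikube/kubeadm.yaml.new (2157 bytes). If you ever need to sanity-check such a file by hand before an init, recent kubeadm can do it offline (a sketch; `kubeadm config validate` needs kubeadm v1.27+):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or walk through everything init would do without touching the host:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run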
	I0829 18:56:18.243177   18990 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I0829 18:56:18.246821   18990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
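
The grep-then-rewrite idiom above keeps /etc/hosts idempotent: any stale mapping for the name is filtered out, the fresh line is appended, and the result is staged under /tmp and copied back with sudo (a plain `sudo cmd > /etc/hosts` would not work, since the shell opens the redirect before sudo elevates). The end state should be:

	grep minikube.internal /etc/hosts
	# 192.168.39.1	host.minikube.internal
	# 192.168.39.172	control-plane.minikube.internal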
	I0829 18:56:18.258454   18990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:56:18.380809   18990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:56:18.399109   18990 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587 for IP: 192.168.39.172
	I0829 18:56:18.399130   18990 certs.go:194] generating shared ca certs ...
	I0829 18:56:18.399144   18990 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:18.399287   18990 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 18:56:18.507759   18990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt ...
	I0829 18:56:18.507786   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt: {Name:mkf2998f14816a9d649599681f5ace2bd3b15bb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:18.507943   18990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key ...
	I0829 18:56:18.507953   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key: {Name:mk0f1ef094971ea9c3f026c8290bde66a6036be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:18.508026   18990 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 18:56:18.881398   18990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt ...
	I0829 18:56:18.881427   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt: {Name:mka4d0216f76512ed90b83996ade7ed626417b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:18.881614   18990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key ...
	I0829 18:56:18.881630   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key: {Name:mka035b87075afcde930c062c2cb1875970dabb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:18.881727   18990 certs.go:256] generating profile certs ...
	I0829 18:56:18.881782   18990 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.key
	I0829 18:56:18.881793   18990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt with IP's: []
	I0829 18:56:19.191129   18990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt ...
	I0829 18:56:19.191157   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: {Name:mk595166ed3f22afaf54fdfb0b502bd573fc8143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.191339   18990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.key ...
	I0829 18:56:19.191354   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.key: {Name:mk24baca044bca79b73024c8a04b788113a0b022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.191449   18990 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key.2d54a6f6
	I0829 18:56:19.191470   18990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt.2d54a6f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172]
	I0829 18:56:19.236337   18990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt.2d54a6f6 ...
	I0829 18:56:19.236366   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt.2d54a6f6: {Name:mk40299d1f1b871b96fc8c21ef18cc9e856fbcfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.236555   18990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key.2d54a6f6 ...
	I0829 18:56:19.236572   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key.2d54a6f6: {Name:mk04263d045cce1f76651eeb698397ced0bec497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.236669   18990 certs.go:381] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt.2d54a6f6 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt
	I0829 18:56:19.236739   18990 certs.go:385] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key.2d54a6f6 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key
	I0829 18:56:19.236796   18990 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.key
	I0829 18:56:19.236809   18990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.crt with IP's: []
	I0829 18:56:19.327890   18990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.crt ...
	I0829 18:56:19.327915   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.crt: {Name:mked626427b26604c6ca53369dde755937686f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.328088   18990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.key ...
	I0829 18:56:19.328101   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.key: {Name:mk953deec79398c279f957cbebec5a918222e73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.328285   18990 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 18:56:19.328319   18990 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 18:56:19.328339   18990 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:56:19.328360   18990 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 18:56:19.328883   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:56:19.352639   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 18:56:19.375448   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:56:19.397949   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:56:19.420280   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 18:56:19.445420   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 18:56:19.469941   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:56:19.493954   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 18:56:19.517481   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:56:19.540749   18990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
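
With the CA, profile, and proxy-client material copied over, the apiserver cert's coverage can be inspected directly on the node (a sketch; -ext needs openssl 1.1.1+):

	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -subject -ext subjectAltName
	# the SANs should include 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.172,
	# matching the IPs the profile cert was generated with above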
	I0829 18:56:19.557225   18990 ssh_runner.go:195] Run: openssl version
	I0829 18:56:19.563530   18990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:56:19.574899   18990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:56:19.579661   18990 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:56:19.579718   18990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:56:19.585781   18990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
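
The b5213941.0 name is not arbitrary: OpenSSL locates a trusted CA by hashing its subject and looking up <hash>.<n> under /etc/ssl/certs, so the symlink mirrors the hash computed one step earlier:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, which is why the `test -L || ln -fs` target is /etc/ssl/certs/b5213941.0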
	I0829 18:56:19.596492   18990 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:56:19.600908   18990 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:56:19.600959   18990 kubeadm.go:392] StartCluster: {Name:addons-344587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-344587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:56:19.601045   18990 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 18:56:19.601093   18990 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 18:56:19.637569   18990 cri.go:89] found id: ""
	I0829 18:56:19.637643   18990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:56:19.647689   18990 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:56:19.657011   18990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:56:19.666328   18990 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:56:19.666343   18990 kubeadm.go:157] found existing configuration files:
	
	I0829 18:56:19.666376   18990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:56:19.675716   18990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:56:19.675775   18990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:56:19.685386   18990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:56:19.694416   18990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:56:19.694471   18990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:56:19.703922   18990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:56:19.712826   18990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:56:19.712873   18990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:56:19.722059   18990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:56:19.731001   18990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:56:19.731050   18990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 18:56:19.740296   18990 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 18:56:19.794947   18990 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:56:19.795099   18990 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:56:19.898282   18990 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:56:19.898409   18990 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:56:19.898526   18990 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:56:19.907493   18990 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:56:19.997177   18990 out.go:235]   - Generating certificates and keys ...
	I0829 18:56:19.997273   18990 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:56:19.997359   18990 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:56:19.997433   18990 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:56:20.334115   18990 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:56:20.488051   18990 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:56:20.567263   18990 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:56:20.715089   18990 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:56:20.715281   18990 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-344587 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0829 18:56:21.029598   18990 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:56:21.029765   18990 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-344587 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0829 18:56:21.106114   18990 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:56:21.317964   18990 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:56:21.407628   18990 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:56:21.407696   18990 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:56:21.629562   18990 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:56:21.754916   18990 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:56:21.931143   18990 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:56:22.124355   18990 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:56:22.279253   18990 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:56:22.279642   18990 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:56:22.282088   18990 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:56:22.284191   18990 out.go:235]   - Booting up control plane ...
	I0829 18:56:22.284310   18990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:56:22.284403   18990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:56:22.284482   18990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:56:22.304603   18990 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:56:22.312804   18990 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:56:22.312862   18990 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:56:22.435203   18990 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:56:22.435353   18990 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:56:22.936484   18990 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.995362ms
	I0829 18:56:22.936601   18990 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:56:27.436410   18990 kubeadm.go:310] [api-check] The API server is healthy after 4.501398688s
	I0829 18:56:27.454666   18990 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:56:27.477429   18990 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:56:27.508526   18990 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:56:27.508785   18990 kubeadm.go:310] [mark-control-plane] Marking the node addons-344587 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:56:27.520541   18990 kubeadm.go:310] [bootstrap-token] Using token: q9x0a1.3m9323w9pql012fx
	I0829 18:56:27.521864   18990 out.go:235]   - Configuring RBAC rules ...
	I0829 18:56:27.521972   18990 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:56:27.526125   18990 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:56:27.535702   18990 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:56:27.539676   18990 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:56:27.542568   18990 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:56:27.548387   18990 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:56:27.844139   18990 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:56:28.295458   18990 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:56:28.840023   18990 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:56:28.840956   18990 kubeadm.go:310] 
	I0829 18:56:28.841053   18990 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:56:28.841063   18990 kubeadm.go:310] 
	I0829 18:56:28.841160   18990 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:56:28.841186   18990 kubeadm.go:310] 
	I0829 18:56:28.841234   18990 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:56:28.841322   18990 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:56:28.841395   18990 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:56:28.841404   18990 kubeadm.go:310] 
	I0829 18:56:28.841484   18990 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:56:28.841493   18990 kubeadm.go:310] 
	I0829 18:56:28.841553   18990 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:56:28.841567   18990 kubeadm.go:310] 
	I0829 18:56:28.841651   18990 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:56:28.841761   18990 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:56:28.841862   18990 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:56:28.841873   18990 kubeadm.go:310] 
	I0829 18:56:28.841975   18990 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:56:28.842087   18990 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:56:28.842100   18990 kubeadm.go:310] 
	I0829 18:56:28.842176   18990 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q9x0a1.3m9323w9pql012fx \
	I0829 18:56:28.842267   18990 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 18:56:28.842293   18990 kubeadm.go:310] 	--control-plane 
	I0829 18:56:28.842300   18990 kubeadm.go:310] 
	I0829 18:56:28.842391   18990 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:56:28.842407   18990 kubeadm.go:310] 
	I0829 18:56:28.842491   18990 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q9x0a1.3m9323w9pql012fx \
	I0829 18:56:28.842651   18990 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
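
The bootstrap token in the join commands above is short-lived (ttl: 24h0m0s in the InitConfiguration earlier). If it has expired by the time another node joins, a fresh join line can be minted on the control plane:

	sudo kubeadm token create --print-join-command
	sudo kubeadm token list   # shows remaining TTL of existing tokens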
	I0829 18:56:28.843797   18990 kubeadm.go:310] W0829 18:56:19.773325     811 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:56:28.844133   18990 kubeadm.go:310] W0829 18:56:19.774616     811 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:56:28.844272   18990 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 18:56:28.844299   18990 cni.go:84] Creating CNI manager for ""
	I0829 18:56:28.844312   18990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:56:28.846071   18990 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 18:56:28.847426   18990 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 18:56:28.857688   18990 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
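
The 496-byte conflist dropped into /etc/cni/net.d is the bridge configuration the CNI manager recommended above; CRI-O picks CNI configs up from that directory. A quick way to confirm from the guest (a sketch):

	ls /etc/cni/net.d/                   # 1-k8s.conflist
	sudo crictl info | grep -i network   # NetworkReady should report true once the config loads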
	I0829 18:56:28.878902   18990 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:56:28.878958   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:28.878992   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-344587 minikube.k8s.io/updated_at=2024_08_29T18_56_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=addons-344587 minikube.k8s.io/primary=true
	I0829 18:56:28.907190   18990 ops.go:34] apiserver oom_adj: -16
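
The -16 read back from /proc/<pid>/oom_adj confirms the apiserver runs deprioritized for the kernel's OOM killer; oom_adj is the legacy knob (range -17..15) mirrored from oom_score_adj. To inspect both by hand:

	cat /proc/$(pgrep kube-apiserver)/oom_adj        # -16, as logged above
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # the modern equivalent the kernel actually uses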
	I0829 18:56:29.042348   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:29.543055   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:30.042999   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:30.542653   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:31.042960   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:31.542779   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:32.042560   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:32.543114   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:33.043338   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:33.542400   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:33.657610   18990 kubeadm.go:1113] duration metric: took 4.778691649s to wait for elevateKubeSystemPrivileges
	I0829 18:56:33.657651   18990 kubeadm.go:394] duration metric: took 14.056694589s to StartCluster
	I0829 18:56:33.657673   18990 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:33.657802   18990 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 18:56:33.658294   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:33.658498   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:56:33.658563   18990 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:56:33.658614   18990 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0829 18:56:33.658712   18990 addons.go:69] Setting yakd=true in profile "addons-344587"
	I0829 18:56:33.658734   18990 config.go:182] Loaded profile config "addons-344587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:56:33.658743   18990 addons.go:234] Setting addon yakd=true in "addons-344587"
	I0829 18:56:33.658752   18990 addons.go:69] Setting helm-tiller=true in profile "addons-344587"
	I0829 18:56:33.658761   18990 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-344587"
	I0829 18:56:33.658774   18990 addons.go:69] Setting registry=true in profile "addons-344587"
	I0829 18:56:33.658781   18990 addons.go:69] Setting gcp-auth=true in profile "addons-344587"
	I0829 18:56:33.658782   18990 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-344587"
	I0829 18:56:33.658779   18990 addons.go:69] Setting cloud-spanner=true in profile "addons-344587"
	I0829 18:56:33.658790   18990 addons.go:69] Setting volumesnapshots=true in profile "addons-344587"
	I0829 18:56:33.658799   18990 mustload.go:65] Loading cluster: addons-344587
	I0829 18:56:33.658800   18990 addons.go:234] Setting addon registry=true in "addons-344587"
	I0829 18:56:33.658807   18990 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-344587"
	I0829 18:56:33.658811   18990 addons.go:234] Setting addon cloud-spanner=true in "addons-344587"
	I0829 18:56:33.658813   18990 addons.go:234] Setting addon volumesnapshots=true in "addons-344587"
	I0829 18:56:33.658831   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658833   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658834   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658837   18990 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-344587"
	I0829 18:56:33.658846   18990 addons.go:69] Setting ingress=true in profile "addons-344587"
	I0829 18:56:33.658863   18990 addons.go:234] Setting addon ingress=true in "addons-344587"
	I0829 18:56:33.658865   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658889   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658925   18990 config.go:182] Loaded profile config "addons-344587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:56:33.658836   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658783   18990 addons.go:69] Setting volcano=true in profile "addons-344587"
	I0829 18:56:33.659252   18990 addons.go:234] Setting addon volcano=true in "addons-344587"
	I0829 18:56:33.659252   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.659267   18990 addons.go:69] Setting storage-provisioner=true in profile "addons-344587"
	I0829 18:56:33.659273   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.659282   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659289   18990 addons.go:234] Setting addon storage-provisioner=true in "addons-344587"
	I0829 18:56:33.659291   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.659309   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.659322   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.659337   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659368   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.659400   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659432   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.659479   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659549   18990 addons.go:234] Setting addon helm-tiller=true in "addons-344587"
	I0829 18:56:33.658761   18990 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-344587"
	I0829 18:56:33.659610   18990 addons.go:69] Setting inspektor-gadget=true in profile "addons-344587"
	I0829 18:56:33.659640   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.659688   18990 addons.go:234] Setting addon inspektor-gadget=true in "addons-344587"
	I0829 18:56:33.659310   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659869   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.660003   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.660033   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659615   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.660105   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.660245   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.660276   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659252   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.660606   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659251   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.661085   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.658772   18990 addons.go:69] Setting default-storageclass=true in profile "addons-344587"
	I0829 18:56:33.670709   18990 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-344587"
	I0829 18:56:33.658775   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.659589   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.670898   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659619   18990 addons.go:69] Setting ingress-dns=true in profile "addons-344587"
	I0829 18:56:33.671005   18990 addons.go:234] Setting addon ingress-dns=true in "addons-344587"
	I0829 18:56:33.659626   18990 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-344587"
	I0829 18:56:33.671326   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.671369   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.671373   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.671404   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.671440   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659631   18990 addons.go:69] Setting metrics-server=true in profile "addons-344587"
	I0829 18:56:33.671513   18990 addons.go:234] Setting addon metrics-server=true in "addons-344587"
	I0829 18:56:33.671545   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.666650   18990 out.go:177] * Verifying Kubernetes components...
	I0829 18:56:33.671400   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.671874   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.671911   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.671053   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.673380   18990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:56:33.680875   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I0829 18:56:33.681397   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.682001   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.682020   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.682079   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43679
	I0829 18:56:33.682558   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.682826   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33189
	I0829 18:56:33.683408   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.683427   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.683497   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.683572   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.684019   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.684047   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.684281   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.684297   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.684418   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.684576   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42785
	I0829 18:56:33.684658   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.685239   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.686589   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.686991   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.687043   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.687652   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.687695   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.693010   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41217
	I0829 18:56:33.693441   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.693863   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.693883   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.694222   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.695000   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.695017   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.695025   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.695047   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.695732   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.696286   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.696303   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.696680   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.697196   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.697227   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.708509   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35881
	I0829 18:56:33.709301   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.710007   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.710025   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.710503   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.711094   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.711134   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.712697   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38887
	I0829 18:56:33.713230   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.713833   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.713849   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.714249   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.714858   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.714894   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.717065   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
	I0829 18:56:33.718427   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40143
	I0829 18:56:33.719007   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.719015   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.719564   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.719572   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.719583   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.719587   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.719642   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I0829 18:56:33.719977   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.720035   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.720529   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.720541   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.720576   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.720813   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41587
	I0829 18:56:33.721306   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.721600   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.721641   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.721789   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.721802   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.722188   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.722210   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.722517   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.722694   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.722698   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.722766   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40775
	I0829 18:56:33.722933   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.723536   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.723992   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.724014   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.724373   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.724931   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.724969   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.727773   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42855
	I0829 18:56:33.728271   18990 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-344587"
	I0829 18:56:33.728315   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.728668   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.728713   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.729106   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.730100   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.730122   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.730424   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.731053   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.731091   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.733084   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34629
	I0829 18:56:33.733526   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.733981   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.734008   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.734333   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.734516   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.737054   18990 addons.go:234] Setting addon default-storageclass=true in "addons-344587"
	I0829 18:56:33.737093   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.737442   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.737488   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.738915   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0829 18:56:33.739299   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.739833   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.739858   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.740211   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.740415   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.740698   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44413
	I0829 18:56:33.742219   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.742936   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.743473   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.743489   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.743850   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.744037   18990 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0829 18:56:33.744401   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.744444   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.746700   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39535
	I0829 18:56:33.747098   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.747483   18990 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:56:33.747541   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.747555   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.747849   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.748004   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.749857   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.750156   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:33.750162   18990 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:56:33.750176   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:33.750401   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:33.750415   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:33.750424   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:33.750431   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:33.751640   18990 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:56:33.751661   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0829 18:56:33.751678   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.753152   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41841
	I0829 18:56:33.753646   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.754124   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.754140   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.754697   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:33.754792   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.754874   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:33.754882   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	W0829 18:56:33.754961   18990 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0829 18:56:33.755222   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.756506   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.757125   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.757153   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.757371   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.757578   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.757800   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.758075   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
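
Each "new ssh client" entry above records minikube opening a key-based SSH connection to the node (192.168.39.172:22, user "docker", the per-machine id_rsa) before it starts copying addon manifests over. The following is a generic sketch of that kind of client setup using golang.org/x/crypto/ssh; it illustrates the pattern only and is not minikube's actual sshutil implementation (the probe command at the end is likewise just an example):

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // The per-machine private key, at the path shown in the log above.
        keyPath := "/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa"
        pem, err := os.ReadFile(keyPath)
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(pem)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, so skip host key checks
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", "192.168.39.172:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // One session per command; here a hypothetical health probe.
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("sudo systemctl is-active crio")
        fmt.Printf("%s (err=%v)\n", out, err)
    }
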
	I0829 18:56:33.758388   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0829 18:56:33.758562   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.758742   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.759174   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.759196   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.759539   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.759780   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.760361   18990 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0829 18:56:33.761354   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.761672   18990 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0829 18:56:33.761690   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0829 18:56:33.761708   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.762934   18990 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 18:56:33.764151   18990 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 18:56:33.764172   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 18:56:33.764189   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.765442   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.766030   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.766066   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.766227   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.766373   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.766477   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.766582   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.769791   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.770164   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0829 18:56:33.770308   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.770326   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.770497   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.770711   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.770718   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.770896   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.770957   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44035
	I0829 18:56:33.771212   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.771224   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.771288   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.771479   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.772071   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.772254   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.772754   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
	I0829 18:56:33.773389   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.773405   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.773886   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.774057   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.774157   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.774715   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.774741   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.775380   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.775427   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.775749   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45059
	I0829 18:56:33.775868   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.776433   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.776878   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 18:56:33.777144   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.777191   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.777462   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.777480   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.777870   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.778380   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.779809   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 18:56:33.780037   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.780244   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35549
	I0829 18:56:33.780742   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.781222   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.781248   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.781369   18990 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 18:56:33.781563   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.781761   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.782643   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 18:56:33.783896   18990 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 18:56:33.783960   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 18:56:33.784513   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39187
	I0829 18:56:33.784537   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44109
	I0829 18:56:33.784631   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.785647   18990 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:56:33.785668   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 18:56:33.785684   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.786509   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35783
	I0829 18:56:33.786516   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.786797   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 18:56:33.786858   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.787121   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.787314   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.787336   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.787455   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.787473   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.787695   18990 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 18:56:33.787783   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.787912   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.787932   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.788077   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.788137   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.788311   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.788468   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.788893   18990 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:56:33.788909   18990 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 18:56:33.788928   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.788930   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.788964   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.789455   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 18:56:33.790036   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.790306   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.791044   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.791435   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.791870   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.791965   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.792018   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.792283   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.792425   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 18:56:33.792449   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 18:56:33.792452   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.794946   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.794990   18990 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 18:56:33.795455   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:56:33.795469   18990 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 18:56:33.795488   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.796133   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 18:56:33.796965   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0829 18:56:33.797111   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.797138   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.797278   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:56:33.797280   18990 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 18:56:33.797524   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.797291   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 18:56:33.797627   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.798327   18990 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:56:33.798342   18990 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 18:56:33.798363   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.798941   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.798951   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.799251   18990 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:56:33.799263   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 18:56:33.799281   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.799598   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.800343   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.800449   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.800465   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.801716   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.801738   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36645
	I0829 18:56:33.801793   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I0829 18:56:33.802890   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.803204   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.803825   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.803851   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.805182   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.805201   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.805215   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39841
	I0829 18:56:33.805244   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.805256   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.805307   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.805329   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.805337   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.805789   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.805807   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.805811   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.805825   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.805840   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.805855   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.805882   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.806005   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.806215   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.806223   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.806256   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.806267   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.806359   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.806379   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.806397   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.806440   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.806668   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.806692   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.806708   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.806860   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.806874   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.806956   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.807013   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.807169   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.807174   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.807190   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.807219   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.807331   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.807354   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.807446   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.807495   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.807556   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.807727   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.808112   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.808136   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.809511   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.810029   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.811278   18990 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 18:56:33.812241   18990 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:56:33.813129   18990 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:56:33.813157   18990 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 18:56:33.813176   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.813977   18990 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:56:33.814001   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:56:33.814020   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.816395   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.816894   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.816918   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.817062   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.817195   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I0829 18:56:33.817338   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.817486   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.817534   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.817610   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.817837   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.818145   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.818173   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.818354   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.818387   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.818455   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.818655   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.818809   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.818859   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.818932   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.819357   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.820909   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	W0829 18:56:33.822691   18990 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0829 18:56:33.822718   18990 retry.go:31] will retry after 259.989848ms: ssh: handshake failed: EOF
	I0829 18:56:33.823215   18990 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0829 18:56:33.824471   18990 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:56:33.824488   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0829 18:56:33.824506   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.827030   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37461
	I0829 18:56:33.827117   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0829 18:56:33.827401   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.827435   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.827587   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.827788   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.827812   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.827906   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.827920   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.828022   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.828075   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.828084   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.828178   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.828317   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.828370   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.828385   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.828540   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.828549   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.828724   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.830078   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.830313   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.830515   18990 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:56:33.830530   18990 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:56:33.830569   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.831752   18990 out.go:177]   - Using image docker.io/busybox:stable
	I0829 18:56:33.832911   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.833222   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.833244   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.833366   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.833520   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.833767   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.833891   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.834507   18990 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	W0829 18:56:33.835031   18990 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36686->192.168.39.172:22: read: connection reset by peer
	I0829 18:56:33.835058   18990 retry.go:31] will retry after 162.890781ms: ssh: handshake failed: read tcp 192.168.39.1:36686->192.168.39.172:22: read: connection reset by peer
	I0829 18:56:33.835835   18990 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:56:33.835851   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 18:56:33.835865   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.838727   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.839112   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.839133   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.839296   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.839446   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.839562   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.839657   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	W0829 18:56:33.848450   18990 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36700->192.168.39.172:22: read: connection reset by peer
	I0829 18:56:33.848470   18990 retry.go:31] will retry after 306.282122ms: ssh: handshake failed: read tcp 192.168.39.1:36700->192.168.39.172:22: read: connection reset by peer
	W0829 18:56:33.999144   18990 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36716->192.168.39.172:22: read: connection reset by peer
	I0829 18:56:33.999169   18990 retry.go:31] will retry after 424.61405ms: ssh: handshake failed: read tcp 192.168.39.1:36716->192.168.39.172:22: read: connection reset by peer
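
The "dial failure (will retry)" / "will retry after Nms" pairs above show transient handshake errors (sshd on the freshly booted VM is still coming up, so it resets or drops connections) being handed to a retry helper that sleeps a randomized, growing delay before redialing. A self-contained sketch of that jittered-backoff pattern follows; the helper names are hypothetical and this is not minikube's actual retry.go API:

    package main

    import (
        "fmt"
        "math/rand"
        "net"
        "time"
    )

    // dialSSH stands in for the real SSH handshake: a plain TCP dial to
    // the node's SSH port keeps the sketch dependency-free.
    func dialSSH(addr string) error {
        conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
        if err != nil {
            return err
        }
        return conn.Close()
    }

    // withRetry retries fn, sleeping a randomized delay that grows with
    // each attempt -- the same shape as the "will retry after
    // 259.989848ms ... 424.61405ms" lines above.
    func withRetry(attempts int, fn func() error) error {
        var err error
        for i := 1; i <= attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := time.Duration(float64(i) * (100 + 200*rand.Float64()) * float64(time.Millisecond))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

    func main() {
        err := withRetry(4, func() error { return dialSSH("192.168.39.172:22") })
        if err != nil {
            fmt.Println(err)
        }
    }
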
	I0829 18:56:34.150588   18990 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:56:34.150609   18990 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 18:56:34.160016   18990 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:56:34.160035   18990 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 18:56:34.209643   18990 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0829 18:56:34.209668   18990 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0829 18:56:34.212743   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:56:34.212768   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 18:56:34.219438   18990 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:56:34.219459   18990 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 18:56:34.225374   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:56:34.251243   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:56:34.341500   18990 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:56:34.341523   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 18:56:34.345542   18990 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:56:34.345561   18990 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 18:56:34.360911   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:56:34.367165   18990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:56:34.367441   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
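
The bash pipeline above rewrites the CoreDNS ConfigMap in place: the first sed expression inserts a hosts block before the "forward . /etc/resolv.conf" line so that host.minikube.internal resolves to the host-side gateway (192.168.39.1), the second inserts a log directive before errors to enable query logging, and the edited Corefile is fed back through kubectl replace. Reconstructed from those sed expressions (not from captured cluster output, and with unrelated directives elided), the relevant Corefile section ends up roughly as:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
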
	I0829 18:56:34.408618   18990 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:56:34.408647   18990 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0829 18:56:34.412998   18990 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:56:34.413014   18990 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 18:56:34.414485   18990 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:56:34.414505   18990 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 18:56:34.416656   18990 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 18:56:34.416674   18990 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 18:56:34.421178   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 18:56:34.441682   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:56:34.441714   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 18:56:34.537974   18990 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:56:34.537995   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 18:56:34.574614   18990 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:56:34.574648   18990 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 18:56:34.590779   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:56:34.613418   18990 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:56:34.613450   18990 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 18:56:34.621890   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:56:34.650966   18990 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:56:34.650990   18990 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 18:56:34.656296   18990 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:56:34.656330   18990 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 18:56:34.662014   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:56:34.662031   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 18:56:34.755855   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:56:34.777852   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:56:34.841238   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:56:34.841264   18990 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 18:56:34.860870   18990 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:56:34.860892   18990 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 18:56:34.865493   18990 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:56:34.865518   18990 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 18:56:34.891256   18990 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:56:34.891275   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 18:56:34.918114   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:56:34.918134   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 18:56:35.013931   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:56:35.035330   18990 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:56:35.035353   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 18:56:35.037518   18990 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:56:35.037536   18990 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 18:56:35.083605   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:56:35.084671   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:56:35.084696   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 18:56:35.109523   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:56:35.214128   18990 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:56:35.214165   18990 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 18:56:35.300619   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:56:35.300639   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 18:56:35.319430   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:56:35.382473   18990 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:56:35.382493   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 18:56:35.500157   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:56:35.500185   18990 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 18:56:35.637217   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:56:35.652837   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.401553749s)
	I0829 18:56:35.652892   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:35.652903   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:35.652993   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.427586081s)
	I0829 18:56:35.653037   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:35.653049   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:35.653235   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:35.653247   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:35.653256   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:35.653263   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:35.653306   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:35.653327   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:35.653336   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:35.653344   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:35.653512   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:35.653530   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:35.653545   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:35.653571   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:35.653594   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:35.653607   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:35.707655   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:35.707678   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:35.708069   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:35.708091   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:35.708109   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:35.793211   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:56:35.793237   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 18:56:35.962496   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:56:35.962518   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 18:56:36.268015   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:56:36.268034   18990 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 18:56:36.448977   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
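
Every addon in this run follows the same two-step shape visible above: addons.go stages each manifest under /etc/kubernetes/addons on the node (the `scp memory -->` entries stream content from an in-memory asset rather than a file on disk), and ssh_runner.go then applies the staged files per addon in one batched invocation of the kubectl binary bundled for v1.31.0, pinned to the node-local kubeconfig. A minimal local sketch of the batch-apply step, assuming kubectl is on PATH and using placeholder manifest paths:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyBatch mirrors the Run lines above: one `kubectl apply` per addon,
    // with every staged manifest of that addon passed as its own -f flag.
    func applyBatch(kubeconfig string, manifests []string) error {
        args := []string{"--kubeconfig", kubeconfig, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
            return fmt.Errorf("kubectl apply: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        // Placeholder batch; the real run applies the staged registry manifests.
        err := applyBatch("/var/lib/minikube/kubeconfig", []string{
            "/etc/kubernetes/addons/registry-rc.yaml",
            "/etc/kubernetes/addons/registry-svc.yaml",
            "/etc/kubernetes/addons/registry-proxy.yaml",
        })
        if err != nil {
            fmt.Println(err)
        }
    }

Batching all of an addon's manifests into a single apply preserves ordering within the addon while letting independent addons proceed concurrently, which is why the Run/Completed pairs in this log interleave.
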
	I0829 18:56:40.901860   18990 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 18:56:40.901896   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:40.905410   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:40.905896   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:40.905938   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:40.906115   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:40.906299   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:40.906451   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:40.906626   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:41.427728   18990 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 18:56:41.620090   18990 addons.go:234] Setting addon gcp-auth=true in "addons-344587"
	I0829 18:56:41.620153   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:41.620485   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:41.620517   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:41.636187   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0829 18:56:41.636532   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:41.637102   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:41.637131   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:41.637428   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:41.638003   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:41.638024   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:41.669951   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44101
	I0829 18:56:41.670306   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:41.670883   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:41.670910   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:41.671238   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:41.671489   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:41.673184   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:41.673393   18990 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 18:56:41.673419   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:41.676410   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:41.676882   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:41.676905   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:41.677058   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:41.677211   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:41.677367   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:41.677483   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:42.669306   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.308358481s)
	I0829 18:56:42.669349   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669347   18990 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.301873517s)
	I0829 18:56:42.669360   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669375   18990 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
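
The 8.3s pipeline completed at 18:56:42.669 above is the edit behind this line: it fetches the coredns ConfigMap, uses sed to splice a hosts block in front of the `forward . /etc/resolv.conf` directive and a `log` directive in front of `errors`, then replaces the ConfigMap, so host.minikube.internal resolves to the host-side gateway (192.168.39.1 here) from inside the cluster. Reconstructed from the sed expressions in the log, the patched Corefile fragment should read roughly:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
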
	I0829 18:56:42.669424   18990 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.302234847s)
	I0829 18:56:42.669452   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.248249274s)
	I0829 18:56:42.669475   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669484   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669555   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.078738309s)
	I0829 18:56:42.669594   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669607   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669707   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.047793159s)
	I0829 18:56:42.669725   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669732   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669822   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.913945077s)
	I0829 18:56:42.669839   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669846   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669914   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.892029945s)
	I0829 18:56:42.669930   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669939   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669947   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.669962   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.669971   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669978   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670011   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.656061298s)
	I0829 18:56:42.670027   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670034   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670035   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.670044   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.670052   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670058   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669916   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.670134   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.586501236s)
	I0829 18:56:42.670151   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670158   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670232   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.560685281s)
	I0829 18:56:42.670257   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670266   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670396   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.35093677s)
	W0829 18:56:42.670434   18990 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:56:42.670454   18990 retry.go:31] will retry after 369.342261ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
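
This retry is the expected CRD-registration race rather than a real failure: the batch creates the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass that depends on them in the same apply, and on the first pass the API server has not yet registered the new kind, so kubectl exits 1 with "no matches for kind" even though the stdout above shows everything else was created. retry.go reschedules the batch after 369ms, and the second pass at 18:56:43.040 below runs it with `kubectl apply --force`, completing in about 2.5s (18:56:45.581). A minimal sketch of that shape, with a fixed backoff standing in for minikube's actual retry policy:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs `kubectl apply` after a transient "no matches
    // for kind" failure; later attempts add --force, as the log's second
    // pass does. The fixed backoff is an assumption, not minikube's policy.
    func applyWithRetry(manifests []string, attempts int, backoff time.Duration) error {
        var lastErr error
        for i := 0; i < attempts; i++ {
            args := []string{"apply"}
            if i > 0 {
                args = append(args, "--force")
            }
            for _, m := range manifests {
                args = append(args, "-f", m)
            }
            out, err := exec.Command("kubectl", args...).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("attempt %d: %v\n%s", i+1, err, out)
            time.Sleep(backoff)
        }
        return lastErr
    }

    func main() {
        // Hypothetical single manifest; the real batch is the six files shown above.
        err := applyWithRetry([]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
            3, 400*time.Millisecond)
        if err != nil {
            fmt.Println(err)
        }
    }
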
	I0829 18:56:42.670554   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.670555   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.033295535s)
	I0829 18:56:42.670551   18990 node_ready.go:35] waiting up to 6m0s for node "addons-344587" to be "Ready" ...
	I0829 18:56:42.670566   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.670575   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670576   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670583   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670595   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670666   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.670668   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.670689   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.670690   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.670697   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.670698   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.670706   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670706   18990 addons.go:475] Verifying addon ingress=true in "addons-344587"
	I0829 18:56:42.670713   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.671349   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.671369   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.671394   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.671401   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.671409   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.671418   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.671484   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.671504   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.671511   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.671518   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.671524   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.671559   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.671575   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.671582   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.671589   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.671595   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.671628   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.671644   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.671692   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.673185   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.673214   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.673219   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.673225   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.673232   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.674001   18990 out.go:177] * Verifying ingress addon...
	I0829 18:56:42.674369   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.674390   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.674394   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.674400   18990 addons.go:475] Verifying addon registry=true in "addons-344587"
	I0829 18:56:42.674749   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.674780   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.674788   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675112   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.675155   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675162   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675170   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.675177   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.675236   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.675254   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675260   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675267   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.675273   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.675309   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.675327   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675334   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675407   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.675498   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675505   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675737   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675746   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675813   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675820   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.676275   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.676288   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.676537   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.676546   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.676554   18990 addons.go:475] Verifying addon metrics-server=true in "addons-344587"
	I0829 18:56:42.677460   18990 out.go:177] * Verifying registry addon...
	I0829 18:56:42.678505   18990 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0829 18:56:42.678573   18990 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-344587 service yakd-dashboard -n yakd-dashboard
	
	I0829 18:56:42.679342   18990 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 18:56:42.721945   18990 node_ready.go:49] node "addons-344587" has status "Ready":"True"
	I0829 18:56:42.721968   18990 node_ready.go:38] duration metric: took 51.397004ms for node "addons-344587" to be "Ready" ...
	I0829 18:56:42.721979   18990 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
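
At this point the manifests are all submitted, and node_ready.go gates everything else on the node's Ready condition (met within 51ms here) before granting the extra 6m budget for the system-critical pods listed on this line. A rough stand-in for that node gate, polling the Ready condition via kubectl's jsonpath output rather than minikube's own client:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitNodeReady polls the node's Ready condition, the way node_ready.go
    // waits for "Ready":"True" before the per-pod checks begin.
    func waitNodeReady(node string, timeout time.Duration) error {
        jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "get", "node", node, "-o", jsonpath).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %s not Ready within %s", node, timeout)
    }

    func main() {
        if err := waitNodeReady("addons-344587", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
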
	I0829 18:56:42.738146   18990 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:56:42.738171   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:42.738232   18990 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0829 18:56:42.738245   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:42.763259   18990 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:42.780817   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.780844   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.781106   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.781123   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:43.040019   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:56:43.181593   18990 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-344587" context rescaled to 1 replicas
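
This rescale (CoreDNS trimmed from its default two replicas to one) is what produces the `pods "coredns-6f6b679f8f-fljpw" not found` error at 18:56:50.766 below: the replica being watched is the one torn down, pod_ready.go deliberately treats a vanished pod as skippable rather than failed, and the wait falls through to the surviving replica coredns-6f6b679f8f-t9nhw, which is already Ready. The rescale amounts to:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same effect as the kapi.go:214 rescale of coredns to one replica.
        out, err := exec.Command("kubectl", "--namespace", "kube-system",
            "scale", "deployment", "coredns", "--replicas=1").CombinedOutput()
        if err != nil {
            fmt.Printf("scale failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("%s", out)
    }
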
	I0829 18:56:43.200300   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:43.202392   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:43.854515   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:43.855345   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:44.201644   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:44.202443   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:44.387997   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.938966875s)
	I0829 18:56:44.388024   18990 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.714610192s)
	I0829 18:56:44.388060   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:44.388076   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:44.388398   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:44.388399   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:44.388423   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:44.388436   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:44.388446   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:44.388660   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:44.388693   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:44.388708   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:44.388728   18990 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-344587"
	I0829 18:56:44.390289   18990 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 18:56:44.390333   18990 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:56:44.391916   18990 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 18:56:44.392479   18990 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 18:56:44.393017   18990 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:56:44.393047   18990 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 18:56:44.437296   18990 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:56:44.437316   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:44.516363   18990 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:56:44.516386   18990 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 18:56:44.625475   18990 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:56:44.625495   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0829 18:56:44.694404   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:56:44.710971   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:44.714864   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:44.801891   18990 pod_ready.go:103] pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:44.896516   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:45.184541   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:45.185775   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:45.398848   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:45.581703   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.541635237s)
	I0829 18:56:45.581752   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:45.581767   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:45.582058   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:45.582084   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:45.582095   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:45.582102   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:45.582151   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:45.582390   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:45.582429   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:45.582481   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:45.685365   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:45.685844   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:45.900158   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:46.159769   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.465331058s)
	I0829 18:56:46.159816   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:46.159836   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:46.160177   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:46.160214   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:46.160224   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:46.160233   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:46.160240   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:46.160479   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:46.160495   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:46.160504   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:46.161466   18990 addons.go:475] Verifying addon gcp-auth=true in "addons-344587"
	I0829 18:56:46.163120   18990 out.go:177] * Verifying gcp-auth addon...
	I0829 18:56:46.165519   18990 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
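
gcp-auth comes up last because it is only switched on (18:56:41.620 above) once the application credentials and project name have been copied onto the node; its namespace, service, and webhook manifests are applied between 18:56:44 and 18:56:46, and the verifier here polls the gcp-auth namespace by the kubernetes.io/minikube-addons=gcp-auth label, the same kapi.go loop driving the registry, ingress, and csi-hostpath-driver waits interleaved below. Roughly the `kubectl wait` equivalent of that loop (the 6m timeout is an assumption, not taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Hypothetical one-shot equivalent of kapi.go's label-selector polling.
        out, err := exec.Command("kubectl", "wait", "--namespace", "gcp-auth",
            "--for=condition=Ready", "pod",
            "--selector", "kubernetes.io/minikube-addons=gcp-auth",
            "--timeout=6m").CombinedOutput()
        if err != nil {
            fmt.Printf("wait failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("%s", out)
    }
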
	I0829 18:56:46.185169   18990 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:56:46.185187   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:46.226445   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:46.226499   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:46.398205   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:46.671641   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:46.687495   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:46.688298   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:46.899177   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:47.168846   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:47.269193   18990 pod_ready.go:103] pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:47.270024   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:47.270716   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:47.398276   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:47.669190   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:47.682897   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:47.683324   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:47.897509   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:48.169032   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:48.184274   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:48.184383   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:48.397220   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:48.669220   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:48.682380   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:48.683472   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:48.896581   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:49.175691   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:49.183464   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:49.184739   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:49.635636   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:49.636948   18990 pod_ready.go:103] pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:49.668411   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:49.682494   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:49.683366   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:49.896900   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:50.169141   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:50.182913   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:50.183797   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:50.397440   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:50.669246   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:50.682992   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:50.683296   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:50.766028   18990 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-fljpw" not found
	I0829 18:56:50.766050   18990 pod_ready.go:82] duration metric: took 8.00276735s for pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace to be "Ready" ...
	E0829 18:56:50.766059   18990 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-fljpw" not found
	I0829 18:56:50.766065   18990 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t9nhw" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.771805   18990 pod_ready.go:93] pod "coredns-6f6b679f8f-t9nhw" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:50.771843   18990 pod_ready.go:82] duration metric: took 5.770841ms for pod "coredns-6f6b679f8f-t9nhw" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.771858   18990 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.778901   18990 pod_ready.go:93] pod "etcd-addons-344587" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:50.778924   18990 pod_ready.go:82] duration metric: took 7.055033ms for pod "etcd-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.778933   18990 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.787991   18990 pod_ready.go:93] pod "kube-apiserver-addons-344587" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:50.788017   18990 pod_ready.go:82] duration metric: took 9.072661ms for pod "kube-apiserver-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.788030   18990 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.795671   18990 pod_ready.go:93] pod "kube-controller-manager-addons-344587" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:50.795689   18990 pod_ready.go:82] duration metric: took 7.649451ms for pod "kube-controller-manager-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.795700   18990 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lgcxw" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.898617   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:50.968239   18990 pod_ready.go:93] pod "kube-proxy-lgcxw" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:50.968267   18990 pod_ready.go:82] duration metric: took 172.559179ms for pod "kube-proxy-lgcxw" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.968280   18990 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:51.170579   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:51.183357   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:51.183460   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:51.367505   18990 pod_ready.go:93] pod "kube-scheduler-addons-344587" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:51.367538   18990 pod_ready.go:82] duration metric: took 399.24913ms for pod "kube-scheduler-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:51.367550   18990 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:51.397192   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:51.761866   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:51.761991   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:51.762363   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:51.896480   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:52.169660   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:52.186439   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:52.186848   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:52.397676   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:52.669400   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:52.682397   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:52.682617   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:52.897411   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:53.169721   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:53.184323   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:53.184737   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:53.374149   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:53.397202   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:53.668898   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:53.682445   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:53.682704   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:53.897088   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:54.172913   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:54.182087   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:54.184118   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:54.397034   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:54.669527   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:54.683280   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:54.683838   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:54.897205   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:55.169589   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:55.183889   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:55.184209   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:55.375125   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:55.397322   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:55.668681   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:55.682670   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:55.683015   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:56.069249   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:56.168996   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:56.183093   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:56.183177   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:56.397533   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:56.670051   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:56.682495   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:56.683368   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:56.897459   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:57.169182   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:57.183116   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:57.184347   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:57.376144   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:57.397074   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:57.670186   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:57.683268   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:57.684614   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:57.897006   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:58.170523   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:58.183070   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:58.183215   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:58.396251   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:58.670888   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:58.779673   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:58.780745   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:58.902984   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:59.172390   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:59.183921   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:59.186092   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:59.396707   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:59.669214   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:59.685853   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:59.685887   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:59.873857   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:59.896170   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:00.169557   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:00.183863   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:00.184197   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:00.396380   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:00.669459   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:00.682973   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:00.684561   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:00.897013   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:01.169578   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:01.183325   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:01.183954   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:01.397283   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:01.669328   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:01.683006   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:01.683150   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:01.896786   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:02.169870   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:02.183250   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:02.184348   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:02.395075   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:02.397237   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:02.669855   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:02.686242   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:02.688366   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:02.898895   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:03.169561   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:03.184647   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:03.185289   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:03.397046   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:03.669112   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:03.682276   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:03.682683   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:03.897109   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:04.168963   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:04.182332   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:04.183973   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:04.396975   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:04.669614   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:04.683124   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:04.683347   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:04.873985   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:04.896943   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:05.169958   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:05.183528   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:05.187121   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:05.398577   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:05.669571   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:05.683191   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:05.683832   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:05.896841   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:06.169501   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:06.183520   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:06.184624   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:06.398361   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:06.668833   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:06.683988   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:06.684316   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:06.897127   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:07.169829   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:07.181726   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:07.183130   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:07.374378   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:07.397367   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:07.850766   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:07.851123   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:07.851346   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:07.897642   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:08.169471   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:08.183648   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:08.184288   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:08.396754   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:08.668753   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:08.683569   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:08.683850   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:08.896776   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:09.169838   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:09.184520   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:09.184757   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:09.561873   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:09.567669   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:09.669578   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:09.683274   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:09.683280   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:09.897002   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:10.169044   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:10.181987   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:10.182399   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:10.397928   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:10.669457   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:10.683541   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:10.683807   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:10.896493   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:11.168933   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:11.183346   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:11.184965   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:11.397290   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:11.669440   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:11.682915   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:11.684471   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:11.873117   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:11.896880   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:12.462502   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:12.462504   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:12.462546   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:12.462815   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:12.669512   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:12.682339   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:12.682757   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:12.896921   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:13.169375   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:13.182937   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:13.183252   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:13.396471   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:13.669340   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:13.682743   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:13.683149   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:13.897633   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:14.169741   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:14.183149   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:14.183611   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:14.373233   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:14.397241   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:14.669880   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:14.684196   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:14.685153   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:14.897486   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:15.168735   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:15.182856   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:15.183768   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:15.396668   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:15.669109   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:15.682395   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:15.683471   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:15.896837   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:16.168703   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:16.183373   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:16.185053   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:16.665051   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:16.676846   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:16.766323   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:16.766619   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:16.766726   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:16.901939   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:17.172656   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:17.182759   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:17.182930   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:17.396937   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:17.670486   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:17.683357   18990 kapi.go:107] duration metric: took 35.004012687s to wait for kubernetes.io/minikube-addons=registry ...
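The repeated kapi.go:96 lines above are minikube's readiness poll: each addon is watched via a Kubernetes label selector until its matching pods stop reporting Pending. As a rough standalone approximation of the same check (not minikube's actual mechanism; the 6m timeout simply mirrors the test's wait budget quoted earlier):

    kubectl --context addons-344587 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=registry \
      --for=condition=Ready --timeout=6m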
	I0829 18:57:17.683499   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:17.896856   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:18.169838   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:18.181982   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:18.398377   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:18.669201   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:18.682926   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:18.873544   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:18.897487   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:19.169391   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:19.182271   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:19.397321   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:19.671702   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:19.683428   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:19.896931   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:20.169472   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:20.184524   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:20.396986   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:20.669107   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:20.682955   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:20.874152   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:20.897581   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:21.169129   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:21.183119   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:21.397077   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:21.670237   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:21.682417   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:21.896731   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:22.169136   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:22.183327   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:22.399358   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:22.669025   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:22.682684   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:22.897209   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:23.168554   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:23.183317   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:23.376737   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:23.398862   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:23.669638   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:23.684290   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:23.896788   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:24.168867   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:24.182641   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:24.397360   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:24.669632   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:24.682814   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:24.896660   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:25.169124   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:25.182227   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:25.397300   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:25.669095   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:25.682571   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:25.877447   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:25.897875   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:26.171514   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:26.182786   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:26.399838   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:26.671735   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:26.683905   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:26.899821   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:27.169555   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:27.182826   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:27.397336   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:27.740885   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:27.742666   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:27.880815   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:27.897328   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:28.168851   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:28.182865   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:28.396062   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:28.669492   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:28.683095   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:28.897589   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:29.169448   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:29.183980   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:29.397702   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:29.670608   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:29.773227   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:29.897807   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:30.169552   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:30.182629   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:30.373870   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:30.396379   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:30.669796   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:30.683989   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:30.897362   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:31.174101   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:31.183759   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:31.396966   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:31.753369   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:31.770022   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:31.897431   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:32.169668   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:32.182893   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:32.374522   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:32.397648   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:32.670261   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:32.685779   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:32.901809   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:33.169762   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:33.184838   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:33.397320   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:33.669836   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:33.681530   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:33.896850   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:34.169041   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:34.182892   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:34.396541   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:34.669142   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:34.683175   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:34.878877   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:34.898182   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:35.169747   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:35.183483   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:35.398901   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:35.670294   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:35.685029   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:35.902939   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:36.171155   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:36.183010   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:36.398954   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:36.669195   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:36.682801   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:36.897585   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:37.168576   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:37.182987   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:37.374713   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:37.397360   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:37.668592   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:37.683096   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:37.896428   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:38.169266   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:38.183407   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:38.396482   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:38.670980   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:38.690197   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:38.896964   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:39.170158   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:39.183345   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:39.407490   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:39.669563   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:39.682556   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:39.874581   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:39.897965   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:40.169903   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:40.183693   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:40.397585   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:40.669944   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:40.698528   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:41.329496   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:41.330614   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:41.331037   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:41.398345   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:41.669869   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:41.682009   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:41.876524   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:41.900286   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:42.169633   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:42.183277   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:42.397178   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:42.669713   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:42.683223   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:42.897441   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:43.169982   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:43.182572   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:43.398170   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:43.670150   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:43.682336   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:43.896982   18990 kapi.go:107] duration metric: took 59.504499728s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
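As a sanity check on that duration metric: 18:57:43.897 minus 59.504s places the start of the csi-hostpath-driver wait at roughly 18:56:44, shortly before the first poll lines in this excerpt, which is consistent with the timestamps above.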
	I0829 18:57:44.169788   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:44.181970   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:44.374399   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:44.670424   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:44.683646   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:45.169286   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:45.182897   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:45.669754   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:45.683182   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:46.170200   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:46.182590   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:46.669378   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:46.682597   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:46.873706   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:47.169378   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:47.183205   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:47.669917   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:47.681862   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:48.170226   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:48.182041   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:48.668676   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:48.682964   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:48.875193   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:49.179977   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:49.188747   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:49.669429   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:49.682463   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:50.169368   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:50.183100   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:50.669811   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:50.683376   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:51.169326   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:51.182850   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:51.373942   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:52.006081   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:52.006844   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:52.170628   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:52.181892   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:52.669274   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:52.682776   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:53.169297   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:53.183257   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:53.374184   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:53.670600   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:53.682938   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:54.170077   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:54.182248   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:54.670362   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:54.682906   18990 kapi.go:107] duration metric: took 1m12.004398431s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0829 18:57:55.191112   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:55.678304   18990 kapi.go:107] duration metric: took 1m9.512783124s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 18:57:55.680462   18990 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-344587 cluster.
	I0829 18:57:55.681796   18990 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 18:57:55.683065   18990 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0829 18:57:55.684301   18990 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, ingress-dns, inspektor-gadget, helm-tiller, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0829 18:57:55.685410   18990 addons.go:510] duration metric: took 1m22.026796458s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner ingress-dns inspektor-gadget helm-tiller storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
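The two gcp-auth hints printed above can be exercised directly; a minimal sketch (the pod name "example" and its image are placeholders, not part of this run):

    # Opt one pod out of credential mounting by setting the
    # gcp-auth-skip-secret label at creation time:
    kubectl --context addons-344587 run example --image=busybox \
      --labels=gcp-auth-skip-secret=true -- sleep 3600
    # Re-mount credentials into pods that already existed when the
    # addon was enabled (the --refresh rerun suggested above):
    minikube -p addons-344587 addons enable gcp-auth --refresh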
	I0829 18:57:55.873642   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:57.873758   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:00.374030   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:02.410643   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:04.873737   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:07.374926   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:09.873926   18990 pod_ready.go:93] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"True"
	I0829 18:58:09.873948   18990 pod_ready.go:82] duration metric: took 1m18.506392284s for pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:09.873961   18990 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-z559z" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:09.879351   18990 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-z559z" in "kube-system" namespace has status "Ready":"True"
	I0829 18:58:09.879368   18990 pod_ready.go:82] duration metric: took 5.400164ms for pod "nvidia-device-plugin-daemonset-z559z" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:09.879384   18990 pod_ready.go:39] duration metric: took 1m27.157397179s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
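
	The pod_ready waits above poll each pod's Ready condition through the API (they run in-process via the Go client, not via kubectl). A rough hand-run equivalent for one of the listed selectors, assuming kubectl access to the same cluster:

	    # Approximates the pod_ready wait for the kube-dns label above;
	    # the 6m timeout mirrors the per-pod budget used in this log.
	    kubectl --context addons-344587 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
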
	I0829 18:58:09.879399   18990 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:58:09.879429   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:58:09.879478   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:58:09.930035   18990 cri.go:89] found id: "ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:09.930059   18990 cri.go:89] found id: ""
	I0829 18:58:09.930070   18990 logs.go:276] 1 containers: [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24]
	I0829 18:58:09.930131   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:09.934705   18990 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:58:09.934774   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:58:09.974110   18990 cri.go:89] found id: "3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:09.974133   18990 cri.go:89] found id: ""
	I0829 18:58:09.974142   18990 logs.go:276] 1 containers: [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459]
	I0829 18:58:09.974198   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:09.978660   18990 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:58:09.978721   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:58:10.017468   18990 cri.go:89] found id: "edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:10.017489   18990 cri.go:89] found id: ""
	I0829 18:58:10.017499   18990 logs.go:276] 1 containers: [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1]
	I0829 18:58:10.017546   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:10.022568   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:58:10.022633   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:58:10.066173   18990 cri.go:89] found id: "46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:10.066193   18990 cri.go:89] found id: ""
	I0829 18:58:10.066200   18990 logs.go:276] 1 containers: [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef]
	I0829 18:58:10.066254   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:10.071876   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:58:10.071927   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:58:10.113139   18990 cri.go:89] found id: "e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:10.113158   18990 cri.go:89] found id: ""
	I0829 18:58:10.113164   18990 logs.go:276] 1 containers: [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565]
	I0829 18:58:10.113210   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:10.117643   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:58:10.117707   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:58:10.173282   18990 cri.go:89] found id: "79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:10.173301   18990 cri.go:89] found id: ""
	I0829 18:58:10.173308   18990 logs.go:276] 1 containers: [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24]
	I0829 18:58:10.173350   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:10.177760   18990 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:58:10.177826   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:58:10.219010   18990 cri.go:89] found id: ""
	I0829 18:58:10.219040   18990 logs.go:276] 0 containers: []
	W0829 18:58:10.219050   18990 logs.go:278] No container was found matching "kindnet"
	I0829 18:58:10.219062   18990 logs.go:123] Gathering logs for kube-apiserver [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24] ...
	I0829 18:58:10.219078   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:10.277241   18990 logs.go:123] Gathering logs for kube-proxy [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565] ...
	I0829 18:58:10.277270   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:10.323859   18990 logs.go:123] Gathering logs for kube-controller-manager [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24] ...
	I0829 18:58:10.323886   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:10.385553   18990 logs.go:123] Gathering logs for container status ...
	I0829 18:58:10.385580   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:58:10.435083   18990 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:58:10.435110   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:58:11.402687   18990 logs.go:123] Gathering logs for kubelet ...
	I0829 18:58:11.402729   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 18:58:11.453024   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:11.453296   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:11.488836   18990 logs.go:123] Gathering logs for dmesg ...
	I0829 18:58:11.488870   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:58:11.504148   18990 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:58:11.504172   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:58:11.643790   18990 logs.go:123] Gathering logs for etcd [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459] ...
	I0829 18:58:11.643818   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:11.726389   18990 logs.go:123] Gathering logs for coredns [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1] ...
	I0829 18:58:11.726425   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:11.766070   18990 logs.go:123] Gathering logs for kube-scheduler [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef] ...
	I0829 18:58:11.766094   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:11.811796   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:11.811817   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 18:58:11.811865   18990 out.go:270] X Problems detected in kubelet:
	W0829 18:58:11.811879   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:11.811890   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:11.811902   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:11.811911   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
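
	The log-gathering cycle above repeats one two-step pattern per component: resolve container IDs by name, then tail each container's logs. Condensed into a loop using the same commands as the Run: lines (kindnet is also queried above but matched no containers on this CRI-O node):

	    # Same enumerate-then-dump pattern the gathering cycle performs.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      for id in $(sudo crictl ps -a --quiet --name="$name"); do
	        sudo crictl logs --tail 400 "$id"
	      done
	    done
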
	I0829 18:58:21.813233   18990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:58:21.832761   18990 api_server.go:72] duration metric: took 1m48.174160591s to wait for apiserver process to appear ...
	I0829 18:58:21.832788   18990 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:58:21.832817   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:58:21.832862   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:58:21.873058   18990 cri.go:89] found id: "ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:21.873083   18990 cri.go:89] found id: ""
	I0829 18:58:21.873093   18990 logs.go:276] 1 containers: [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24]
	I0829 18:58:21.873154   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:21.877320   18990 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:58:21.877374   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:58:21.916655   18990 cri.go:89] found id: "3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:21.916684   18990 cri.go:89] found id: ""
	I0829 18:58:21.916692   18990 logs.go:276] 1 containers: [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459]
	I0829 18:58:21.916736   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:21.920999   18990 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:58:21.921045   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:58:21.965578   18990 cri.go:89] found id: "edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:21.965606   18990 cri.go:89] found id: ""
	I0829 18:58:21.965615   18990 logs.go:276] 1 containers: [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1]
	I0829 18:58:21.965669   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:21.969756   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:58:21.969822   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:58:22.017458   18990 cri.go:89] found id: "46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:22.017480   18990 cri.go:89] found id: ""
	I0829 18:58:22.017491   18990 logs.go:276] 1 containers: [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef]
	I0829 18:58:22.017549   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:22.021887   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:58:22.021956   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:58:22.059660   18990 cri.go:89] found id: "e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:22.059684   18990 cri.go:89] found id: ""
	I0829 18:58:22.059693   18990 logs.go:276] 1 containers: [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565]
	I0829 18:58:22.059748   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:22.063706   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:58:22.063759   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:58:22.099570   18990 cri.go:89] found id: "79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:22.099596   18990 cri.go:89] found id: ""
	I0829 18:58:22.099606   18990 logs.go:276] 1 containers: [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24]
	I0829 18:58:22.099660   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:22.103920   18990 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:58:22.103979   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:58:22.140807   18990 cri.go:89] found id: ""
	I0829 18:58:22.140837   18990 logs.go:276] 0 containers: []
	W0829 18:58:22.140849   18990 logs.go:278] No container was found matching "kindnet"
	I0829 18:58:22.140860   18990 logs.go:123] Gathering logs for kube-controller-manager [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24] ...
	I0829 18:58:22.140874   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:22.204452   18990 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:58:22.204483   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:58:23.279114   18990 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:58:23.279161   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:58:23.396916   18990 logs.go:123] Gathering logs for kube-apiserver [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24] ...
	I0829 18:58:23.396950   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:23.445310   18990 logs.go:123] Gathering logs for etcd [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459] ...
	I0829 18:58:23.445352   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:23.513636   18990 logs.go:123] Gathering logs for coredns [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1] ...
	I0829 18:58:23.513664   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:23.554990   18990 logs.go:123] Gathering logs for kube-scheduler [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef] ...
	I0829 18:58:23.555020   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:23.601432   18990 logs.go:123] Gathering logs for kube-proxy [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565] ...
	I0829 18:58:23.601464   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:23.639619   18990 logs.go:123] Gathering logs for kubelet ...
	I0829 18:58:23.639647   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 18:58:23.690102   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:23.690271   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:23.728666   18990 logs.go:123] Gathering logs for dmesg ...
	I0829 18:58:23.728701   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:58:23.743456   18990 logs.go:123] Gathering logs for container status ...
	I0829 18:58:23.743482   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:58:23.796892   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:23.796919   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 18:58:23.796981   18990 out.go:270] X Problems detected in kubelet:
	W0829 18:58:23.796994   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:23.797004   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:23.797016   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:23.797026   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:58:33.797922   18990 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0829 18:58:33.802928   18990 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I0829 18:58:33.803830   18990 api_server.go:141] control plane version: v1.31.0
	I0829 18:58:33.803850   18990 api_server.go:131] duration metric: took 11.971056831s to wait for apiserver health ...
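
	The healthz wait above is a plain HTTPS GET that succeeds once the endpoint returns 200 with body "ok". A manual equivalent against the same address; under default RBAC /healthz is typically readable without credentials, and -k skips TLS verification for a quick check (assumptions that may not hold on hardened clusters):

	    # Expects HTTP 200 and the body "ok", matching the log above.
	    curl -k https://192.168.39.172:8443/healthz
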
	I0829 18:58:33.803858   18990 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:58:33.803876   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:58:33.803917   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:58:33.854225   18990 cri.go:89] found id: "ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:33.854244   18990 cri.go:89] found id: ""
	I0829 18:58:33.854250   18990 logs.go:276] 1 containers: [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24]
	I0829 18:58:33.854290   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:33.858238   18990 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:58:33.858286   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:58:33.900025   18990 cri.go:89] found id: "3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:33.900045   18990 cri.go:89] found id: ""
	I0829 18:58:33.900054   18990 logs.go:276] 1 containers: [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459]
	I0829 18:58:33.900094   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:33.904590   18990 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:58:33.904641   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:58:33.942867   18990 cri.go:89] found id: "edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:33.942888   18990 cri.go:89] found id: ""
	I0829 18:58:33.942895   18990 logs.go:276] 1 containers: [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1]
	I0829 18:58:33.942953   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:33.947338   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:58:33.947388   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:58:33.991266   18990 cri.go:89] found id: "46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:33.991285   18990 cri.go:89] found id: ""
	I0829 18:58:33.991292   18990 logs.go:276] 1 containers: [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef]
	I0829 18:58:33.991334   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:33.995550   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:58:33.995601   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:58:34.034277   18990 cri.go:89] found id: "e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:34.034294   18990 cri.go:89] found id: ""
	I0829 18:58:34.034302   18990 logs.go:276] 1 containers: [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565]
	I0829 18:58:34.034341   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:34.038466   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:58:34.038546   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:58:34.078562   18990 cri.go:89] found id: "79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:34.078579   18990 cri.go:89] found id: ""
	I0829 18:58:34.078586   18990 logs.go:276] 1 containers: [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24]
	I0829 18:58:34.078630   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:34.083366   18990 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:58:34.083423   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:58:34.145061   18990 cri.go:89] found id: ""
	I0829 18:58:34.145090   18990 logs.go:276] 0 containers: []
	W0829 18:58:34.145099   18990 logs.go:278] No container was found matching "kindnet"
	I0829 18:58:34.145106   18990 logs.go:123] Gathering logs for kubelet ...
	I0829 18:58:34.145117   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 18:58:34.193492   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:34.193696   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:34.230073   18990 logs.go:123] Gathering logs for kube-scheduler [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef] ...
	I0829 18:58:34.230109   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:34.281725   18990 logs.go:123] Gathering logs for kube-proxy [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565] ...
	I0829 18:58:34.281758   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:34.325201   18990 logs.go:123] Gathering logs for container status ...
	I0829 18:58:34.325228   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:58:34.371370   18990 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:58:34.371400   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:58:35.159659   18990 logs.go:123] Gathering logs for dmesg ...
	I0829 18:58:35.159722   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:58:35.175376   18990 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:58:35.175403   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:58:35.302779   18990 logs.go:123] Gathering logs for kube-apiserver [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24] ...
	I0829 18:58:35.302810   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:35.362682   18990 logs.go:123] Gathering logs for etcd [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459] ...
	I0829 18:58:35.362711   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:35.435174   18990 logs.go:123] Gathering logs for coredns [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1] ...
	I0829 18:58:35.435207   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:35.475282   18990 logs.go:123] Gathering logs for kube-controller-manager [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24] ...
	I0829 18:58:35.475310   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:35.539640   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:35.539666   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 18:58:35.539716   18990 out.go:270] X Problems detected in kubelet:
	W0829 18:58:35.539724   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:35.539735   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:35.539748   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:35.539754   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:58:45.550232   18990 system_pods.go:59] 18 kube-system pods found
	I0829 18:58:45.550261   18990 system_pods.go:61] "coredns-6f6b679f8f-t9nhw" [01782eed-98db-4768-8ab6-bd429fe58305] Running
	I0829 18:58:45.550266   18990 system_pods.go:61] "csi-hostpath-attacher-0" [318ff00f-e5be-4029-b58b-30185cb48a7f] Running
	I0829 18:58:45.550269   18990 system_pods.go:61] "csi-hostpath-resizer-0" [ba8fc44d-cd38-469f-8d42-7aedd5d81a06] Running
	I0829 18:58:45.550272   18990 system_pods.go:61] "csi-hostpathplugin-96vz6" [207fbe26-1d1e-48c7-8bfd-4621264e0739] Running
	I0829 18:58:45.550275   18990 system_pods.go:61] "etcd-addons-344587" [332f8ecf-d239-4d45-b8c2-e023c3849b2b] Running
	I0829 18:58:45.550278   18990 system_pods.go:61] "kube-apiserver-addons-344587" [cec380f4-ded8-4496-b6c5-54ebeeecb720] Running
	I0829 18:58:45.550281   18990 system_pods.go:61] "kube-controller-manager-addons-344587" [4812d16d-522f-44e2-b353-798732857218] Running
	I0829 18:58:45.550284   18990 system_pods.go:61] "kube-ingress-dns-minikube" [2aeaeabc-ac3f-4f8a-88ee-84fe5d623dd6] Running
	I0829 18:58:45.550286   18990 system_pods.go:61] "kube-proxy-lgcxw" [0be1dddc-793d-471e-aa16-9752951fb72a] Running
	I0829 18:58:45.550289   18990 system_pods.go:61] "kube-scheduler-addons-344587" [c36a46ec-4466-46f5-ba95-40110040eb06] Running
	I0829 18:58:45.550291   18990 system_pods.go:61] "metrics-server-8988944d9-9tplt" [427d61c8-9ff3-4718-9faf-896d20af6cdc] Running
	I0829 18:58:45.550295   18990 system_pods.go:61] "nvidia-device-plugin-daemonset-z559z" [f30c9660-ea3d-40c2-9842-bcf8bb18c0b6] Running
	I0829 18:58:45.550297   18990 system_pods.go:61] "registry-6fb4cdfc84-dmlc6" [074412f0-2988-4497-a2bb-abd86ddc18ab] Running
	I0829 18:58:45.550300   18990 system_pods.go:61] "registry-proxy-x5bqm" [45f795aa-aca5-41b5-a455-89b285ce9531] Running
	I0829 18:58:45.550303   18990 system_pods.go:61] "snapshot-controller-56fcc65765-8fbbn" [ed961d54-d7a4-485f-bb8e-e7195ed4e80e] Running
	I0829 18:58:45.550307   18990 system_pods.go:61] "snapshot-controller-56fcc65765-gn5lq" [bf5c7495-59fd-4151-abce-7cf6072e995e] Running
	I0829 18:58:45.550309   18990 system_pods.go:61] "storage-provisioner" [14e72aaf-6cd6-4740-a9d5-e4a739fed914] Running
	I0829 18:58:45.550312   18990 system_pods.go:61] "tiller-deploy-b48cc5f79-bxws5" [d2380d68-348a-4dc1-8c40-1a4e9fa6ab04] Running
	I0829 18:58:45.550318   18990 system_pods.go:74] duration metric: took 11.746455029s to wait for pod list to return data ...
	I0829 18:58:45.550328   18990 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:58:45.553072   18990 default_sa.go:45] found service account: "default"
	I0829 18:58:45.553088   18990 default_sa.go:55] duration metric: took 2.755882ms for default service account to be created ...
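
	The default service account wait above amounts to a simple existence check:

	    # Succeeds once the "default" ServiceAccount exists, as found above.
	    kubectl --context addons-344587 -n default get serviceaccount default
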
	I0829 18:58:45.553095   18990 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:58:45.559715   18990 system_pods.go:86] 18 kube-system pods found
	I0829 18:58:45.559734   18990 system_pods.go:89] "coredns-6f6b679f8f-t9nhw" [01782eed-98db-4768-8ab6-bd429fe58305] Running
	I0829 18:58:45.559740   18990 system_pods.go:89] "csi-hostpath-attacher-0" [318ff00f-e5be-4029-b58b-30185cb48a7f] Running
	I0829 18:58:45.559744   18990 system_pods.go:89] "csi-hostpath-resizer-0" [ba8fc44d-cd38-469f-8d42-7aedd5d81a06] Running
	I0829 18:58:45.559748   18990 system_pods.go:89] "csi-hostpathplugin-96vz6" [207fbe26-1d1e-48c7-8bfd-4621264e0739] Running
	I0829 18:58:45.559751   18990 system_pods.go:89] "etcd-addons-344587" [332f8ecf-d239-4d45-b8c2-e023c3849b2b] Running
	I0829 18:58:45.559756   18990 system_pods.go:89] "kube-apiserver-addons-344587" [cec380f4-ded8-4496-b6c5-54ebeeecb720] Running
	I0829 18:58:45.559760   18990 system_pods.go:89] "kube-controller-manager-addons-344587" [4812d16d-522f-44e2-b353-798732857218] Running
	I0829 18:58:45.559764   18990 system_pods.go:89] "kube-ingress-dns-minikube" [2aeaeabc-ac3f-4f8a-88ee-84fe5d623dd6] Running
	I0829 18:58:45.559767   18990 system_pods.go:89] "kube-proxy-lgcxw" [0be1dddc-793d-471e-aa16-9752951fb72a] Running
	I0829 18:58:45.559771   18990 system_pods.go:89] "kube-scheduler-addons-344587" [c36a46ec-4466-46f5-ba95-40110040eb06] Running
	I0829 18:58:45.559774   18990 system_pods.go:89] "metrics-server-8988944d9-9tplt" [427d61c8-9ff3-4718-9faf-896d20af6cdc] Running
	I0829 18:58:45.559778   18990 system_pods.go:89] "nvidia-device-plugin-daemonset-z559z" [f30c9660-ea3d-40c2-9842-bcf8bb18c0b6] Running
	I0829 18:58:45.559781   18990 system_pods.go:89] "registry-6fb4cdfc84-dmlc6" [074412f0-2988-4497-a2bb-abd86ddc18ab] Running
	I0829 18:58:45.559785   18990 system_pods.go:89] "registry-proxy-x5bqm" [45f795aa-aca5-41b5-a455-89b285ce9531] Running
	I0829 18:58:45.559791   18990 system_pods.go:89] "snapshot-controller-56fcc65765-8fbbn" [ed961d54-d7a4-485f-bb8e-e7195ed4e80e] Running
	I0829 18:58:45.559794   18990 system_pods.go:89] "snapshot-controller-56fcc65765-gn5lq" [bf5c7495-59fd-4151-abce-7cf6072e995e] Running
	I0829 18:58:45.559797   18990 system_pods.go:89] "storage-provisioner" [14e72aaf-6cd6-4740-a9d5-e4a739fed914] Running
	I0829 18:58:45.559801   18990 system_pods.go:89] "tiller-deploy-b48cc5f79-bxws5" [d2380d68-348a-4dc1-8c40-1a4e9fa6ab04] Running
	I0829 18:58:45.559806   18990 system_pods.go:126] duration metric: took 6.706766ms to wait for k8s-apps to be running ...
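
	The k8s-apps check above lists the kube-system pods and requires each to be Running. One way to express the same assertion with a field selector (empty output means every pod is in phase Running, as in this run):

	    # Lists any kube-system pod NOT in phase Running; empty here.
	    kubectl --context addons-344587 -n kube-system get pods \
	      --field-selector=status.phase!=Running
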
	I0829 18:58:45.559815   18990 system_svc.go:44] waiting for kubelet service to be running ...
	I0829 18:58:45.559853   18990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:58:45.577199   18990 system_svc.go:56] duration metric: took 17.376357ms WaitForService to wait for kubelet
	I0829 18:58:45.577228   18990 kubeadm.go:582] duration metric: took 2m11.91863045s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:58:45.577249   18990 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:58:45.580335   18990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:58:45.580362   18990 node_conditions.go:123] node cpu capacity is 2
	I0829 18:58:45.580377   18990 node_conditions.go:105] duration metric: took 3.122527ms to run NodePressure ...
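
	The capacity figures above (17734596Ki ephemeral storage, 2 CPUs) are read from the node's status. The same values can be pulled with a jsonpath query, using the node name from this run:

	    # Prints the capacity map the NodePressure check above inspected.
	    kubectl --context addons-344587 get node addons-344587 \
	      -o jsonpath='{.status.capacity}'
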
	I0829 18:58:45.580391   18990 start.go:241] waiting for startup goroutines ...
	I0829 18:58:45.580403   18990 start.go:246] waiting for cluster config update ...
	I0829 18:58:45.580427   18990 start.go:255] writing updated cluster config ...
	I0829 18:58:45.580716   18990 ssh_runner.go:195] Run: rm -f paused
	I0829 18:58:45.628072   18990 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:58:45.630291   18990 out.go:177] * Done! kubectl is now configured to use "addons-344587" cluster and "default" namespace by default
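
	Per the final message, the kubeconfig now targets the new cluster; a quick confirmation:

	    # Should print addons-344587, matching the "Done!" line above.
	    kubectl config current-context
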
	
	
	==> CRI-O <==
	Aug 29 19:10:34 addons-344587 crio[658]: time="2024-08-29 19:10:34.495111878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958634495081979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a8c4b39-0466-45a0-8ee6-9a703ca011fd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:10:34 addons-344587 crio[658]: time="2024-08-29 19:10:34.495776710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f6c469f-5181-4831-950f-49901b71f73d name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:10:34 addons-344587 crio[658]: time="2024-08-29 19:10:34.495844514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f6c469f-5181-4831-950f-49901b71f73d name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:10:34 addons-344587 crio[658]: time="2024-08-29 19:10:34.496088983Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5cd8313ff37bae72eced3f03f75a5b3dda5256094e6c9637614a07540de04d6b,PodSandboxId:31a4794e76c5a6d4324d26b7e11b8620a4b1fbb349a6204ccff8f657aac6e767,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724958626537636688,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-4lddf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1287ae1-fe54-458a-97b2-472886127905,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ab3a824e7e135eccb2feebf6514044e173d635da6a72ba4865fa34b9d7b554,PodSandboxId:9f57bccc4a62cb14a1b0d1f9d9e7d3383fc17afb7d58885da58d33d9aaba7e6b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724958486293343065,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 315329e4-6bc2-4164-a37f-2d9d7857eba1,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c8e7bcbfebef995192df5d64f603c9d08c91b1a799ccb037398d00858d33ba,PodSandboxId:fe391f299e153d6c0972dbdf457c92e1ad975a4e3574b15f1ba653832b929f55,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724957874945557865,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-m8795,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 3289f49d-61c4-4693-818b-5a8e73d95410,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54164e21bd9a6b72fe31085cb8cbe33df2bbc8670c01312f597d476f78b5d1c,PodSandboxId:05bebeb94a32a8d55a95ed5362201cbf7e480ead93a4fbbbeeee251aac433a08,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724957825221794572,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9tplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427d61c8-9ff3-4718-9faf-896d20af6cdc,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a0a245e481d700614dae7bfc6d7d4bbe9f074615c88e1373f761abb63e08cc,PodSandboxId:caa68615ea5869f55238af371cf096755b420e33837f56a5391355cc5270a453,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1724957800386945011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e72aaf-6cd6-4740-a9d5-e4a739fed914,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1,PodSandboxId:78300c884569d8c916441667591ef1a6cdfdc769a616ccf684cd3aeb6ee173a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:17249577979
14552214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t9nhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01782eed-98db-4768-8ab6-bd429fe58305,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565,PodSandboxId:2f0f516b497dee578fa502336c368d6a64299cb73064fc27e94ba33dcd0c9623,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560
a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724957793494579868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lgcxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be1dddc-793d-471e-aa16-9752951fb72a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef,PodSandboxId:bc4dfc643a4f95574b0c4d436e0af5e463ff66fcda536cf28cb9fd9980b55ae3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724957783507566907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433e7620b7da2027dd73dc3bddb2f997,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24,PodSandboxId:f9feaafb78b8d9b6c588088d4158b1b980da872d4034937746daf0b1ac5998c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724957783501034123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df35d1b995ba56a5ea532995ddbeb880,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459,PodSandboxId:5b90ade16a1ec3848984ac4ccc079dbc33282af9bb7be159d955ed993979dd7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724957783509496209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3105eec1aeed59eaefbe2b389301917,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24,PodSandboxId:948b38ffb05be778e3450b635de34556972385a53111e037a04d34383f52c377,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724957783431784942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba40dc401b574376e18cc2969d87be7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f6c469f-5181-4831-950f-49901b71f73d name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:10:34 addons-344587 crio[658]: time="2024-08-29 19:10:34.534889371Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1952e165-8ce0-4e98-8c6c-c99e2fd55bf6 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:10:34 addons-344587 crio[658]: time="2024-08-29 19:10:34.535012301Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1952e165-8ce0-4e98-8c6c-c99e2fd55bf6 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:10:34 addons-344587 crio[658]: time="2024-08-29 19:10:34.535960685Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb1ddcdc-3ab8-4784-b7f6-37bdfc666631 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:10:34 addons-344587 crio[658]: time="2024-08-29 19:10:34.537521774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958634537494165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb1ddcdc-3ab8-4784-b7f6-37bdfc666631 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:10:34 addons-344587 crio[658]: time="2024-08-29 19:10:34.537995789Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=220530ff-9a6c-46e0-b0fc-a1a9581f45c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:10:34 addons-344587 crio[658]: time="2024-08-29 19:10:34.538066197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=220530ff-9a6c-46e0-b0fc-a1a9581f45c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:10:34 addons-344587 crio[658]: time="2024-08-29 19:10:34.538322622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5cd8313ff37bae72eced3f03f75a5b3dda5256094e6c9637614a07540de04d6b,PodSandboxId:31a4794e76c5a6d4324d26b7e11b8620a4b1fbb349a6204ccff8f657aac6e767,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724958626537636688,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-4lddf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1287ae1-fe54-458a-97b2-472886127905,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ab3a824e7e135eccb2feebf6514044e173d635da6a72ba4865fa34b9d7b554,PodSandboxId:9f57bccc4a62cb14a1b0d1f9d9e7d3383fc17afb7d58885da58d33d9aaba7e6b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724958486293343065,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 315329e4-6bc2-4164-a37f-2d9d7857eba1,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c8e7bcbfebef995192df5d64f603c9d08c91b1a799ccb037398d00858d33ba,PodSandboxId:fe391f299e153d6c0972dbdf457c92e1ad975a4e3574b15f1ba653832b929f55,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724957874945557865,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-m8795,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 3289f49d-61c4-4693-818b-5a8e73d95410,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54164e21bd9a6b72fe31085cb8cbe33df2bbc8670c01312f597d476f78b5d1c,PodSandboxId:05bebeb94a32a8d55a95ed5362201cbf7e480ead93a4fbbbeeee251aac433a08,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724957825221794572,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9tplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427d61c8-9ff3-4718-9faf-896d20af6cdc,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a0a245e481d700614dae7bfc6d7d4bbe9f074615c88e1373f761abb63e08cc,PodSandboxId:caa68615ea5869f55238af371cf096755b420e33837f56a5391355cc5270a453,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1724957800386945011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e72aaf-6cd6-4740-a9d5-e4a739fed914,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1,PodSandboxId:78300c884569d8c916441667591ef1a6cdfdc769a616ccf684cd3aeb6ee173a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:17249577979
14552214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t9nhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01782eed-98db-4768-8ab6-bd429fe58305,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565,PodSandboxId:2f0f516b497dee578fa502336c368d6a64299cb73064fc27e94ba33dcd0c9623,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560
a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724957793494579868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lgcxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be1dddc-793d-471e-aa16-9752951fb72a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef,PodSandboxId:bc4dfc643a4f95574b0c4d436e0af5e463ff66fcda536cf28cb9fd9980b55ae3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724957783507566907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433e7620b7da2027dd73dc3bddb2f997,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24,PodSandboxId:f9feaafb78b8d9b6c588088d4158b1b980da872d4034937746daf0b1ac5998c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724957783501034123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df35d1b995ba56a5ea532995ddbeb880,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459,PodSandboxId:5b90ade16a1ec3848984ac4ccc079dbc33282af9bb7be159d955ed993979dd7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724957783509496209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3105eec1aeed59eaefbe2b389301917,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24,PodSandboxId:948b38ffb05be778e3450b635de34556972385a53111e037a04d34383f52c377,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724957783431784942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba40dc401b574376e18cc2969d87be7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=220530ff-9a6c-46e0-b0fc-a1a9581f45c9 name=/runtime.v1.RuntimeService/ListContainers
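
The debug entries above are CRI-O's gRPC interceptor log, captured from the crio systemd unit inside the minikube VM. Assuming the addons-344587 profile is still running, the same stream can be pulled directly with something like:

  minikube ssh -p addons-344587 "sudo journalctl -u crio --no-pager -n 100"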
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5cd8313ff37ba       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   8 seconds ago       Running             hello-world-app           0                   31a4794e76c5a       hello-world-app-55bf9c44b4-4lddf
	89ab3a824e7e1       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         2 minutes ago       Running             nginx                     0                   9f57bccc4a62c       nginx
	e9c8e7bcbfebe       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            12 minutes ago      Running             gcp-auth                  0                   fe391f299e153       gcp-auth-89d5ffd79-m8795
	d54164e21bd9a       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   13 minutes ago      Running             metrics-server            0                   05bebeb94a32a       metrics-server-8988944d9-9tplt
	15a0a245e481d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        13 minutes ago      Running             storage-provisioner       0                   caa68615ea586       storage-provisioner
	edffa46b48365       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        13 minutes ago      Running             coredns                   0                   78300c884569d       coredns-6f6b679f8f-t9nhw
	e6b94afd2073c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        14 minutes ago      Running             kube-proxy                0                   2f0f516b497de       kube-proxy-lgcxw
	3a9bf9036a456       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        14 minutes ago      Running             etcd                      0                   5b90ade16a1ec       etcd-addons-344587
	46ea401f11d33       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        14 minutes ago      Running             kube-scheduler            0                   bc4dfc643a4f9       kube-scheduler-addons-344587
	79990e8cc7f54       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        14 minutes ago      Running             kube-controller-manager   0                   f9feaafb78b8d       kube-controller-manager-addons-344587
	ca9198782e10b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        14 minutes ago      Running             kube-apiserver            0                   948b38ffb05be       kube-apiserver-addons-344587
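
The table above is the container runtime's own view of the node. Assuming the profile is still up, an equivalent listing (including exited containers) can be reproduced from inside the VM with:

  minikube ssh -p addons-344587 "sudo crictl ps -a"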
	
	
	==> coredns [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1] <==
	[INFO] 10.244.0.7:39145 - 42946 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00005377s
	[INFO] 10.244.0.22:42016 - 45991 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000368592s
	[INFO] 10.244.0.22:34827 - 63066 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000120289s
	[INFO] 10.244.0.22:43077 - 9805 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000099235s
	[INFO] 10.244.0.22:42369 - 39774 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079385s
	[INFO] 10.244.0.22:60024 - 29907 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114811s
	[INFO] 10.244.0.22:45308 - 1618 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000060509s
	[INFO] 10.244.0.22:58816 - 5970 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001084367s
	[INFO] 10.244.0.22:42307 - 58779 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.0009186s
	[INFO] 10.244.0.7:44744 - 64553 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000299877s
	[INFO] 10.244.0.7:44744 - 5421 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000076334s
	[INFO] 10.244.0.7:46191 - 55261 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114362s
	[INFO] 10.244.0.7:46191 - 4319 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000078476s
	[INFO] 10.244.0.7:37623 - 4000 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088295s
	[INFO] 10.244.0.7:37623 - 54189 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000065651s
	[INFO] 10.244.0.7:37785 - 7471 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114015s
	[INFO] 10.244.0.7:37785 - 24365 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110504s
	[INFO] 10.244.0.7:60734 - 39177 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000153016s
	[INFO] 10.244.0.7:60734 - 36925 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00004885s
	[INFO] 10.244.0.7:56476 - 49913 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091243s
	[INFO] 10.244.0.7:56476 - 39675 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038412s
	[INFO] 10.244.0.7:52181 - 34800 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004683s
	[INFO] 10.244.0.7:52181 - 48114 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035557s
	[INFO] 10.244.0.7:44052 - 60911 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000130226s
	[INFO] 10.244.0.7:44052 - 20460 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000063327s
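
The NXDOMAIN/NOERROR pairs above show the pod's DNS search-path expansion at work: registry.kube-system.svc.cluster.local is first tried with the kube-system.svc.cluster.local, svc.cluster.local, and cluster.local suffixes appended (all NXDOMAIN) before the bare FQDN resolves (NOERROR). A hedged way to reproduce the lookup from inside the cluster, reusing the same busybox test image the suite already pulls:

  kubectl --context addons-344587 run dns-probe --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system.svc.cluster.local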
	
	
	==> describe nodes <==
	Name:               addons-344587
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-344587
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=addons-344587
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_56_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-344587
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:56:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-344587
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:10:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:10:33 +0000   Thu, 29 Aug 2024 18:56:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:10:33 +0000   Thu, 29 Aug 2024 18:56:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:10:33 +0000   Thu, 29 Aug 2024 18:56:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:10:33 +0000   Thu, 29 Aug 2024 18:56:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    addons-344587
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 260355e6785f4e7bb1e92498cafe0432
	  System UUID:                260355e6-785f-4e7b-b1e9-2498cafe0432
	  Boot ID:                    63059b99-f440-429e-a6ac-c800d57acda3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-4lddf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-89d5ffd79-m8795                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-6f6b679f8f-t9nhw                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 etcd-addons-344587                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-344587             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-344587    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-lgcxw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-344587             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-8988944d9-9tplt           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node addons-344587 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node addons-344587 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node addons-344587 status is now: NodeHasSufficientPID
	  Normal  NodeReady                14m   kubelet          Node addons-344587 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node addons-344587 event: Registered Node addons-344587 in Controller
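
The node report above is standard kubectl describe output; the 850m (42%) CPU request total is simply the sum of the per-pod requests listed (100m coredns + 100m etcd + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler + 100m metrics-server) over the node's 2-CPU capacity. Assuming the cluster is still reachable, it can be regenerated with:

  kubectl --context addons-344587 describe node addons-344587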
	
	
	==> dmesg <==
	[  +5.090259] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.165082] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.848747] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.168731] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.203781] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.023105] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.195417] kauditd_printk_skb: 3 callbacks suppressed
	[Aug29 18:58] kauditd_printk_skb: 49 callbacks suppressed
	[ +39.477849] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 18:59] kauditd_printk_skb: 2 callbacks suppressed
	[Aug29 19:00] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 19:03] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 19:06] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 19:07] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.472194] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.550952] kauditd_printk_skb: 23 callbacks suppressed
	[  +7.540533] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.552049] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.997754] kauditd_printk_skb: 58 callbacks suppressed
	[  +6.004601] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.537055] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.328650] kauditd_printk_skb: 11 callbacks suppressed
	[Aug29 19:08] kauditd_printk_skb: 46 callbacks suppressed
	[Aug29 19:10] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.194292] kauditd_printk_skb: 19 callbacks suppressed
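
The dmesg excerpt consists almost entirely of kauditd_printk_skb notices, which are audit-log rate-limit suppressions from the guest kernel rather than errors. Assuming the VM is running, the tail of the ring buffer can be captured with:

  minikube ssh -p addons-344587 "sudo dmesg | tail -n 30"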
	
	
	==> etcd [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459] <==
	{"level":"warn","ts":"2024-08-29T18:57:41.311333Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.784612ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:57:41.313345Z","caller":"traceutil/trace.go:171","msg":"trace[1914043573] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1103; }","duration":"166.776213ms","start":"2024-08-29T18:57:41.146545Z","end":"2024-08-29T18:57:41.313321Z","steps":["trace[1914043573] 'agreement among raft nodes before linearized reading'  (duration: 164.779946ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:57:49.529587Z","caller":"traceutil/trace.go:171","msg":"trace[1099537540] transaction","detail":"{read_only:false; response_revision:1125; number_of_response:1; }","duration":"158.349534ms","start":"2024-08-29T18:57:49.371035Z","end":"2024-08-29T18:57:49.529384Z","steps":["trace[1099537540] 'process raft request'  (duration: 158.202737ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:57:51.987950Z","caller":"traceutil/trace.go:171","msg":"trace[877493432] linearizableReadLoop","detail":"{readStateIndex:1161; appliedIndex:1160; }","duration":"331.031224ms","start":"2024-08-29T18:57:51.656906Z","end":"2024-08-29T18:57:51.987937Z","steps":["trace[877493432] 'read index received'  (duration: 330.742237ms)","trace[877493432] 'applied index is now lower than readState.Index'  (duration: 288.514µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:57:51.988189Z","caller":"traceutil/trace.go:171","msg":"trace[174649903] transaction","detail":"{read_only:false; response_revision:1128; number_of_response:1; }","duration":"450.012484ms","start":"2024-08-29T18:57:51.538165Z","end":"2024-08-29T18:57:51.988178Z","steps":["trace[174649903] 'process raft request'  (duration: 449.679178ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:57:51.988293Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:57:51.538147Z","time spent":"450.082879ms","remote":"127.0.0.1:36120","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1125 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-29T18:57:51.988443Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.528183ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:57:51.988481Z","caller":"traceutil/trace.go:171","msg":"trace[1935103251] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1128; }","duration":"331.569979ms","start":"2024-08-29T18:57:51.656903Z","end":"2024-08-29T18:57:51.988473Z","steps":["trace[1935103251] 'agreement among raft nodes before linearized reading'  (duration: 331.509051ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:57:51.988540Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:57:51.656873Z","time spent":"331.661655ms","remote":"127.0.0.1:36130","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-29T18:57:51.988641Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"318.4247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:57:51.988771Z","caller":"traceutil/trace.go:171","msg":"trace[676173896] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1128; }","duration":"318.553242ms","start":"2024-08-29T18:57:51.670211Z","end":"2024-08-29T18:57:51.988764Z","steps":["trace[676173896] 'agreement among raft nodes before linearized reading'  (duration: 318.41108ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:57:51.988815Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:57:51.670179Z","time spent":"318.622729ms","remote":"127.0.0.1:36130","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-29T18:57:51.988972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.900574ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-8988944d9-9tplt\" ","response":"range_response_count:1 size:4561"}
	{"level":"info","ts":"2024-08-29T18:57:51.989006Z","caller":"traceutil/trace.go:171","msg":"trace[1013009811] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-8988944d9-9tplt; range_end:; response_count:1; response_revision:1128; }","duration":"129.932706ms","start":"2024-08-29T18:57:51.859067Z","end":"2024-08-29T18:57:51.989000Z","steps":["trace[1013009811] 'agreement among raft nodes before linearized reading'  (duration: 129.851129ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:58:02.393148Z","caller":"traceutil/trace.go:171","msg":"trace[728866711] linearizableReadLoop","detail":"{readStateIndex:1216; appliedIndex:1215; }","duration":"245.819709ms","start":"2024-08-29T18:58:02.147307Z","end":"2024-08-29T18:58:02.393126Z","steps":["trace[728866711] 'read index received'  (duration: 245.635911ms)","trace[728866711] 'applied index is now lower than readState.Index'  (duration: 183.347µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:58:02.393518Z","caller":"traceutil/trace.go:171","msg":"trace[873289508] transaction","detail":"{read_only:false; response_revision:1181; number_of_response:1; }","duration":"341.313574ms","start":"2024-08-29T18:58:02.052193Z","end":"2024-08-29T18:58:02.393507Z","steps":["trace[873289508] 'process raft request'  (duration: 340.79613ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:58:02.393725Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:58:02.052166Z","time spent":"341.432789ms","remote":"127.0.0.1:36120","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1176 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-29T18:58:02.393897Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.589501ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:58:02.394190Z","caller":"traceutil/trace.go:171","msg":"trace[902699692] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1181; }","duration":"246.879248ms","start":"2024-08-29T18:58:02.147300Z","end":"2024-08-29T18:58:02.394179Z","steps":["trace[902699692] 'agreement among raft nodes before linearized reading'  (duration: 246.576816ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:58:02.394293Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.664177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:58:02.395198Z","caller":"traceutil/trace.go:171","msg":"trace[135783897] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1181; }","duration":"151.567211ms","start":"2024-08-29T18:58:02.243617Z","end":"2024-08-29T18:58:02.395184Z","steps":["trace[135783897] 'agreement among raft nodes before linearized reading'  (duration: 150.639245ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T19:06:24.607992Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1539}
	{"level":"info","ts":"2024-08-29T19:06:24.642194Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1539,"took":"33.69634ms","hash":279284985,"current-db-size-bytes":6299648,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3297280,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-08-29T19:06:24.642264Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":279284985,"revision":1539,"compact-revision":-1}
	{"level":"info","ts":"2024-08-29T19:08:39.629048Z","caller":"traceutil/trace.go:171","msg":"trace[518339055] transaction","detail":"{read_only:false; response_revision:2617; number_of_response:1; }","duration":"111.246564ms","start":"2024-08-29T19:08:39.517761Z","end":"2024-08-29T19:08:39.629008Z","steps":["trace[518339055] 'process raft request'  (duration: 111.127875ms)"],"step_count":1}
	
	
	==> gcp-auth [e9c8e7bcbfebef995192df5d64f603c9d08c91b1a799ccb037398d00858d33ba] <==
	2024/08/29 18:58:45 Ready to write response ...
	2024/08/29 19:06:55 Ready to marshal response ...
	2024/08/29 19:06:55 Ready to write response ...
	2024/08/29 19:06:59 Ready to marshal response ...
	2024/08/29 19:06:59 Ready to write response ...
	2024/08/29 19:07:05 Ready to marshal response ...
	2024/08/29 19:07:05 Ready to write response ...
	2024/08/29 19:07:23 Ready to marshal response ...
	2024/08/29 19:07:23 Ready to write response ...
	2024/08/29 19:07:27 Ready to marshal response ...
	2024/08/29 19:07:27 Ready to write response ...
	2024/08/29 19:07:27 Ready to marshal response ...
	2024/08/29 19:07:27 Ready to write response ...
	2024/08/29 19:07:39 Ready to marshal response ...
	2024/08/29 19:07:39 Ready to write response ...
	2024/08/29 19:07:45 Ready to marshal response ...
	2024/08/29 19:07:45 Ready to write response ...
	2024/08/29 19:07:45 Ready to marshal response ...
	2024/08/29 19:07:45 Ready to write response ...
	2024/08/29 19:07:45 Ready to marshal response ...
	2024/08/29 19:07:45 Ready to write response ...
	2024/08/29 19:08:03 Ready to marshal response ...
	2024/08/29 19:08:03 Ready to write response ...
	2024/08/29 19:10:24 Ready to marshal response ...
	2024/08/29 19:10:24 Ready to write response ...
	
	
	==> kernel <==
	 19:10:34 up 14 min,  0 users,  load average: 0.39, 0.70, 0.54
	Linux addons-344587 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0829 18:58:14.652298       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0829 18:58:14.653137       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0829 19:06:55.073349       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0829 19:06:56.105761       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0829 19:07:09.910156       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0829 19:07:38.524327       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:07:38.524397       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:07:38.583728       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:07:38.583790       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:07:38.603042       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:07:38.603098       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:07:38.701294       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:07:38.701595       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:07:38.708794       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:07:38.708838       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0829 19:07:39.703951       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0829 19:07:39.709278       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0829 19:07:39.726588       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0829 19:07:45.314768       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.44.227"}
	E0829 19:07:55.404266       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0829 19:08:03.038521       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0829 19:08:03.213568       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.215.4"}
	I0829 19:10:24.590158       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.51.103"}
	
	
	==> kube-controller-manager [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24] <==
	W0829 19:08:59.845629       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:08:59.845810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:09:00.851038       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:09:00.851096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:09:18.444475       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:09:18.444540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:09:34.194248       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:09:34.194331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:09:47.892984       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:09:47.893071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:09:59.781790       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:09:59.781876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:10:05.826564       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:10:05.826804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:10:07.395330       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:10:07.395435       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 19:10:24.424742       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="46.410806ms"
	I0829 19:10:24.437656       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.824987ms"
	I0829 19:10:24.437780       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.225µs"
	I0829 19:10:26.717665       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0829 19:10:26.724323       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0829 19:10:26.727122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="4.505µs"
	I0829 19:10:26.959475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.144704ms"
	I0829 19:10:26.959759       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.334µs"
	I0829 19:10:33.553274       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-344587"
	
	
	==> kube-proxy [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 18:56:34.273624       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 18:56:34.316749       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.172"]
	E0829 18:56:34.319592       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:56:34.449783       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:56:34.449822       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:56:34.449854       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:56:34.453213       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:56:34.453462       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:56:34.453493       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:56:34.455243       1 config.go:197] "Starting service config controller"
	I0829 18:56:34.455281       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:56:34.455307       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:56:34.455311       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:56:34.455326       1 config.go:326] "Starting node config controller"
	I0829 18:56:34.455330       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:56:34.555742       1 shared_informer.go:320] Caches are synced for node config
	I0829 18:56:34.555769       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:56:34.555789       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef] <==
	W0829 18:56:25.839581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:56:25.839612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:25.839779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:56:25.839809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:25.839854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:25.839956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:25.839906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:25.840087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:25.844097       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 18:56:25.844135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:26.723397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 18:56:26.723437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:26.726980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:26.727065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:26.764739       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:26.764937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:26.775798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:26.775919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:26.923490       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:56:26.923522       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0829 18:56:26.981178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:56:26.981308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:27.115445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:27.115543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0829 18:56:29.434203       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 19:10:25 addons-344587 kubelet[1210]: I0829 19:10:25.965611    1210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e8b2ab20b747ce79ca67305e50785bc96f6ff948c77d73873122f96ac5595f1"} err="failed to get container status \"5e8b2ab20b747ce79ca67305e50785bc96f6ff948c77d73873122f96ac5595f1\": rpc error: code = NotFound desc = could not find container \"5e8b2ab20b747ce79ca67305e50785bc96f6ff948c77d73873122f96ac5595f1\": container with ID starting with 5e8b2ab20b747ce79ca67305e50785bc96f6ff948c77d73873122f96ac5595f1 not found: ID does not exist"
	Aug 29 19:10:26 addons-344587 kubelet[1210]: I0829 19:10:26.223315    1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aeaeabc-ac3f-4f8a-88ee-84fe5d623dd6" path="/var/lib/kubelet/pods/2aeaeabc-ac3f-4f8a-88ee-84fe5d623dd6/volumes"
	Aug 29 19:10:28 addons-344587 kubelet[1210]: I0829 19:10:28.222783    1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26eb4b3d-75b0-44e5-8385-aece13b52992" path="/var/lib/kubelet/pods/26eb4b3d-75b0-44e5-8385-aece13b52992/volumes"
	Aug 29 19:10:28 addons-344587 kubelet[1210]: I0829 19:10:28.223594    1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fdd788a-a4b3-4d16-b6e2-4ccfe541fb0a" path="/var/lib/kubelet/pods/5fdd788a-a4b3-4d16-b6e2-4ccfe541fb0a/volumes"
	Aug 29 19:10:28 addons-344587 kubelet[1210]: E0829 19:10:28.252911    1210 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:10:28 addons-344587 kubelet[1210]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:10:28 addons-344587 kubelet[1210]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:10:28 addons-344587 kubelet[1210]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:10:28 addons-344587 kubelet[1210]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:10:28 addons-344587 kubelet[1210]: E0829 19:10:28.792490    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958628791944551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:10:28 addons-344587 kubelet[1210]: E0829 19:10:28.792532    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958628791944551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:10:28 addons-344587 kubelet[1210]: I0829 19:10:28.797121    1210 scope.go:117] "RemoveContainer" containerID="286779fd868c769dcf2e26f927dd181738b4a91e58bec5a8ee5d995db0b917b9"
	Aug 29 19:10:28 addons-344587 kubelet[1210]: I0829 19:10:28.812492    1210 scope.go:117] "RemoveContainer" containerID="51c4b8b338abf63fdf30f1d9ec94d133498fcba3b2a128daf3e6ddc3ea17561f"
	Aug 29 19:10:29 addons-344587 kubelet[1210]: I0829 19:10:29.953818    1210 scope.go:117] "RemoveContainer" containerID="a5f92ee66e147ab00440c7ce62942db9b0cb067796c3643b3313a45071577e8e"
	Aug 29 19:10:29 addons-344587 kubelet[1210]: I0829 19:10:29.967871    1210 scope.go:117] "RemoveContainer" containerID="a5f92ee66e147ab00440c7ce62942db9b0cb067796c3643b3313a45071577e8e"
	Aug 29 19:10:29 addons-344587 kubelet[1210]: E0829 19:10:29.968349    1210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5f92ee66e147ab00440c7ce62942db9b0cb067796c3643b3313a45071577e8e\": container with ID starting with a5f92ee66e147ab00440c7ce62942db9b0cb067796c3643b3313a45071577e8e not found: ID does not exist" containerID="a5f92ee66e147ab00440c7ce62942db9b0cb067796c3643b3313a45071577e8e"
	Aug 29 19:10:29 addons-344587 kubelet[1210]: I0829 19:10:29.968383    1210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5f92ee66e147ab00440c7ce62942db9b0cb067796c3643b3313a45071577e8e"} err="failed to get container status \"a5f92ee66e147ab00440c7ce62942db9b0cb067796c3643b3313a45071577e8e\": rpc error: code = NotFound desc = could not find container \"a5f92ee66e147ab00440c7ce62942db9b0cb067796c3643b3313a45071577e8e\": container with ID starting with a5f92ee66e147ab00440c7ce62942db9b0cb067796c3643b3313a45071577e8e not found: ID does not exist"
	Aug 29 19:10:30 addons-344587 kubelet[1210]: I0829 19:10:30.003865    1210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6ea94c8-ff1c-47d8-9e9c-6df136e34608-webhook-cert\") pod \"e6ea94c8-ff1c-47d8-9e9c-6df136e34608\" (UID: \"e6ea94c8-ff1c-47d8-9e9c-6df136e34608\") "
	Aug 29 19:10:30 addons-344587 kubelet[1210]: I0829 19:10:30.003946    1210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42tns\" (UniqueName: \"kubernetes.io/projected/e6ea94c8-ff1c-47d8-9e9c-6df136e34608-kube-api-access-42tns\") pod \"e6ea94c8-ff1c-47d8-9e9c-6df136e34608\" (UID: \"e6ea94c8-ff1c-47d8-9e9c-6df136e34608\") "
	Aug 29 19:10:30 addons-344587 kubelet[1210]: I0829 19:10:30.006185    1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ea94c8-ff1c-47d8-9e9c-6df136e34608-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "e6ea94c8-ff1c-47d8-9e9c-6df136e34608" (UID: "e6ea94c8-ff1c-47d8-9e9c-6df136e34608"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 29 19:10:30 addons-344587 kubelet[1210]: I0829 19:10:30.007098    1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ea94c8-ff1c-47d8-9e9c-6df136e34608-kube-api-access-42tns" (OuterVolumeSpecName: "kube-api-access-42tns") pod "e6ea94c8-ff1c-47d8-9e9c-6df136e34608" (UID: "e6ea94c8-ff1c-47d8-9e9c-6df136e34608"). InnerVolumeSpecName "kube-api-access-42tns". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 19:10:30 addons-344587 kubelet[1210]: I0829 19:10:30.104632    1210 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6ea94c8-ff1c-47d8-9e9c-6df136e34608-webhook-cert\") on node \"addons-344587\" DevicePath \"\""
	Aug 29 19:10:30 addons-344587 kubelet[1210]: I0829 19:10:30.104749    1210 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-42tns\" (UniqueName: \"kubernetes.io/projected/e6ea94c8-ff1c-47d8-9e9c-6df136e34608-kube-api-access-42tns\") on node \"addons-344587\" DevicePath \"\""
	Aug 29 19:10:30 addons-344587 kubelet[1210]: E0829 19:10:30.222573    1210 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="ea5b7a84-ebbb-47e9-92c8-ee98926439ae"
	Aug 29 19:10:30 addons-344587 kubelet[1210]: I0829 19:10:30.224078    1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ea94c8-ff1c-47d8-9e9c-6df136e34608" path="/var/lib/kubelet/pods/e6ea94c8-ff1c-47d8-9e9c-6df136e34608/volumes"
	
	
	==> storage-provisioner [15a0a245e481d700614dae7bfc6d7d4bbe9f074615c88e1373f761abb63e08cc] <==
	I0829 18:56:42.218258       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:56:42.884144       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:56:42.967214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:56:43.042082       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:56:43.043164       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aa6f651c-dee9-4c5c-bb08-efe5aaec9d98", APIVersion:"v1", ResourceVersion:"717", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-344587_b654bfc4-1e7a-4b37-abe2-9c326f1dacc1 became leader
	I0829 18:56:43.043203       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-344587_b654bfc4-1e7a-4b37-abe2-9c326f1dacc1!
	I0829 18:56:43.212968       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-344587_b654bfc4-1e7a-4b37-abe2-9c326f1dacc1!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-344587 -n addons-344587
helpers_test.go:261: (dbg) Run:  kubectl --context addons-344587 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-344587 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-344587 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-344587/192.168.39.172
	Start Time:       Thu, 29 Aug 2024 18:58:45 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bb56t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bb56t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  11m                 default-scheduler  Successfully assigned default/busybox to addons-344587
	  Normal   Pulling    10m (x4 over 11m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)   kubelet            Error: ErrImagePull
	  Warning  Failed     10m (x6 over 11m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    98s (x43 over 11m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.03s)

TestAddons/parallel/MetricsServer (352.98s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.461891ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-9tplt" [427d61c8-9ff3-4718-9faf-896d20af6cdc] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00404314s
addons_test.go:417: (dbg) Run:  kubectl --context addons-344587 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-344587 top pods -n kube-system: exit status 1 (82.587273ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-t9nhw, age: 10m21.674841524s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-344587 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-344587 top pods -n kube-system: exit status 1 (65.077461ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-t9nhw, age: 10m23.674207866s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-344587 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-344587 top pods -n kube-system: exit status 1 (69.235656ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-t9nhw, age: 10m28.871517046s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-344587 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-344587 top pods -n kube-system: exit status 1 (69.787823ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-t9nhw, age: 10m37.837855387s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-344587 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-344587 top pods -n kube-system: exit status 1 (62.795753ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-t9nhw, age: 10m47.754584772s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-344587 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-344587 top pods -n kube-system: exit status 1 (63.807806ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-t9nhw, age: 11m9.343010598s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-344587 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-344587 top pods -n kube-system: exit status 1 (65.227535ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-t9nhw, age: 11m30.071927987s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-344587 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-344587 top pods -n kube-system: exit status 1 (64.524512ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-t9nhw, age: 12m17.301056271s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-344587 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-344587 top pods -n kube-system: exit status 1 (62.695868ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-t9nhw, age: 13m14.753688345s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-344587 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-344587 top pods -n kube-system: exit status 1 (64.878044ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-t9nhw, age: 13m56.419089165s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-344587 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-344587 top pods -n kube-system: exit status 1 (61.770799ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-t9nhw, age: 14m40.916925485s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-344587 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-344587 top pods -n kube-system: exit status 1 (63.714861ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-t9nhw, age: 16m5.793542178s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-344587 -n addons-344587
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-344587 logs -n 25: (1.366241453s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-800504                                                                     | download-only-800504 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| delete  | -p download-only-273933                                                                     | download-only-273933 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-124601 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | binary-mirror-124601                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41153                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-124601                                                                     | binary-mirror-124601 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| addons  | enable dashboard -p                                                                         | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | addons-344587                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | addons-344587                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-344587 --wait=true                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:58 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:06 UTC | 29 Aug 24 19:07 UTC |
	|         | addons-344587                                                                               |                      |         |         |                     |                     |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | -p addons-344587                                                                            |                      |         |         |                     |                     |
	| addons  | addons-344587 addons                                                                        | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-344587 addons                                                                        | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-344587 ssh cat                                                                       | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | /opt/local-path-provisioner/pvc-d653ba56-6232-4797-9e26-74b3f827dc87_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | addons-344587                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	|         | -p addons-344587                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:08 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-344587 ip                                                                            | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:07 UTC |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:07 UTC | 29 Aug 24 19:08 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-344587 ssh curl -s                                                                   | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-344587 ip                                                                            | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:10 UTC | 29 Aug 24 19:10 UTC |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:10 UTC | 29 Aug 24 19:10 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-344587 addons disable                                                                | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:10 UTC | 29 Aug 24 19:10 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-344587 addons                                                                        | addons-344587        | jenkins | v1.33.1 | 29 Aug 24 19:12 UTC | 29 Aug 24 19:12 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:55:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:55:50.381982   18990 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:55:50.382091   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:50.382099   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:55:50.382103   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:50.382261   18990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 18:55:50.382847   18990 out.go:352] Setting JSON to false
	I0829 18:55:50.383602   18990 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2297,"bootTime":1724955453,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:55:50.383652   18990 start.go:139] virtualization: kvm guest
	I0829 18:55:50.385939   18990 out.go:177] * [addons-344587] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:55:50.387376   18990 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 18:55:50.387387   18990 notify.go:220] Checking for updates...
	I0829 18:55:50.389960   18990 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:55:50.391173   18990 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 18:55:50.392418   18990 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 18:55:50.393615   18990 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:55:50.394904   18990 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:55:50.396433   18990 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:55:50.428475   18990 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 18:55:50.429854   18990 start.go:297] selected driver: kvm2
	I0829 18:55:50.429864   18990 start.go:901] validating driver "kvm2" against <nil>
	I0829 18:55:50.429873   18990 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:55:50.430509   18990 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:55:50.430589   18990 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:55:50.444888   18990 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:55:50.444932   18990 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:55:50.445130   18990 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:55:50.445196   18990 cni.go:84] Creating CNI manager for ""
	I0829 18:55:50.445212   18990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:55:50.445222   18990 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:55:50.445293   18990 start.go:340] cluster config:
	{Name:addons-344587 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-344587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:55:50.445402   18990 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:55:50.447108   18990 out.go:177] * Starting "addons-344587" primary control-plane node in "addons-344587" cluster
	I0829 18:55:50.448355   18990 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:55:50.448396   18990 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:55:50.448405   18990 cache.go:56] Caching tarball of preloaded images
	I0829 18:55:50.448475   18990 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:55:50.448487   18990 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:55:50.448826   18990 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/config.json ...
	I0829 18:55:50.448852   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/config.json: {Name:mkbebd6be4c06f31a480a2816ef4d17f65638f42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:55:50.448990   18990 start.go:360] acquireMachinesLock for addons-344587: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:55:50.449049   18990 start.go:364] duration metric: took 44.089µs to acquireMachinesLock for "addons-344587"
	I0829 18:55:50.449073   18990 start.go:93] Provisioning new machine with config: &{Name:addons-344587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-344587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:55:50.449138   18990 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 18:55:50.450643   18990 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0829 18:55:50.450772   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:55:50.450820   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:55:50.464579   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36837
	I0829 18:55:50.464968   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:55:50.465424   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:55:50.465444   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:55:50.465798   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:55:50.465987   18990 main.go:141] libmachine: (addons-344587) Calling .GetMachineName
	I0829 18:55:50.466159   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:55:50.466300   18990 start.go:159] libmachine.API.Create for "addons-344587" (driver="kvm2")
	I0829 18:55:50.466328   18990 client.go:168] LocalClient.Create starting
	I0829 18:55:50.466375   18990 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem
	I0829 18:55:50.795899   18990 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem
	I0829 18:55:50.842743   18990 main.go:141] libmachine: Running pre-create checks...
	I0829 18:55:50.842764   18990 main.go:141] libmachine: (addons-344587) Calling .PreCreateCheck
	I0829 18:55:50.843261   18990 main.go:141] libmachine: (addons-344587) Calling .GetConfigRaw
	I0829 18:55:50.843665   18990 main.go:141] libmachine: Creating machine...
	I0829 18:55:50.843678   18990 main.go:141] libmachine: (addons-344587) Calling .Create
	I0829 18:55:50.843802   18990 main.go:141] libmachine: (addons-344587) Creating KVM machine...
	I0829 18:55:50.844841   18990 main.go:141] libmachine: (addons-344587) DBG | found existing default KVM network
	I0829 18:55:50.845576   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:50.845449   19012 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0829 18:55:50.845599   18990 main.go:141] libmachine: (addons-344587) DBG | created network xml: 
	I0829 18:55:50.845612   18990 main.go:141] libmachine: (addons-344587) DBG | <network>
	I0829 18:55:50.845626   18990 main.go:141] libmachine: (addons-344587) DBG |   <name>mk-addons-344587</name>
	I0829 18:55:50.845668   18990 main.go:141] libmachine: (addons-344587) DBG |   <dns enable='no'/>
	I0829 18:55:50.845695   18990 main.go:141] libmachine: (addons-344587) DBG |   
	I0829 18:55:50.845709   18990 main.go:141] libmachine: (addons-344587) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0829 18:55:50.845719   18990 main.go:141] libmachine: (addons-344587) DBG |     <dhcp>
	I0829 18:55:50.845731   18990 main.go:141] libmachine: (addons-344587) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0829 18:55:50.845742   18990 main.go:141] libmachine: (addons-344587) DBG |     </dhcp>
	I0829 18:55:50.845753   18990 main.go:141] libmachine: (addons-344587) DBG |   </ip>
	I0829 18:55:50.845762   18990 main.go:141] libmachine: (addons-344587) DBG |   
	I0829 18:55:50.845771   18990 main.go:141] libmachine: (addons-344587) DBG | </network>
	I0829 18:55:50.845781   18990 main.go:141] libmachine: (addons-344587) DBG | 
	I0829 18:55:50.850798   18990 main.go:141] libmachine: (addons-344587) DBG | trying to create private KVM network mk-addons-344587 192.168.39.0/24...
	I0829 18:55:50.914004   18990 main.go:141] libmachine: (addons-344587) DBG | private KVM network mk-addons-344587 192.168.39.0/24 created
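
The block above records the full libvirt network definition: minikube picked a free private /24 (192.168.39.0/24), disabled libvirt's own DNS, and carved out a DHCP range of .2 through .253. Rendering that XML is a plain templating step; below is a minimal standalone sketch in Go (standard library only; the struct and field names are illustrative, not minikube's actual types):

    package main

    import (
        "os"
        "text/template"
    )

    // netParams carries the values substituted into the network XML.
    // Field names are illustrative, not minikube's actual config types.
    type netParams struct {
        Name, Gateway, Netmask, ClientMin, ClientMax string
    }

    const netXML = `<network>
      <name>{{.Name}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
        <dhcp>
          <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
        </dhcp>
      </ip>
    </network>
    `

    func main() {
        t := template.Must(template.New("net").Parse(netXML))
        // Values match the free subnet chosen in the log above.
        err := t.Execute(os.Stdout, netParams{
            Name:      "mk-addons-344587",
            Gateway:   "192.168.39.1",
            Netmask:   "255.255.255.0",
            ClientMin: "192.168.39.2",
            ClientMax: "192.168.39.253",
        })
        if err != nil {
            panic(err)
        }
    }

The "trying to create private KVM network" / "created" pair that follows is the rendered XML being defined and started against qemu:///system.
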
	I0829 18:55:50.914032   18990 main.go:141] libmachine: (addons-344587) Setting up store path in /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587 ...
	I0829 18:55:50.914058   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:50.913976   19012 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 18:55:50.914082   18990 main.go:141] libmachine: (addons-344587) Building disk image from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0829 18:55:50.914101   18990 main.go:141] libmachine: (addons-344587) Downloading /home/jenkins/minikube-integration/19530-11185/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0829 18:55:51.165621   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:51.165525   19012 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa...
	I0829 18:55:51.361310   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:51.361174   19012 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/addons-344587.rawdisk...
	I0829 18:55:51.361334   18990 main.go:141] libmachine: (addons-344587) DBG | Writing magic tar header
	I0829 18:55:51.361345   18990 main.go:141] libmachine: (addons-344587) DBG | Writing SSH key tar header
	I0829 18:55:51.361360   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:51.361285   19012 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587 ...
	I0829 18:55:51.361376   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587
	I0829 18:55:51.361413   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587 (perms=drwx------)
	I0829 18:55:51.361435   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines (perms=drwxr-xr-x)
	I0829 18:55:51.361442   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube (perms=drwxr-xr-x)
	I0829 18:55:51.361449   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines
	I0829 18:55:51.361457   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 18:55:51.361462   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185
	I0829 18:55:51.361470   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 18:55:51.361480   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home/jenkins
	I0829 18:55:51.361487   18990 main.go:141] libmachine: (addons-344587) DBG | Checking permissions on dir: /home
	I0829 18:55:51.361492   18990 main.go:141] libmachine: (addons-344587) DBG | Skipping /home - not owner
	I0829 18:55:51.361519   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185 (perms=drwxrwxr-x)
	I0829 18:55:51.361543   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 18:55:51.361552   18990 main.go:141] libmachine: (addons-344587) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 18:55:51.361557   18990 main.go:141] libmachine: (addons-344587) Creating domain...
	I0829 18:55:51.362695   18990 main.go:141] libmachine: (addons-344587) define libvirt domain using xml: 
	I0829 18:55:51.362720   18990 main.go:141] libmachine: (addons-344587) <domain type='kvm'>
	I0829 18:55:51.362728   18990 main.go:141] libmachine: (addons-344587)   <name>addons-344587</name>
	I0829 18:55:51.362733   18990 main.go:141] libmachine: (addons-344587)   <memory unit='MiB'>4000</memory>
	I0829 18:55:51.362739   18990 main.go:141] libmachine: (addons-344587)   <vcpu>2</vcpu>
	I0829 18:55:51.362743   18990 main.go:141] libmachine: (addons-344587)   <features>
	I0829 18:55:51.362748   18990 main.go:141] libmachine: (addons-344587)     <acpi/>
	I0829 18:55:51.362755   18990 main.go:141] libmachine: (addons-344587)     <apic/>
	I0829 18:55:51.362760   18990 main.go:141] libmachine: (addons-344587)     <pae/>
	I0829 18:55:51.362764   18990 main.go:141] libmachine: (addons-344587)     
	I0829 18:55:51.362770   18990 main.go:141] libmachine: (addons-344587)   </features>
	I0829 18:55:51.362775   18990 main.go:141] libmachine: (addons-344587)   <cpu mode='host-passthrough'>
	I0829 18:55:51.362780   18990 main.go:141] libmachine: (addons-344587)   
	I0829 18:55:51.362786   18990 main.go:141] libmachine: (addons-344587)   </cpu>
	I0829 18:55:51.362794   18990 main.go:141] libmachine: (addons-344587)   <os>
	I0829 18:55:51.362799   18990 main.go:141] libmachine: (addons-344587)     <type>hvm</type>
	I0829 18:55:51.362807   18990 main.go:141] libmachine: (addons-344587)     <boot dev='cdrom'/>
	I0829 18:55:51.362812   18990 main.go:141] libmachine: (addons-344587)     <boot dev='hd'/>
	I0829 18:55:51.362820   18990 main.go:141] libmachine: (addons-344587)     <bootmenu enable='no'/>
	I0829 18:55:51.362826   18990 main.go:141] libmachine: (addons-344587)   </os>
	I0829 18:55:51.362855   18990 main.go:141] libmachine: (addons-344587)   <devices>
	I0829 18:55:51.362878   18990 main.go:141] libmachine: (addons-344587)     <disk type='file' device='cdrom'>
	I0829 18:55:51.362903   18990 main.go:141] libmachine: (addons-344587)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/boot2docker.iso'/>
	I0829 18:55:51.362920   18990 main.go:141] libmachine: (addons-344587)       <target dev='hdc' bus='scsi'/>
	I0829 18:55:51.362957   18990 main.go:141] libmachine: (addons-344587)       <readonly/>
	I0829 18:55:51.362969   18990 main.go:141] libmachine: (addons-344587)     </disk>
	I0829 18:55:51.362980   18990 main.go:141] libmachine: (addons-344587)     <disk type='file' device='disk'>
	I0829 18:55:51.362995   18990 main.go:141] libmachine: (addons-344587)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 18:55:51.363011   18990 main.go:141] libmachine: (addons-344587)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/addons-344587.rawdisk'/>
	I0829 18:55:51.363023   18990 main.go:141] libmachine: (addons-344587)       <target dev='hda' bus='virtio'/>
	I0829 18:55:51.363036   18990 main.go:141] libmachine: (addons-344587)     </disk>
	I0829 18:55:51.363048   18990 main.go:141] libmachine: (addons-344587)     <interface type='network'>
	I0829 18:55:51.363068   18990 main.go:141] libmachine: (addons-344587)       <source network='mk-addons-344587'/>
	I0829 18:55:51.363090   18990 main.go:141] libmachine: (addons-344587)       <model type='virtio'/>
	I0829 18:55:51.363098   18990 main.go:141] libmachine: (addons-344587)     </interface>
	I0829 18:55:51.363103   18990 main.go:141] libmachine: (addons-344587)     <interface type='network'>
	I0829 18:55:51.363119   18990 main.go:141] libmachine: (addons-344587)       <source network='default'/>
	I0829 18:55:51.363133   18990 main.go:141] libmachine: (addons-344587)       <model type='virtio'/>
	I0829 18:55:51.363144   18990 main.go:141] libmachine: (addons-344587)     </interface>
	I0829 18:55:51.363151   18990 main.go:141] libmachine: (addons-344587)     <serial type='pty'>
	I0829 18:55:51.363157   18990 main.go:141] libmachine: (addons-344587)       <target port='0'/>
	I0829 18:55:51.363165   18990 main.go:141] libmachine: (addons-344587)     </serial>
	I0829 18:55:51.363192   18990 main.go:141] libmachine: (addons-344587)     <console type='pty'>
	I0829 18:55:51.363222   18990 main.go:141] libmachine: (addons-344587)       <target type='serial' port='0'/>
	I0829 18:55:51.363237   18990 main.go:141] libmachine: (addons-344587)     </console>
	I0829 18:55:51.363245   18990 main.go:141] libmachine: (addons-344587)     <rng model='virtio'>
	I0829 18:55:51.363258   18990 main.go:141] libmachine: (addons-344587)       <backend model='random'>/dev/random</backend>
	I0829 18:55:51.363267   18990 main.go:141] libmachine: (addons-344587)     </rng>
	I0829 18:55:51.363279   18990 main.go:141] libmachine: (addons-344587)     
	I0829 18:55:51.363289   18990 main.go:141] libmachine: (addons-344587)     
	I0829 18:55:51.363301   18990 main.go:141] libmachine: (addons-344587)   </devices>
	I0829 18:55:51.363319   18990 main.go:141] libmachine: (addons-344587) </domain>
	I0829 18:55:51.363334   18990 main.go:141] libmachine: (addons-344587) 
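
The domain XML above boots the ISO first (boot dev='cdrom' before 'hd'), attaches the raw disk plus two virtio NICs (one on mk-addons-344587, one on libvirt's default network), and a virtio RNG. Turning such XML into a running VM is a define-then-create pair; a hedged sketch using the libvirt Go bindings (assumes libvirt.org/go/libvirt is installed and qemu:///system is reachable; this is not minikube's actual driver code):

    package main

    import (
        "fmt"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // The domain XML printed in the log above, saved to a file.
        xml, err := os.ReadFile("addons-344587.xml")
        if err != nil {
            panic(err)
        }
        // Define the persistent domain, then start it -- the same
        // "define libvirt domain" / "Creating domain..." pair as above.
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            panic(err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil {
            panic(err)
        }
        fmt.Println("domain defined and started")
    }
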
	I0829 18:55:51.369959   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:1d:b5:8e in network default
	I0829 18:55:51.370417   18990 main.go:141] libmachine: (addons-344587) Ensuring networks are active...
	I0829 18:55:51.370435   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:51.371026   18990 main.go:141] libmachine: (addons-344587) Ensuring network default is active
	I0829 18:55:51.371287   18990 main.go:141] libmachine: (addons-344587) Ensuring network mk-addons-344587 is active
	I0829 18:55:51.372284   18990 main.go:141] libmachine: (addons-344587) Getting domain xml...
	I0829 18:55:51.372893   18990 main.go:141] libmachine: (addons-344587) Creating domain...
	I0829 18:55:52.746079   18990 main.go:141] libmachine: (addons-344587) Waiting to get IP...
	I0829 18:55:52.746802   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:52.747139   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:52.747169   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:52.747092   19012 retry.go:31] will retry after 281.547466ms: waiting for machine to come up
	I0829 18:55:53.030572   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:53.031020   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:53.031046   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:53.030987   19012 retry.go:31] will retry after 320.244389ms: waiting for machine to come up
	I0829 18:55:53.352319   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:53.352723   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:53.352751   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:53.352677   19012 retry.go:31] will retry after 475.897243ms: waiting for machine to come up
	I0829 18:55:53.830271   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:53.830799   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:53.830826   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:53.830758   19012 retry.go:31] will retry after 415.393917ms: waiting for machine to come up
	I0829 18:55:54.247242   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:54.247686   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:54.247722   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:54.247646   19012 retry.go:31] will retry after 663.283802ms: waiting for machine to come up
	I0829 18:55:54.912468   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:54.912891   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:54.912917   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:54.912861   19012 retry.go:31] will retry after 823.255008ms: waiting for machine to come up
	I0829 18:55:55.737292   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:55.737672   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:55.737702   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:55.737654   19012 retry.go:31] will retry after 924.09927ms: waiting for machine to come up
	I0829 18:55:56.663683   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:56.664092   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:56.664117   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:56.664046   19012 retry.go:31] will retry after 1.475206367s: waiting for machine to come up
	I0829 18:55:58.141547   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:58.142031   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:58.142052   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:58.142003   19012 retry.go:31] will retry after 1.352228994s: waiting for machine to come up
	I0829 18:55:59.496409   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:55:59.496870   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:55:59.496896   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:55:59.496821   19012 retry.go:31] will retry after 2.187164775s: waiting for machine to come up
	I0829 18:56:01.685976   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:01.686371   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:56:01.686393   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:56:01.686346   19012 retry.go:31] will retry after 2.735265922s: waiting for machine to come up
	I0829 18:56:04.422715   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:04.423157   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:56:04.423172   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:56:04.423133   19012 retry.go:31] will retry after 2.867752561s: waiting for machine to come up
	I0829 18:56:07.292218   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:07.292615   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find current IP address of domain addons-344587 in network mk-addons-344587
	I0829 18:56:07.292641   18990 main.go:141] libmachine: (addons-344587) DBG | I0829 18:56:07.292570   19012 retry.go:31] will retry after 4.389513147s: waiting for machine to come up
	I0829 18:56:11.683601   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.684092   18990 main.go:141] libmachine: (addons-344587) Found IP for machine: 192.168.39.172
	I0829 18:56:11.684118   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has current primary IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
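
Note the cadence of the "will retry after" lines while waiting for the DHCP lease: jittered delays grow from ~280ms to ~4.4s until the lease appears roughly 19 seconds in. A small standalone sketch of that pattern (this mirrors the observable behavior of the retry helper at retry.go:31 in the log, not its actual source; names are mine):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor retries fn with jittered, roughly doubling delays until it
    // succeeds or maxWait elapses, mirroring the retry lines above.
    func waitFor(fn func() error, maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        delay := 250 * time.Millisecond
        for {
            if err := fn(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for machine to come up")
            }
            // Up to 50% jitter keeps concurrent waiters from synchronizing.
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay *= 2
        }
    }

    func main() {
        attempts := 0
        err := waitFor(func() error {
            if attempts++; attempts < 4 {
                return errors.New("unable to find current IP address")
            }
            return nil
        }, 2*time.Minute)
        fmt.Println("done:", err)
    }
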
	I0829 18:56:11.684127   18990 main.go:141] libmachine: (addons-344587) Reserving static IP address...
	I0829 18:56:11.684501   18990 main.go:141] libmachine: (addons-344587) DBG | unable to find host DHCP lease matching {name: "addons-344587", mac: "52:54:00:03:42:33", ip: "192.168.39.172"} in network mk-addons-344587
	I0829 18:56:11.822664   18990 main.go:141] libmachine: (addons-344587) DBG | Getting to WaitForSSH function...
	I0829 18:56:11.822759   18990 main.go:141] libmachine: (addons-344587) Reserved static IP address: 192.168.39.172
	I0829 18:56:11.822780   18990 main.go:141] libmachine: (addons-344587) Waiting for SSH to be available...
	I0829 18:56:11.825035   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.825430   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:minikube Clientid:01:52:54:00:03:42:33}
	I0829 18:56:11.825460   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.825623   18990 main.go:141] libmachine: (addons-344587) DBG | Using SSH client type: external
	I0829 18:56:11.825652   18990 main.go:141] libmachine: (addons-344587) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa (-rw-------)
	I0829 18:56:11.825693   18990 main.go:141] libmachine: (addons-344587) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 18:56:11.825713   18990 main.go:141] libmachine: (addons-344587) DBG | About to run SSH command:
	I0829 18:56:11.825728   18990 main.go:141] libmachine: (addons-344587) DBG | exit 0
	I0829 18:56:11.958392   18990 main.go:141] libmachine: (addons-344587) DBG | SSH cmd err, output: <nil>: 
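
WaitForSSH above shells out to the system ssh binary with host-key checking disabled and runs `exit 0` as the remote command; a zero exit status means the guest's sshd is accepting the machine key. A condensed sketch of the same probe with os/exec (options copied from the log line above; the helper name is mine):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshAlive runs `exit 0` on the guest, mirroring the WaitForSSH probe.
    func sshAlive(ip, keyPath string) bool {
        cmd := exec.Command("/usr/bin/ssh",
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@"+ip,
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        ok := sshAlive("192.168.39.172",
            "/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa")
        fmt.Println("ssh reachable:", ok)
    }
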
	I0829 18:56:11.958658   18990 main.go:141] libmachine: (addons-344587) KVM machine creation complete!
	I0829 18:56:11.958964   18990 main.go:141] libmachine: (addons-344587) Calling .GetConfigRaw
	I0829 18:56:11.979533   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:11.979843   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:11.980024   18990 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 18:56:11.980042   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:11.981444   18990 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 18:56:11.981459   18990 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 18:56:11.981466   18990 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 18:56:11.981474   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:11.983980   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.984292   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:11.984313   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:11.984444   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:11.984613   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:11.984770   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:11.984916   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:11.985127   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:11.985342   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:11.985357   18990 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 18:56:12.089723   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:56:12.089742   18990 main.go:141] libmachine: Detecting the provisioner...
	I0829 18:56:12.089749   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.092754   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.093106   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.093131   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.093284   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:12.093486   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.093657   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.093787   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:12.093942   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:12.094126   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:12.094139   18990 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 18:56:12.199320   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 18:56:12.199392   18990 main.go:141] libmachine: found compatible host: buildroot
	I0829 18:56:12.199401   18990 main.go:141] libmachine: Provisioning with buildroot...
	I0829 18:56:12.199410   18990 main.go:141] libmachine: (addons-344587) Calling .GetMachineName
	I0829 18:56:12.199644   18990 buildroot.go:166] provisioning hostname "addons-344587"
	I0829 18:56:12.199675   18990 main.go:141] libmachine: (addons-344587) Calling .GetMachineName
	I0829 18:56:12.199823   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.202332   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.202658   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.202684   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.202849   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:12.203092   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.203227   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.203390   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:12.203529   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:12.203692   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:12.203705   18990 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-344587 && echo "addons-344587" | sudo tee /etc/hostname
	I0829 18:56:12.320497   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-344587
	
	I0829 18:56:12.320526   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.323075   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.323387   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.323411   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.323589   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:12.323786   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.323975   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.324113   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:12.324283   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:12.324480   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:12.324504   18990 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-344587' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-344587/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-344587' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:56:12.439927   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
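
The hostname step is two-phase: set /etc/hostname over SSH, then make sure /etc/hosts maps 127.0.1.1 to the new name, rewriting an existing 127.0.1.1 entry or appending one if absent (that is what the grep/sed script above does). A sketch of composing that script in Go (the template mirrors the log; the function name is an assumption):

    package main

    import "fmt"

    // hostsFixup returns the shell fragment run over SSH in the log above:
    // it rewrites an existing 127.0.1.1 line or appends one for the hostname.
    func hostsFixup(name string) string {
        return fmt.Sprintf(`
        if ! grep -xq '.*\s%[1]s' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
            else
                echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
            fi
        fi`, name)
    }

    func main() {
        fmt.Println(hostsFixup("addons-344587"))
    }
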
	I0829 18:56:12.439966   18990 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 18:56:12.440002   18990 buildroot.go:174] setting up certificates
	I0829 18:56:12.440016   18990 provision.go:84] configureAuth start
	I0829 18:56:12.440030   18990 main.go:141] libmachine: (addons-344587) Calling .GetMachineName
	I0829 18:56:12.440343   18990 main.go:141] libmachine: (addons-344587) Calling .GetIP
	I0829 18:56:12.442796   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.443174   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.443192   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.443334   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.445622   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.446147   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.446173   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.446336   18990 provision.go:143] copyHostCerts
	I0829 18:56:12.446417   18990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 18:56:12.446555   18990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 18:56:12.446655   18990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 18:56:12.446738   18990 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.addons-344587 san=[127.0.0.1 192.168.39.172 addons-344587 localhost minikube]
	I0829 18:56:12.656811   18990 provision.go:177] copyRemoteCerts
	I0829 18:56:12.656860   18990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:56:12.656881   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.659602   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.659950   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.659986   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.660127   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:12.660284   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.660452   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:12.660569   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:12.740979   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 18:56:12.764765   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 18:56:12.789751   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:56:12.814999   18990 provision.go:87] duration metric: took 374.97013ms to configureAuth
	I0829 18:56:12.815029   18990 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:56:12.815219   18990 config.go:182] Loaded profile config "addons-344587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:56:12.815307   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:12.817789   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.818126   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:12.818155   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:12.818312   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:12.818507   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.818700   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:12.818849   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:12.819046   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:12.819234   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:12.819254   18990 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:56:13.034009   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
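
The container-runtime step just above writes /etc/sysconfig/crio.minikube so that CRI-O treats the whole service CIDR (10.96.0.0/12) as an insecure registry, then restarts crio; this is what allows pulls from in-cluster registries (e.g. the registry addon) over plain HTTP. Composing that fragment is a one-liner (function name is mine):

    package main

    import "fmt"

    // crioSysconfig returns the /etc/sysconfig/crio.minikube payload
    // written over SSH in the log above.
    func crioSysconfig(serviceCIDR string) string {
        return fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
    }

    func main() {
        fmt.Print(crioSysconfig("10.96.0.0/12"))
    }
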
	I0829 18:56:13.034032   18990 main.go:141] libmachine: Checking connection to Docker...
	I0829 18:56:13.034040   18990 main.go:141] libmachine: (addons-344587) Calling .GetURL
	I0829 18:56:13.035499   18990 main.go:141] libmachine: (addons-344587) DBG | Using libvirt version 6000000
	I0829 18:56:13.037684   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.038017   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.038048   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.038196   18990 main.go:141] libmachine: Docker is up and running!
	I0829 18:56:13.038210   18990 main.go:141] libmachine: Reticulating splines...
	I0829 18:56:13.038219   18990 client.go:171] duration metric: took 22.571881082s to LocalClient.Create
	I0829 18:56:13.038239   18990 start.go:167] duration metric: took 22.5719417s to libmachine.API.Create "addons-344587"
	I0829 18:56:13.038262   18990 start.go:293] postStartSetup for "addons-344587" (driver="kvm2")
	I0829 18:56:13.038277   18990 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:56:13.038298   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:13.038570   18990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:56:13.038589   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:13.040755   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.041066   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.041089   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.041223   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:13.041426   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:13.041595   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:13.041734   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:13.124537   18990 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:56:13.129327   18990 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 18:56:13.129348   18990 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 18:56:13.129400   18990 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 18:56:13.129423   18990 start.go:296] duration metric: took 91.15174ms for postStartSetup
	I0829 18:56:13.129451   18990 main.go:141] libmachine: (addons-344587) Calling .GetConfigRaw
	I0829 18:56:13.130128   18990 main.go:141] libmachine: (addons-344587) Calling .GetIP
	I0829 18:56:13.132903   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.133252   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.133280   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.133484   18990 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/config.json ...
	I0829 18:56:13.133661   18990 start.go:128] duration metric: took 22.68451279s to createHost
	I0829 18:56:13.133686   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:13.135794   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.136096   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.136138   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.136227   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:13.136392   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:13.136531   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:13.136674   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:13.136811   18990 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:13.136983   18990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0829 18:56:13.136995   18990 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 18:56:13.239138   18990 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724957773.212403643
	
	I0829 18:56:13.239157   18990 fix.go:216] guest clock: 1724957773.212403643
	I0829 18:56:13.239164   18990 fix.go:229] Guest: 2024-08-29 18:56:13.212403643 +0000 UTC Remote: 2024-08-29 18:56:13.133675132 +0000 UTC m=+22.790316868 (delta=78.728511ms)
	I0829 18:56:13.239198   18990 fix.go:200] guest clock delta is within tolerance: 78.728511ms
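The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it with the host-side timestamp, and accept the machine when the skew is small. A minimal Go sketch of that comparison, with a hypothetical one-second tolerance (the log shows the measured delta but not minikube's actual threshold):

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the guest clock is close enough to the
	// host clock, mirroring the delta check logged by fix.go above.
	func withinTolerance(guest, host time.Time, tol time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tol
	}

	func main() {
		host := time.Now()
		guest := host.Add(78 * time.Millisecond) // roughly the delta observed in the log
		fmt.Println(withinTolerance(guest, host, time.Second)) // true
	}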
	I0829 18:56:13.239202   18990 start.go:83] releasing machines lock for "addons-344587", held for 22.79014265s
	I0829 18:56:13.239220   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:13.239471   18990 main.go:141] libmachine: (addons-344587) Calling .GetIP
	I0829 18:56:13.241933   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.242288   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.242315   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.242500   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:13.243032   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:13.243240   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:13.243311   18990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:56:13.243361   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:13.243466   18990 ssh_runner.go:195] Run: cat /version.json
	I0829 18:56:13.243481   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:13.245923   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.246013   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.246307   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.246336   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:13.246367   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.246384   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:13.246467   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:13.246620   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:13.246682   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:13.246812   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:13.246884   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:13.246957   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:13.247020   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:13.247050   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:13.348157   18990 ssh_runner.go:195] Run: systemctl --version
	I0829 18:56:13.354123   18990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:56:13.512934   18990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 18:56:13.518830   18990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:56:13.518882   18990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:56:13.534127   18990 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 18:56:13.534157   18990 start.go:495] detecting cgroup driver to use...
	I0829 18:56:13.534210   18990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:56:13.549103   18990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:56:13.562524   18990 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:56:13.562603   18990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:56:13.575308   18990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:56:13.588019   18990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:56:13.695971   18990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:56:13.849315   18990 docker.go:233] disabling docker service ...
	I0829 18:56:13.849370   18990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:56:13.863202   18990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:56:13.876345   18990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:56:13.998451   18990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:56:14.110447   18990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:56:14.124269   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:56:14.142618   18990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:56:14.142671   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.152550   18990 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:56:14.152638   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.162565   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.172204   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.182051   18990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:56:14.191938   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.201619   18990 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:56:14.218380   18990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
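Taken together, the sed edits above leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly these settings. This is a reconstruction from the commands in the log, not a dump of the actual file; the [crio.image]/[crio.runtime] section placement is the standard CRI-O layout and is assumed here:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]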
	I0829 18:56:14.228433   18990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:56:14.237357   18990 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 18:56:14.237406   18990 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 18:56:14.249575   18990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
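The three commands above form a fallback: probe the bridge netfilter sysctl, load br_netfilter if the key is missing (the status-255 error seen in the log), then enable IPv4 forwarding. A minimal sketch of the same sequence with os/exec (commands copied from the log; error handling simplified):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// Probe whether bridge netfilter is already available.
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// The key only exists once the module is loaded, as the log shows.
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				log.Fatal(err)
			}
		}
		// Enable IPv4 forwarding (equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward).
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			log.Fatal(err)
		}
	}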
	I0829 18:56:14.259454   18990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:56:14.369394   18990 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 18:56:14.456184   18990 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:56:14.456279   18990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:56:14.460789   18990 start.go:563] Will wait 60s for crictl version
	I0829 18:56:14.460854   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:56:14.464432   18990 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:56:14.504874   18990 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 18:56:14.504990   18990 ssh_runner.go:195] Run: crio --version
	I0829 18:56:14.532672   18990 ssh_runner.go:195] Run: crio --version
	I0829 18:56:14.561543   18990 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 18:56:14.562632   18990 main.go:141] libmachine: (addons-344587) Calling .GetIP
	I0829 18:56:14.564933   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:14.565284   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:14.565303   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:14.565524   18990 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 18:56:14.569376   18990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
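The bash one-liner above makes the host.minikube.internal mapping idempotent: strip any existing line for that name, append a fresh entry, and copy the result back over /etc/hosts. The same logic in Go (a sketch; minikube itself runs the shell version over SSH):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	// upsertHost keeps exactly one "IP<tab>name" line in an /etc/hosts-style
	// file, mirroring the grep -v / echo / cp pipeline in the log above.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := upsertHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}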
	I0829 18:56:14.581262   18990 kubeadm.go:883] updating cluster {Name:addons-344587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-344587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:56:14.581356   18990 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:56:14.581398   18990 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:56:14.613224   18990 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 18:56:14.613292   18990 ssh_runner.go:195] Run: which lz4
	I0829 18:56:14.617034   18990 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 18:56:14.621198   18990 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 18:56:14.621221   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 18:56:15.914421   18990 crio.go:462] duration metric: took 1.297408054s to copy over tarball
	I0829 18:56:15.914486   18990 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 18:56:18.044985   18990 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.130478632s)
	I0829 18:56:18.045014   18990 crio.go:469] duration metric: took 2.130566777s to extract the tarball
	I0829 18:56:18.045024   18990 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 18:56:18.081642   18990 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:56:18.123715   18990 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:56:18.123734   18990 cache_images.go:84] Images are preloaded, skipping loading
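The stretch from the stat failure to "all images are preloaded" is a cache fast path: when /preloaded.tar.lz4 is absent, the ~389 MB tarball is copied over, unpacked into /var with tar -I lz4, deleted, and crictl images is re-run to confirm. A compressed sketch of that decision (tar arguments copied from the ssh_runner invocation above; the scp step is elided):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4"
		if _, err := os.Stat(tarball); os.IsNotExist(err) {
			// At this point in the log the tarball is scp'd over from the
			// host-side cache; the copy itself is not reproduced here.
			log.Printf("%s missing; would copy the preloaded images tarball first", tarball)
			return
		}
		// Unpack the image store into /var, the same tar invocation the log shows.
		tar := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if err := tar.Run(); err != nil {
			log.Fatal(err)
		}
		// Remove the tarball once extracted, as ssh_runner.go:146 does.
		if err := os.Remove(tarball); err != nil {
			log.Fatal(err)
		}
	}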
	I0829 18:56:18.123741   18990 kubeadm.go:934] updating node { 192.168.39.172 8443 v1.31.0 crio true true} ...
	I0829 18:56:18.123833   18990 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-344587 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-344587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:56:18.123903   18990 ssh_runner.go:195] Run: crio config
	I0829 18:56:18.173364   18990 cni.go:84] Creating CNI manager for ""
	I0829 18:56:18.173382   18990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:56:18.173396   18990 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:56:18.173417   18990 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-344587 NodeName:addons-344587 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:56:18.173545   18990 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-344587"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
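minikube renders the kubeadm.yaml above from Go text templates, evidently fed by the kubeadm options struct printed at kubeadm.go:181. A much-reduced sketch of that idea, using only values that appear in the log (the template shape here is illustrative, not minikube's actual template):

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		opts := struct {
			AdvertiseAddress string
			APIServerPort    int
			CRISocket        string
			NodeName         string
		}{"192.168.39.172", 8443, "unix:///var/run/crio/crio.sock", "addons-344587"}
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, opts); err != nil {
			log.Fatal(err)
		}
	}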
	I0829 18:56:18.173599   18990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:56:18.183496   18990 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:56:18.183559   18990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 18:56:18.192837   18990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0829 18:56:18.209828   18990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:56:18.226818   18990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0829 18:56:18.243177   18990 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I0829 18:56:18.246821   18990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:56:18.258454   18990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:56:18.380809   18990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:56:18.399109   18990 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587 for IP: 192.168.39.172
	I0829 18:56:18.399130   18990 certs.go:194] generating shared ca certs ...
	I0829 18:56:18.399144   18990 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:18.399287   18990 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 18:56:18.507759   18990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt ...
	I0829 18:56:18.507786   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt: {Name:mkf2998f14816a9d649599681f5ace2bd3b15bb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:18.507943   18990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key ...
	I0829 18:56:18.507953   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key: {Name:mk0f1ef094971ea9c3f026c8290bde66a6036be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:18.508026   18990 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 18:56:18.881398   18990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt ...
	I0829 18:56:18.881427   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt: {Name:mka4d0216f76512ed90b83996ade7ed626417b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:18.881614   18990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key ...
	I0829 18:56:18.881630   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key: {Name:mka035b87075afcde930c062c2cb1875970dabb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:18.881727   18990 certs.go:256] generating profile certs ...
	I0829 18:56:18.881782   18990 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.key
	I0829 18:56:18.881793   18990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt with IP's: []
	I0829 18:56:19.191129   18990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt ...
	I0829 18:56:19.191157   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: {Name:mk595166ed3f22afaf54fdfb0b502bd573fc8143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.191339   18990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.key ...
	I0829 18:56:19.191354   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.key: {Name:mk24baca044bca79b73024c8a04b788113a0b022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.191449   18990 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key.2d54a6f6
	I0829 18:56:19.191470   18990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt.2d54a6f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172]
	I0829 18:56:19.236337   18990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt.2d54a6f6 ...
	I0829 18:56:19.236366   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt.2d54a6f6: {Name:mk40299d1f1b871b96fc8c21ef18cc9e856fbcfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.236555   18990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key.2d54a6f6 ...
	I0829 18:56:19.236572   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key.2d54a6f6: {Name:mk04263d045cce1f76651eeb698397ced0bec497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.236669   18990 certs.go:381] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt.2d54a6f6 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt
	I0829 18:56:19.236739   18990 certs.go:385] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key.2d54a6f6 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key
	I0829 18:56:19.236796   18990 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.key
	I0829 18:56:19.236809   18990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.crt with IP's: []
	I0829 18:56:19.327890   18990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.crt ...
	I0829 18:56:19.327915   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.crt: {Name:mked626427b26604c6ca53369dde755937686f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:19.328088   18990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.key ...
	I0829 18:56:19.328101   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.key: {Name:mk953deec79398c279f957cbebec5a918222e73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
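Each "generating ... ca cert" step above (certs.go/crypto.go) amounts to creating a self-signed x509 CA and writing the PEM pair under a file lock. A standalone sketch of the core of that with the standard library (the common name matches the log; key size and lifetime here are illustrative):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// Self-signed: the template acts as both subject and issuer.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	}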
	I0829 18:56:19.328285   18990 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 18:56:19.328319   18990 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 18:56:19.328339   18990 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:56:19.328360   18990 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 18:56:19.328883   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:56:19.352639   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 18:56:19.375448   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:56:19.397949   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:56:19.420280   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 18:56:19.445420   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 18:56:19.469941   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:56:19.493954   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 18:56:19.517481   18990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:56:19.540749   18990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:56:19.557225   18990 ssh_runner.go:195] Run: openssl version
	I0829 18:56:19.563530   18990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:56:19.574899   18990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:56:19.579661   18990 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:56:19.579718   18990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:56:19.585781   18990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:56:19.596492   18990 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:56:19.600908   18990 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:56:19.600959   18990 kubeadm.go:392] StartCluster: {Name:addons-344587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-344587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:56:19.601045   18990 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 18:56:19.601093   18990 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 18:56:19.637569   18990 cri.go:89] found id: ""
	I0829 18:56:19.637643   18990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:56:19.647689   18990 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:56:19.657011   18990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:56:19.666328   18990 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:56:19.666343   18990 kubeadm.go:157] found existing configuration files:
	
	I0829 18:56:19.666376   18990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:56:19.675716   18990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:56:19.675775   18990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:56:19.685386   18990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:56:19.694416   18990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:56:19.694471   18990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:56:19.703922   18990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:56:19.712826   18990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:56:19.712873   18990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:56:19.722059   18990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:56:19.731001   18990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:56:19.731050   18990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 18:56:19.740296   18990 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 18:56:19.794947   18990 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:56:19.795099   18990 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:56:19.898282   18990 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:56:19.898409   18990 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:56:19.898526   18990 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:56:19.907493   18990 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:56:19.997177   18990 out.go:235]   - Generating certificates and keys ...
	I0829 18:56:19.997273   18990 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:56:19.997359   18990 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:56:19.997433   18990 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:56:20.334115   18990 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:56:20.488051   18990 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:56:20.567263   18990 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:56:20.715089   18990 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:56:20.715281   18990 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-344587 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0829 18:56:21.029598   18990 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:56:21.029765   18990 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-344587 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0829 18:56:21.106114   18990 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:56:21.317964   18990 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:56:21.407628   18990 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:56:21.407696   18990 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:56:21.629562   18990 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:56:21.754916   18990 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:56:21.931143   18990 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:56:22.124355   18990 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:56:22.279253   18990 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:56:22.279642   18990 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:56:22.282088   18990 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:56:22.284191   18990 out.go:235]   - Booting up control plane ...
	I0829 18:56:22.284310   18990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:56:22.284403   18990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:56:22.284482   18990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:56:22.304603   18990 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:56:22.312804   18990 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:56:22.312862   18990 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:56:22.435203   18990 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:56:22.435353   18990 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:56:22.936484   18990 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.995362ms
	I0829 18:56:22.936601   18990 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:56:27.436410   18990 kubeadm.go:310] [api-check] The API server is healthy after 4.501398688s
	I0829 18:56:27.454666   18990 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:56:27.477429   18990 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:56:27.508526   18990 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:56:27.508785   18990 kubeadm.go:310] [mark-control-plane] Marking the node addons-344587 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:56:27.520541   18990 kubeadm.go:310] [bootstrap-token] Using token: q9x0a1.3m9323w9pql012fx
	I0829 18:56:27.521864   18990 out.go:235]   - Configuring RBAC rules ...
	I0829 18:56:27.521972   18990 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:56:27.526125   18990 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:56:27.535702   18990 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:56:27.539676   18990 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:56:27.542568   18990 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:56:27.548387   18990 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:56:27.844139   18990 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:56:28.295458   18990 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:56:28.840023   18990 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:56:28.840956   18990 kubeadm.go:310] 
	I0829 18:56:28.841053   18990 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:56:28.841063   18990 kubeadm.go:310] 
	I0829 18:56:28.841160   18990 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:56:28.841186   18990 kubeadm.go:310] 
	I0829 18:56:28.841234   18990 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:56:28.841322   18990 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:56:28.841395   18990 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:56:28.841404   18990 kubeadm.go:310] 
	I0829 18:56:28.841484   18990 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:56:28.841493   18990 kubeadm.go:310] 
	I0829 18:56:28.841553   18990 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:56:28.841567   18990 kubeadm.go:310] 
	I0829 18:56:28.841651   18990 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:56:28.841761   18990 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:56:28.841862   18990 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:56:28.841873   18990 kubeadm.go:310] 
	I0829 18:56:28.841975   18990 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:56:28.842087   18990 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:56:28.842100   18990 kubeadm.go:310] 
	I0829 18:56:28.842176   18990 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q9x0a1.3m9323w9pql012fx \
	I0829 18:56:28.842267   18990 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 18:56:28.842293   18990 kubeadm.go:310] 	--control-plane 
	I0829 18:56:28.842300   18990 kubeadm.go:310] 
	I0829 18:56:28.842391   18990 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:56:28.842407   18990 kubeadm.go:310] 
	I0829 18:56:28.842491   18990 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q9x0a1.3m9323w9pql012fx \
	I0829 18:56:28.842651   18990 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
	I0829 18:56:28.843797   18990 kubeadm.go:310] W0829 18:56:19.773325     811 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:56:28.844133   18990 kubeadm.go:310] W0829 18:56:19.774616     811 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:56:28.844272   18990 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 18:56:28.844299   18990 cni.go:84] Creating CNI manager for ""
	I0829 18:56:28.844312   18990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:56:28.846071   18990 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 18:56:28.847426   18990 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 18:56:28.857688   18990 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 18:56:28.878902   18990 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:56:28.878958   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:28.878992   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-344587 minikube.k8s.io/updated_at=2024_08_29T18_56_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=addons-344587 minikube.k8s.io/primary=true
	I0829 18:56:28.907190   18990 ops.go:34] apiserver oom_adj: -16
	I0829 18:56:29.042348   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:29.543055   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:30.042999   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:30.542653   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:31.042960   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:31.542779   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:32.042560   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:32.543114   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:33.043338   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:33.542400   18990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:33.657610   18990 kubeadm.go:1113] duration metric: took 4.778691649s to wait for elevateKubeSystemPrivileges
	I0829 18:56:33.657651   18990 kubeadm.go:394] duration metric: took 14.056694589s to StartCluster
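The ten `kubectl get sa default` runs above, spaced roughly 500 ms apart, are a readiness poll: the default service account appearing is the signal that elevateKubeSystemPrivileges can finish. The shape of that loop in Go (interval and timeout here mirror the log's cadence; the real implementation lives in minikube's kubeadm bootstrapper):

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls until `kubectl get sa default` succeeds or the
	// deadline passes, matching the ~500ms retry spacing visible in the log.
	func waitForDefaultSA(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			err := exec.Command("kubectl", "get", "sa", "default").Run()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return err
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForDefaultSA(2 * time.Minute); err != nil {
			log.Fatal(err)
		}
	}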
	I0829 18:56:33.657673   18990 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:33.657802   18990 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 18:56:33.658294   18990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:33.658498   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:56:33.658563   18990 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:56:33.658614   18990 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0829 18:56:33.658712   18990 addons.go:69] Setting yakd=true in profile "addons-344587"
	I0829 18:56:33.658734   18990 config.go:182] Loaded profile config "addons-344587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:56:33.658743   18990 addons.go:234] Setting addon yakd=true in "addons-344587"
	I0829 18:56:33.658752   18990 addons.go:69] Setting helm-tiller=true in profile "addons-344587"
	I0829 18:56:33.658761   18990 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-344587"
	I0829 18:56:33.658774   18990 addons.go:69] Setting registry=true in profile "addons-344587"
	I0829 18:56:33.658781   18990 addons.go:69] Setting gcp-auth=true in profile "addons-344587"
	I0829 18:56:33.658782   18990 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-344587"
	I0829 18:56:33.658779   18990 addons.go:69] Setting cloud-spanner=true in profile "addons-344587"
	I0829 18:56:33.658790   18990 addons.go:69] Setting volumesnapshots=true in profile "addons-344587"
	I0829 18:56:33.658799   18990 mustload.go:65] Loading cluster: addons-344587
	I0829 18:56:33.658800   18990 addons.go:234] Setting addon registry=true in "addons-344587"
	I0829 18:56:33.658807   18990 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-344587"
	I0829 18:56:33.658811   18990 addons.go:234] Setting addon cloud-spanner=true in "addons-344587"
	I0829 18:56:33.658813   18990 addons.go:234] Setting addon volumesnapshots=true in "addons-344587"
	I0829 18:56:33.658831   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658833   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658834   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658837   18990 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-344587"
	I0829 18:56:33.658846   18990 addons.go:69] Setting ingress=true in profile "addons-344587"
	I0829 18:56:33.658863   18990 addons.go:234] Setting addon ingress=true in "addons-344587"
	I0829 18:56:33.658865   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658889   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658925   18990 config.go:182] Loaded profile config "addons-344587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:56:33.658836   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.658783   18990 addons.go:69] Setting volcano=true in profile "addons-344587"
	I0829 18:56:33.659252   18990 addons.go:234] Setting addon volcano=true in "addons-344587"
	I0829 18:56:33.659252   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.659267   18990 addons.go:69] Setting storage-provisioner=true in profile "addons-344587"
	I0829 18:56:33.659273   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.659282   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659289   18990 addons.go:234] Setting addon storage-provisioner=true in "addons-344587"
	I0829 18:56:33.659291   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.659309   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.659322   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.659337   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659368   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.659400   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659432   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.659479   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659549   18990 addons.go:234] Setting addon helm-tiller=true in "addons-344587"
	I0829 18:56:33.658761   18990 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-344587"
	I0829 18:56:33.659610   18990 addons.go:69] Setting inspektor-gadget=true in profile "addons-344587"
	I0829 18:56:33.659640   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.659688   18990 addons.go:234] Setting addon inspektor-gadget=true in "addons-344587"
	I0829 18:56:33.659310   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659869   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.660003   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.660033   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659615   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.660105   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.660245   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.660276   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659252   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.660606   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659251   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.661085   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.658772   18990 addons.go:69] Setting default-storageclass=true in profile "addons-344587"
	I0829 18:56:33.670709   18990 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-344587"
	I0829 18:56:33.658775   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.659589   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.670898   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659619   18990 addons.go:69] Setting ingress-dns=true in profile "addons-344587"
	I0829 18:56:33.671005   18990 addons.go:234] Setting addon ingress-dns=true in "addons-344587"
	I0829 18:56:33.659626   18990 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-344587"
	I0829 18:56:33.671326   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.671369   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.671373   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.671404   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.671440   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.659631   18990 addons.go:69] Setting metrics-server=true in profile "addons-344587"
	I0829 18:56:33.671513   18990 addons.go:234] Setting addon metrics-server=true in "addons-344587"
	I0829 18:56:33.671545   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.666650   18990 out.go:177] * Verifying Kubernetes components...
	I0829 18:56:33.671400   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.671874   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.671911   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.671053   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.673380   18990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:56:33.680875   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I0829 18:56:33.681397   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.682001   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.682020   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.682079   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43679
	I0829 18:56:33.682558   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.682826   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33189
	I0829 18:56:33.683408   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.683427   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.683497   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.683572   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.684019   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.684047   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.684281   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.684297   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.684418   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.684576   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42785
	I0829 18:56:33.684658   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.685239   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.686589   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.686991   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.687043   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.687652   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.687695   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.693010   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41217
	I0829 18:56:33.693441   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.693863   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.693883   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.694222   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.695000   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.695017   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.695025   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.695047   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.695732   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.696286   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.696303   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.696680   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.697196   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.697227   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.708509   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35881
	I0829 18:56:33.709301   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.710007   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.710025   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.710503   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.711094   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.711134   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.712697   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38887
	I0829 18:56:33.713230   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.713833   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.713849   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.714249   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.714858   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.714894   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.717065   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
	I0829 18:56:33.718427   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40143
	I0829 18:56:33.719007   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.719015   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.719564   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.719572   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.719583   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.719587   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.719642   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I0829 18:56:33.719977   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.720035   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.720529   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.720541   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.720576   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.720813   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41587
	I0829 18:56:33.721306   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.721600   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.721641   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.721789   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.721802   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.722188   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.722210   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.722517   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.722694   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.722698   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.722766   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40775
	I0829 18:56:33.722933   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.723536   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.723992   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.724014   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.724373   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.724931   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.724969   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.727773   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42855
	I0829 18:56:33.728271   18990 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-344587"
	I0829 18:56:33.728315   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.728668   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.728713   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.729106   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.730100   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.730122   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.730424   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.731053   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.731091   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.733084   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34629
	I0829 18:56:33.733526   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.733981   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.734008   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.734333   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.734516   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.737054   18990 addons.go:234] Setting addon default-storageclass=true in "addons-344587"
	I0829 18:56:33.737093   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:33.737442   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.737488   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.738915   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0829 18:56:33.739299   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.739833   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.739858   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.740211   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.740415   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.740698   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44413
	I0829 18:56:33.742219   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.742936   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.743473   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.743489   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.743850   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.744037   18990 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0829 18:56:33.744401   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.744444   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.746700   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39535
	I0829 18:56:33.747098   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.747483   18990 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:56:33.747541   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.747555   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.747849   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.748004   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.749857   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.750156   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:33.750162   18990 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:56:33.750176   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:33.750401   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:33.750415   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:33.750424   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:33.750431   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:33.751640   18990 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:56:33.751661   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0829 18:56:33.751678   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.753152   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41841
	I0829 18:56:33.753646   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.754124   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.754140   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.754697   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:33.754792   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.754874   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:33.754882   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	W0829 18:56:33.754961   18990 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0829 18:56:33.755222   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.756506   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.757125   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.757153   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.757371   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.757578   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.757800   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.758075   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.758388   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0829 18:56:33.758562   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.758742   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.759174   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.759196   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.759539   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.759780   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.760361   18990 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0829 18:56:33.761354   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.761672   18990 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0829 18:56:33.761690   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0829 18:56:33.761708   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.762934   18990 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 18:56:33.764151   18990 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 18:56:33.764172   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 18:56:33.764189   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.765442   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.766030   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.766066   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.766227   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.766373   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.766477   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.766582   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.769791   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.770164   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0829 18:56:33.770308   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.770326   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.770497   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.770711   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.770718   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.770896   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.770957   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44035
	I0829 18:56:33.771212   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.771224   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.771288   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.771479   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.772071   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.772254   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.772754   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
	I0829 18:56:33.773389   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.773405   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.773886   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.774057   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.774157   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.774715   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.774741   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.775380   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.775427   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.775749   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45059
	I0829 18:56:33.775868   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.776433   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.776878   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 18:56:33.777144   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.777191   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.777462   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.777480   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.777870   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.778380   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.779809   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 18:56:33.780037   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.780244   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35549
	I0829 18:56:33.780742   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.781222   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.781248   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.781369   18990 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 18:56:33.781563   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.781761   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.782643   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 18:56:33.783896   18990 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 18:56:33.783960   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 18:56:33.784513   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39187
	I0829 18:56:33.784537   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44109
	I0829 18:56:33.784631   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.785647   18990 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:56:33.785668   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 18:56:33.785684   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.786509   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35783
	I0829 18:56:33.786516   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.786797   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 18:56:33.786858   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.787121   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.787314   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.787336   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.787455   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.787473   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.787695   18990 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 18:56:33.787783   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.787912   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.787932   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.788077   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.788137   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.788311   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.788468   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.788893   18990 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:56:33.788909   18990 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 18:56:33.788928   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.788930   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.788964   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.789455   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 18:56:33.790036   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.790306   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.791044   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.791435   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.791870   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.791965   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.792018   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.792283   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.792425   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 18:56:33.792449   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 18:56:33.792452   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.794946   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.794990   18990 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 18:56:33.795455   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:56:33.795469   18990 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 18:56:33.795488   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.796133   18990 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 18:56:33.796965   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0829 18:56:33.797111   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.797138   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.797278   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:56:33.797280   18990 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 18:56:33.797524   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.797291   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 18:56:33.797627   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.798327   18990 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:56:33.798342   18990 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 18:56:33.798363   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.798941   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.798951   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.799251   18990 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:56:33.799263   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 18:56:33.799281   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.799598   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.800343   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.800449   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.800465   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.801716   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.801738   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36645
	I0829 18:56:33.801793   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I0829 18:56:33.802890   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.803204   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.803825   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.803851   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.805182   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.805201   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.805215   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39841
	I0829 18:56:33.805244   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.805256   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.805307   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.805329   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.805337   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.805789   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.805807   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.805811   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.805825   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.805840   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.805855   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.805882   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.806005   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.806215   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.806223   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.806256   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.806267   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.806359   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.806379   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.806397   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.806440   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.806668   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.806692   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.806708   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.806860   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.806874   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.806956   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.807013   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.807169   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.807174   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.807190   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.807219   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.807331   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.807354   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.807446   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.807495   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.807556   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.807727   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.808112   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:33.808136   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:33.809511   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.810029   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.811278   18990 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 18:56:33.812241   18990 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:56:33.813129   18990 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:56:33.813157   18990 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 18:56:33.813176   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.813977   18990 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:56:33.814001   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:56:33.814020   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.816395   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.816894   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.816918   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.817062   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.817195   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I0829 18:56:33.817338   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.817486   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.817534   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.817610   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.817837   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.818145   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.818173   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.818354   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.818387   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.818455   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.818655   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.818809   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.818859   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.818932   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.819357   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.820909   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	W0829 18:56:33.822691   18990 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0829 18:56:33.822718   18990 retry.go:31] will retry after 259.989848ms: ssh: handshake failed: EOF
	I0829 18:56:33.823215   18990 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0829 18:56:33.824471   18990 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:56:33.824488   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0829 18:56:33.824506   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.827030   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37461
	I0829 18:56:33.827117   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0829 18:56:33.827401   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.827435   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.827587   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:33.827788   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.827812   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.827906   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.827920   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.828022   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.828075   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:33.828084   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:33.828178   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.828317   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.828370   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.828385   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:33.828540   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.828549   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.828724   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:33.830078   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.830313   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:33.830515   18990 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:56:33.830530   18990 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:56:33.830569   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.831752   18990 out.go:177]   - Using image docker.io/busybox:stable
	I0829 18:56:33.832911   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.833222   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.833244   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.833366   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.833520   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.833767   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.833891   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:33.834507   18990 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	W0829 18:56:33.835031   18990 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36686->192.168.39.172:22: read: connection reset by peer
	I0829 18:56:33.835058   18990 retry.go:31] will retry after 162.890781ms: ssh: handshake failed: read tcp 192.168.39.1:36686->192.168.39.172:22: read: connection reset by peer
	I0829 18:56:33.835835   18990 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:56:33.835851   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 18:56:33.835865   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:33.838727   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.839112   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:33.839133   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:33.839296   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:33.839446   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:33.839562   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:33.839657   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	W0829 18:56:33.848450   18990 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36700->192.168.39.172:22: read: connection reset by peer
	I0829 18:56:33.848470   18990 retry.go:31] will retry after 306.282122ms: ssh: handshake failed: read tcp 192.168.39.1:36700->192.168.39.172:22: read: connection reset by peer
	W0829 18:56:33.999144   18990 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36716->192.168.39.172:22: read: connection reset by peer
	I0829 18:56:33.999169   18990 retry.go:31] will retry after 424.61405ms: ssh: handshake failed: read tcp 192.168.39.1:36716->192.168.39.172:22: read: connection reset by peer
	I0829 18:56:34.150588   18990 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:56:34.150609   18990 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 18:56:34.160016   18990 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:56:34.160035   18990 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 18:56:34.209643   18990 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0829 18:56:34.209668   18990 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0829 18:56:34.212743   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:56:34.212768   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 18:56:34.219438   18990 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:56:34.219459   18990 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 18:56:34.225374   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:56:34.251243   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:56:34.341500   18990 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:56:34.341523   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 18:56:34.345542   18990 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:56:34.345561   18990 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 18:56:34.360911   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:56:34.367165   18990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:56:34.367441   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
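The bash pipeline above rewrites the coredns ConfigMap in place: it fetches the Corefile with kubectl get, uses sed to insert a hosts stanza ahead of the existing "forward . /etc/resolv.conf" line (plus a log directive ahead of errors), and feeds the result back through kubectl replace. Reconstructed from the sed expressions, the inserted stanza is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

This is what lets pods resolve host.minikube.internal to the host side of the KVM network; the injection is confirmed by the start.go:971 line at 18:56:42.669375 below.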
	I0829 18:56:34.408618   18990 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:56:34.408647   18990 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0829 18:56:34.412998   18990 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:56:34.413014   18990 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 18:56:34.414485   18990 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:56:34.414505   18990 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 18:56:34.416656   18990 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 18:56:34.416674   18990 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 18:56:34.421178   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 18:56:34.441682   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:56:34.441714   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 18:56:34.537974   18990 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:56:34.537995   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 18:56:34.574614   18990 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:56:34.574648   18990 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 18:56:34.590779   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:56:34.613418   18990 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:56:34.613450   18990 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 18:56:34.621890   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:56:34.650966   18990 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:56:34.650990   18990 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 18:56:34.656296   18990 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:56:34.656330   18990 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 18:56:34.662014   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:56:34.662031   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 18:56:34.755855   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:56:34.777852   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:56:34.841238   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:56:34.841264   18990 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 18:56:34.860870   18990 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:56:34.860892   18990 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 18:56:34.865493   18990 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:56:34.865518   18990 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 18:56:34.891256   18990 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:56:34.891275   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 18:56:34.918114   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:56:34.918134   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 18:56:35.013931   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:56:35.035330   18990 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:56:35.035353   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 18:56:35.037518   18990 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:56:35.037536   18990 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 18:56:35.083605   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:56:35.084671   18990 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:56:35.084696   18990 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 18:56:35.109523   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:56:35.214128   18990 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:56:35.214165   18990 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 18:56:35.300619   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:56:35.300639   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 18:56:35.319430   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:56:35.382473   18990 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:56:35.382493   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 18:56:35.500157   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:56:35.500185   18990 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 18:56:35.637217   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:56:35.652837   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.401553749s)
	I0829 18:56:35.652892   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:35.652903   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:35.652993   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.427586081s)
	I0829 18:56:35.653037   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:35.653049   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:35.653235   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:35.653247   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:35.653256   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:35.653263   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:35.653306   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:35.653327   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:35.653336   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:35.653344   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:35.653512   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:35.653530   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:35.653545   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:35.653571   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:35.653594   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:35.653607   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:35.707655   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:35.707678   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:35.708069   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:35.708091   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:35.708109   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:35.793211   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:56:35.793237   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 18:56:35.962496   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:56:35.962518   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 18:56:36.268015   18990 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:56:36.268034   18990 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 18:56:36.448977   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:56:40.901860   18990 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 18:56:40.901896   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:40.905410   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:40.905896   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:40.905938   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:40.906115   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:40.906299   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:40.906451   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:40.906626   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:41.427728   18990 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 18:56:41.620090   18990 addons.go:234] Setting addon gcp-auth=true in "addons-344587"
	I0829 18:56:41.620153   18990 host.go:66] Checking if "addons-344587" exists ...
	I0829 18:56:41.620485   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:41.620517   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:41.636187   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0829 18:56:41.636532   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:41.637102   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:41.637131   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:41.637428   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:41.638003   18990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:41.638024   18990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:41.669951   18990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44101
	I0829 18:56:41.670306   18990 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:41.670883   18990 main.go:141] libmachine: Using API Version  1
	I0829 18:56:41.670910   18990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:41.671238   18990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:41.671489   18990 main.go:141] libmachine: (addons-344587) Calling .GetState
	I0829 18:56:41.673184   18990 main.go:141] libmachine: (addons-344587) Calling .DriverName
	I0829 18:56:41.673393   18990 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 18:56:41.673419   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHHostname
	I0829 18:56:41.676410   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:41.676882   18990 main.go:141] libmachine: (addons-344587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:42:33", ip: ""} in network mk-addons-344587: {Iface:virbr1 ExpiryTime:2024-08-29 19:56:05 +0000 UTC Type:0 Mac:52:54:00:03:42:33 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-344587 Clientid:01:52:54:00:03:42:33}
	I0829 18:56:41.676905   18990 main.go:141] libmachine: (addons-344587) DBG | domain addons-344587 has defined IP address 192.168.39.172 and MAC address 52:54:00:03:42:33 in network mk-addons-344587
	I0829 18:56:41.677058   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHPort
	I0829 18:56:41.677211   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHKeyPath
	I0829 18:56:41.677367   18990 main.go:141] libmachine: (addons-344587) Calling .GetSSHUsername
	I0829 18:56:41.677483   18990 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/addons-344587/id_rsa Username:docker}
	I0829 18:56:42.669306   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.308358481s)
	I0829 18:56:42.669349   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669347   18990 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.301873517s)
	I0829 18:56:42.669360   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669375   18990 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0829 18:56:42.669424   18990 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.302234847s)
	I0829 18:56:42.669452   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.248249274s)
	I0829 18:56:42.669475   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669484   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669555   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.078738309s)
	I0829 18:56:42.669594   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669607   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669707   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.047793159s)
	I0829 18:56:42.669725   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669732   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669822   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.913945077s)
	I0829 18:56:42.669839   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669846   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669914   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.892029945s)
	I0829 18:56:42.669930   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669939   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669947   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.669962   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.669971   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.669978   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670011   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.656061298s)
	I0829 18:56:42.670027   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670034   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670035   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.670044   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.670052   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670058   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.669916   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.670134   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.586501236s)
	I0829 18:56:42.670151   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670158   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670232   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.560685281s)
	I0829 18:56:42.670257   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670266   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670396   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.35093677s)
	W0829 18:56:42.670434   18990 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:56:42.670454   18990 retry.go:31] will retry after 369.342261ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
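The failure and retry above are a CRD establishment race, not a broken manifest: a single kubectl apply both creates the VolumeSnapshot CRDs and a VolumeSnapshotClass object, and the API server has no mapping for kind VolumeSnapshotClass until the freshly created CRD is established — hence "ensure CRDs are installed first". The retry at 18:56:43.040019 below reruns the same apply with --force and completes about 2.5s later, once the CRDs are served. A hedged sketch of sidestepping the race by ordering the applies — file names are illustrative, and this is not minikube's addons.go:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func kubectl(args ...string) error {
    	cmd := exec.Command("kubectl", args...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	// 1. Create the CRDs on their own first.
    	if err := kubectl("apply", "-f", "snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
    		log.Fatal(err)
    	}
    	// 2. Block until the API server reports the CRD as established.
    	if err := kubectl("wait", "--for=condition=established", "--timeout=60s",
    		"crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
    		log.Fatal(err)
    	}
    	// 3. Only now apply objects of the new kind.
    	if err := kubectl("apply", "-f", "csi-hostpath-snapshotclass.yaml"); err != nil {
    		log.Fatal(err)
    	}
    }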
	I0829 18:56:42.670554   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.670555   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.033295535s)
	I0829 18:56:42.670551   18990 node_ready.go:35] waiting up to 6m0s for node "addons-344587" to be "Ready" ...
	I0829 18:56:42.670566   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.670575   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670576   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670583   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670595   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.670666   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.670668   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.670689   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.670690   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.670697   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.670698   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.670706   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.670706   18990 addons.go:475] Verifying addon ingress=true in "addons-344587"
	I0829 18:56:42.670713   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.671349   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.671369   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.671394   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.671401   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.671409   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.671418   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.671484   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.671504   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.671511   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.671518   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.671524   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.671559   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.671575   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.671582   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.671589   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.671595   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.671628   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.671644   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.671692   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.673185   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.673214   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.673219   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.673225   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.673232   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.674001   18990 out.go:177] * Verifying ingress addon...
	I0829 18:56:42.674369   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.674390   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.674394   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.674400   18990 addons.go:475] Verifying addon registry=true in "addons-344587"
	I0829 18:56:42.674749   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.674780   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.674788   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675112   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.675155   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675162   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675170   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.675177   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.675236   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.675254   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675260   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675267   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.675273   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.675309   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.675327   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675334   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675407   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:42.675498   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675505   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675737   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675746   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.675813   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.675820   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.676275   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.676288   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.676537   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.676546   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:42.676554   18990 addons.go:475] Verifying addon metrics-server=true in "addons-344587"
	I0829 18:56:42.677460   18990 out.go:177] * Verifying registry addon...
	I0829 18:56:42.678505   18990 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0829 18:56:42.678573   18990 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-344587 service yakd-dashboard -n yakd-dashboard
	
	I0829 18:56:42.679342   18990 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 18:56:42.721945   18990 node_ready.go:49] node "addons-344587" has status "Ready":"True"
	I0829 18:56:42.721968   18990 node_ready.go:38] duration metric: took 51.397004ms for node "addons-344587" to be "Ready" ...
	I0829 18:56:42.721979   18990 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:56:42.738146   18990 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:56:42.738171   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:42.738232   18990 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0829 18:56:42.738245   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
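Each kapi.go:96 line in the long run that follows is one tick of a poll loop: the registry and ingress-nginx pods exist but are still Pending, so the wait sleeps and re-checks until the pods report Ready or the deadline passes. A minimal stdlib sketch of that loop shape — the interval and the stand-in check are hypothetical; minikube's real check GETs the pod and inspects its Ready condition in kapi.go:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitFor polls check every interval until it reports done or the
    // timeout elapses; each negative poll corresponds to one
    // "waiting for pod ... current state: Pending" line in the log.
    func waitFor(timeout, interval time.Duration, check func() (bool, error)) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		done, err := check()
    		if err != nil {
    			return err
    		}
    		if done {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for condition")
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	// Stand-in for a real pod-Ready check (hypothetical).
    	polls := 0
    	err := waitFor(6*time.Minute, 500*time.Millisecond, func() (bool, error) {
    		polls++
    		return polls >= 3, nil // pretend the pod turns Ready on the third poll
    	})
    	fmt.Println("ready after", polls, "polls, err =", err)
    }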
	I0829 18:56:42.763259   18990 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:42.780817   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:42.780844   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:42.781106   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:42.781123   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:43.040019   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:56:43.181593   18990 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-344587" context rescaled to 1 replicas
	I0829 18:56:43.200300   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:43.202392   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:43.854515   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:43.855345   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:44.201644   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:44.202443   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:44.387997   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.938966875s)
	I0829 18:56:44.388024   18990 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.714610192s)
	I0829 18:56:44.388060   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:44.388076   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:44.388398   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:44.388399   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:44.388423   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:44.388436   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:44.388446   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:44.388660   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:44.388693   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:44.388708   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:44.388728   18990 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-344587"
	I0829 18:56:44.390289   18990 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 18:56:44.390333   18990 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:56:44.391916   18990 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 18:56:44.392479   18990 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 18:56:44.393017   18990 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:56:44.393047   18990 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 18:56:44.437296   18990 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:56:44.437316   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:44.516363   18990 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:56:44.516386   18990 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 18:56:44.625475   18990 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:56:44.625495   18990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0829 18:56:44.694404   18990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:56:44.710971   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:44.714864   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:44.801891   18990 pod_ready.go:103] pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:44.896516   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:45.184541   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:45.185775   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:45.398848   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:45.581703   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.541635237s)
	I0829 18:56:45.581752   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:45.581767   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:45.582058   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:45.582084   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:45.582095   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:45.582102   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:45.582151   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:45.582390   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:45.582429   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:45.582481   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:45.685365   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:45.685844   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:45.900158   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:46.159769   18990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.465331058s)
	I0829 18:56:46.159816   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:46.159836   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:46.160177   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:46.160214   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:46.160224   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:46.160233   18990 main.go:141] libmachine: Making call to close driver server
	I0829 18:56:46.160240   18990 main.go:141] libmachine: (addons-344587) Calling .Close
	I0829 18:56:46.160479   18990 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:56:46.160495   18990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:56:46.160504   18990 main.go:141] libmachine: (addons-344587) DBG | Closing plugin on server side
	I0829 18:56:46.161466   18990 addons.go:475] Verifying addon gcp-auth=true in "addons-344587"
	I0829 18:56:46.163120   18990 out.go:177] * Verifying gcp-auth addon...
	I0829 18:56:46.165519   18990 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0829 18:56:46.185169   18990 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:56:46.185187   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:46.226445   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:46.226499   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:46.398205   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:46.671641   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:46.687495   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:46.688298   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:46.899177   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:47.168846   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:47.269193   18990 pod_ready.go:103] pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:47.270024   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:47.270716   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:47.398276   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:47.669190   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:47.682897   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:47.683324   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:47.897509   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:48.169032   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:48.184274   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:48.184383   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:48.397220   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:48.669220   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:48.682380   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:48.683472   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:48.896581   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:49.175691   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:49.183464   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:49.184739   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:49.635636   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:49.636948   18990 pod_ready.go:103] pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:49.668411   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:49.682494   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:49.683366   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:49.896900   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:50.169141   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:50.182913   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:50.183797   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:50.397440   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:50.669246   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:50.682992   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:50.683296   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:50.766028   18990 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-fljpw" not found
	I0829 18:56:50.766050   18990 pod_ready.go:82] duration metric: took 8.00276735s for pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace to be "Ready" ...
	E0829 18:56:50.766059   18990 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-fljpw" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-fljpw" not found
	I0829 18:56:50.766065   18990 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t9nhw" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.771805   18990 pod_ready.go:93] pod "coredns-6f6b679f8f-t9nhw" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:50.771843   18990 pod_ready.go:82] duration metric: took 5.770841ms for pod "coredns-6f6b679f8f-t9nhw" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.771858   18990 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.778901   18990 pod_ready.go:93] pod "etcd-addons-344587" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:50.778924   18990 pod_ready.go:82] duration metric: took 7.055033ms for pod "etcd-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.778933   18990 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.787991   18990 pod_ready.go:93] pod "kube-apiserver-addons-344587" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:50.788017   18990 pod_ready.go:82] duration metric: took 9.072661ms for pod "kube-apiserver-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.788030   18990 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.795671   18990 pod_ready.go:93] pod "kube-controller-manager-addons-344587" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:50.795689   18990 pod_ready.go:82] duration metric: took 7.649451ms for pod "kube-controller-manager-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.795700   18990 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lgcxw" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.898617   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:50.968239   18990 pod_ready.go:93] pod "kube-proxy-lgcxw" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:50.968267   18990 pod_ready.go:82] duration metric: took 172.559179ms for pod "kube-proxy-lgcxw" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:50.968280   18990 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:51.170579   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:51.183357   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:51.183460   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:51.367505   18990 pod_ready.go:93] pod "kube-scheduler-addons-344587" in "kube-system" namespace has status "Ready":"True"
	I0829 18:56:51.367538   18990 pod_ready.go:82] duration metric: took 399.24913ms for pod "kube-scheduler-addons-344587" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:51.367550   18990 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:51.397192   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:51.761866   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:51.761991   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:51.762363   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:51.896480   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:52.169660   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:52.186439   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:52.186848   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:52.397676   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:52.669400   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:52.682397   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:52.682617   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:52.897411   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:53.169721   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:53.184323   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:53.184737   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:53.374149   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:53.397202   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:53.668898   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:53.682445   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:53.682704   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:53.897088   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:54.172913   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:54.182087   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:54.184118   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:54.397034   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:54.669527   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:54.683280   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:54.683838   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:54.897205   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:55.169589   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:55.183889   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:55.184209   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:55.375125   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:55.397322   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:55.668681   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:55.682670   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:55.683015   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:56.069249   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:56.168996   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:56.183093   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:56.183177   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:56.397533   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:56.670051   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:56.682495   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:56.683368   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:56.897459   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:57.169182   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:57.183116   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:57.184347   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:57.376144   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:57.397074   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:57.670186   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:57.683268   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:57.684614   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:57.897006   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:58.170523   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:58.183070   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:58.183215   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:58.396251   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:58.670888   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:58.779673   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:58.780745   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:58.902984   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:59.172390   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:59.183921   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:59.186092   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:59.396707   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:59.669214   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:56:59.685853   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:59.685887   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:59.873857   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:59.896170   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:00.169557   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:00.183863   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:00.184197   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:00.396380   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:00.669459   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:00.682973   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:00.684561   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:00.897013   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:01.169578   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:01.183325   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:01.183954   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:01.397283   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:01.669328   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:01.683006   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:01.683150   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:01.896786   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:02.169870   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:02.183250   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:02.184348   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:02.395075   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:02.397237   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:02.669855   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:02.686242   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:02.688366   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:02.898895   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:03.169561   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:03.184647   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:03.185289   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:03.397046   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:03.669112   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:03.682276   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:03.682683   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:03.897109   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:04.168963   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:04.182332   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:04.183973   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:04.396975   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:04.669614   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:04.683124   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:04.683347   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:04.873985   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:04.896943   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:05.169958   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:05.183528   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:05.187121   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:05.398577   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:05.669571   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:05.683191   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:05.683832   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:05.896841   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:06.169501   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:06.183520   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:06.184624   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:06.398361   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:06.668833   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:06.683988   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:06.684316   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:06.897127   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:07.169829   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:07.181726   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:07.183130   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:07.374378   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:07.397367   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:07.850766   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:07.851123   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:07.851346   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:07.897642   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:08.169471   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:08.183648   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:08.184288   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:08.396754   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:08.668753   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:08.683569   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:08.683850   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:08.896776   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:09.169838   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:09.184520   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:09.184757   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:09.561873   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:09.567669   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:09.669578   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:09.683274   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:09.683280   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:09.897002   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:10.169044   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:10.181987   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:10.182399   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:10.397928   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:10.669457   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:10.683541   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:10.683807   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:10.896493   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:11.168933   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:11.183346   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:11.184965   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:11.397290   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:11.669440   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:11.682915   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:11.684471   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:11.873117   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:11.896880   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:12.462502   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:12.462504   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:12.462546   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:12.462815   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:12.669512   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:12.682339   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:12.682757   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:12.896921   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:13.169375   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:13.182937   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:13.183252   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:13.396471   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:13.669340   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:13.682743   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:13.683149   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:13.897633   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:14.169741   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:14.183149   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:14.183611   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:14.373233   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:14.397241   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:14.669880   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:14.684196   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:14.685153   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:14.897486   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:15.168735   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:15.182856   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:15.183768   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:15.396668   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:15.669109   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:15.682395   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:15.683471   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:15.896837   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:16.168703   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:16.183373   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:16.185053   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:16.665051   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:16.676846   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:16.766323   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:16.766619   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:16.766726   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:16.901939   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:17.172656   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:17.182759   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:17.182930   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:17.396937   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:17.670486   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:17.683357   18990 kapi.go:107] duration metric: took 35.004012687s to wait for kubernetes.io/minikube-addons=registry ...
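Each kapi.go:96 line in this log is one iteration of a per-addon label-selector poll; once every pod matching the selector has left Pending, kapi.go:107 prints the total wait duration, as it does here for kubernetes.io/minikube-addons=registry after roughly 35s. A rough sketch of such a selector wait with client-go (the helper name, interval, and Running-phase check are assumptions, not kapi.go's exact logic):

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForSelector polls all pods matching a label selector until each
// one is Running, mirroring the repeated kapi.go:96 lines above.
func waitForSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false // still "current state: Pending"
					break
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above polls roughly twice per second
	}
	return fmt.Errorf("pods matching %q not Running within %v", selector, timeout)
}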
	I0829 18:57:17.683499   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:17.896856   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:18.169838   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:18.181982   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:18.398377   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:18.669201   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:18.682926   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:18.873544   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:18.897487   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:19.169391   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:19.182271   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:19.397321   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:19.671702   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:19.683428   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:19.896931   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:20.169472   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:20.184524   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:20.396986   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:20.669107   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:20.682955   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:20.874152   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:20.897581   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:21.169129   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:21.183119   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:21.397077   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:21.670237   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:21.682417   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:21.896731   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:22.169136   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:22.183327   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:22.399358   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:22.669025   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:22.682684   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:22.897209   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:23.168554   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:23.183317   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:23.376737   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:23.398862   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:23.669638   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:23.684290   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:23.896788   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:24.168867   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:24.182641   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:24.397360   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:24.669632   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:24.682814   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:24.896660   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:25.169124   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:25.182227   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:25.397300   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:25.669095   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:25.682571   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:25.877447   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:25.897875   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:26.171514   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:26.182786   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:26.399838   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:26.671735   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:26.683905   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:26.899821   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:27.169555   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:27.182826   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:27.397336   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:27.740885   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:27.742666   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:27.880815   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:27.897328   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:28.168851   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:28.182865   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:28.396062   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:28.669492   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:28.683095   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:28.897589   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:29.169448   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:29.183980   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:29.397702   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:29.670608   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:29.773227   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:29.897807   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:30.169552   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:30.182629   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:30.373870   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:30.396379   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:30.669796   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:30.683989   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:30.897362   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:31.174101   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:31.183759   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:31.396966   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:31.753369   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:31.770022   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:31.897431   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:32.169668   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:32.182893   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:32.374522   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:32.397648   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:32.670261   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:32.685779   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:32.901809   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:33.169762   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:33.184838   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:33.397320   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:33.669836   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:33.681530   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:33.896850   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:34.169041   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:34.182892   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:34.396541   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:34.669142   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:34.683175   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:34.878877   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:34.898182   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:35.169747   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:35.183483   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:35.398901   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:35.670294   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:35.685029   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:35.902939   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:36.171155   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:36.183010   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:36.398954   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:36.669195   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:36.682801   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:36.897585   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:37.168576   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:37.182987   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:37.374713   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:37.397360   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:37.668592   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:37.683096   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:37.896428   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:38.169266   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:38.183407   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:38.396482   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:38.670980   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:38.690197   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:38.896964   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:39.170158   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:39.183345   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:39.407490   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:39.669563   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:39.682556   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:39.874581   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:39.897965   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:40.169903   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:40.183693   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:40.397585   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:40.669944   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:40.698528   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:41.329496   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:41.330614   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:41.331037   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:41.398345   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:41.669869   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:41.682009   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:41.876524   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:41.900286   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:42.169633   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:42.183277   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:42.397178   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:42.669713   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:42.683223   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:42.897441   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:43.169982   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:43.182572   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:43.398170   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:43.670150   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:43.682336   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:43.896982   18990 kapi.go:107] duration metric: took 59.504499728s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
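The four kapi waits interleaved above appear to run independently and finish at different times: registry completed at 18:57:17 (~35s), csi-hostpath-driver here after ~59.5s, while the gcp-auth and ingress-nginx polls continue below alongside the still-unready metrics-server pod.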
	I0829 18:57:44.169788   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:44.181970   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:44.374399   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:44.670424   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:44.683646   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:45.169286   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:45.182897   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:45.669754   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:45.683182   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:46.170200   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:46.182590   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:46.669378   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:46.682597   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:46.873706   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:47.169378   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:47.183205   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:47.669917   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:47.681862   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:48.170226   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:48.182041   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:48.668676   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:48.682964   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:48.875193   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:49.179977   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:49.188747   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:49.669429   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:49.682463   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:50.169368   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:50.183100   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:50.669811   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:50.683376   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:51.169326   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:51.182850   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:51.373942   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:52.006081   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:52.006844   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:52.170628   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:52.181892   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:52.669274   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:52.682776   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:53.169297   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:53.183257   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:53.374184   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:53.670600   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:53.682938   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:54.170077   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:54.182248   18990 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:54.670362   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:54.682906   18990 kapi.go:107] duration metric: took 1m12.004398431s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0829 18:57:55.191112   18990 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:57:55.678304   18990 kapi.go:107] duration metric: took 1m9.512783124s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 18:57:55.680462   18990 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-344587 cluster.
	I0829 18:57:55.681796   18990 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 18:57:55.683065   18990 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0829 18:57:55.684301   18990 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, ingress-dns, inspektor-gadget, helm-tiller, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0829 18:57:55.685410   18990 addons.go:510] duration metric: took 1m22.026796458s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner ingress-dns inspektor-gadget helm-tiller storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
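	The `gcp-auth-skip-secret` opt-out described in the output above is applied as a pod label. A minimal sketch, assuming the minikube profile is named addons-344587 (matching the cluster) and that a pod named my-pod exists; the label value "true" is an assumption here, since the output only names the key:
	    # my-pod is a hypothetical pod name; the label key comes from the gcp-auth output above
	    kubectl --context addons-344587 label pod my-pod gcp-auth-skip-secret=true
	    # Alternatively, remount credentials into existing pods, as the output suggests
	    minikube -p addons-344587 addons enable gcp-auth --refresh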
	I0829 18:57:55.873642   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:57.873758   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:00.374030   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:02.410643   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:04.873737   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:07.374926   18990 pod_ready.go:103] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:09.873926   18990 pod_ready.go:93] pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace has status "Ready":"True"
	I0829 18:58:09.873948   18990 pod_ready.go:82] duration metric: took 1m18.506392284s for pod "metrics-server-8988944d9-9tplt" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:09.873961   18990 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-z559z" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:09.879351   18990 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-z559z" in "kube-system" namespace has status "Ready":"True"
	I0829 18:58:09.879368   18990 pod_ready.go:82] duration metric: took 5.400164ms for pod "nvidia-device-plugin-daemonset-z559z" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:09.879384   18990 pod_ready.go:39] duration metric: took 1m27.157397179s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
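	The readiness polling recorded above can be reproduced by hand. A rough equivalent of minikube's pod_ready wait, using the pod name and the 6m0s budget from the log, and assuming the kubeconfig context addons-344587:
	    # Block until the pod reports the Ready condition, as pod_ready.go polls for
	    kubectl --context addons-344587 -n kube-system wait pod/metrics-server-8988944d9-9tplt --for=condition=Ready --timeout=6m0s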
	I0829 18:58:09.879399   18990 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:58:09.879429   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:58:09.879478   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:58:09.930035   18990 cri.go:89] found id: "ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:09.930059   18990 cri.go:89] found id: ""
	I0829 18:58:09.930070   18990 logs.go:276] 1 containers: [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24]
	I0829 18:58:09.930131   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:09.934705   18990 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:58:09.934774   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:58:09.974110   18990 cri.go:89] found id: "3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:09.974133   18990 cri.go:89] found id: ""
	I0829 18:58:09.974142   18990 logs.go:276] 1 containers: [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459]
	I0829 18:58:09.974198   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:09.978660   18990 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:58:09.978721   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:58:10.017468   18990 cri.go:89] found id: "edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:10.017489   18990 cri.go:89] found id: ""
	I0829 18:58:10.017499   18990 logs.go:276] 1 containers: [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1]
	I0829 18:58:10.017546   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:10.022568   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:58:10.022633   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:58:10.066173   18990 cri.go:89] found id: "46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:10.066193   18990 cri.go:89] found id: ""
	I0829 18:58:10.066200   18990 logs.go:276] 1 containers: [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef]
	I0829 18:58:10.066254   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:10.071876   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:58:10.071927   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:58:10.113139   18990 cri.go:89] found id: "e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:10.113158   18990 cri.go:89] found id: ""
	I0829 18:58:10.113164   18990 logs.go:276] 1 containers: [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565]
	I0829 18:58:10.113210   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:10.117643   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:58:10.117707   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:58:10.173282   18990 cri.go:89] found id: "79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:10.173301   18990 cri.go:89] found id: ""
	I0829 18:58:10.173308   18990 logs.go:276] 1 containers: [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24]
	I0829 18:58:10.173350   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:10.177760   18990 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:58:10.177826   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:58:10.219010   18990 cri.go:89] found id: ""
	I0829 18:58:10.219040   18990 logs.go:276] 0 containers: []
	W0829 18:58:10.219050   18990 logs.go:278] No container was found matching "kindnet"
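	Each lookup above pairs a crictl query with `which crictl`; the same probe can be run by hand on the node. A sketch assuming the minikube profile is named addons-344587; an empty result, as with kindnet here, simply means no matching container exists:
	    # List container IDs for one component, exactly as cri.go does
	    minikube -p addons-344587 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	    # Returns nothing on this cluster, matching the "0 containers" line above
	    minikube -p addons-344587 ssh -- sudo crictl ps -a --quiet --name=kindnet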
	I0829 18:58:10.219062   18990 logs.go:123] Gathering logs for kube-apiserver [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24] ...
	I0829 18:58:10.219078   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:10.277241   18990 logs.go:123] Gathering logs for kube-proxy [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565] ...
	I0829 18:58:10.277270   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:10.323859   18990 logs.go:123] Gathering logs for kube-controller-manager [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24] ...
	I0829 18:58:10.323886   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:10.385553   18990 logs.go:123] Gathering logs for container status ...
	I0829 18:58:10.385580   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:58:10.435083   18990 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:58:10.435110   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:58:11.402687   18990 logs.go:123] Gathering logs for kubelet ...
	I0829 18:58:11.402729   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 18:58:11.453024   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:11.453296   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:11.488836   18990 logs.go:123] Gathering logs for dmesg ...
	I0829 18:58:11.488870   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:58:11.504148   18990 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:58:11.504172   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:58:11.643790   18990 logs.go:123] Gathering logs for etcd [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459] ...
	I0829 18:58:11.643818   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:11.726389   18990 logs.go:123] Gathering logs for coredns [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1] ...
	I0829 18:58:11.726425   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:11.766070   18990 logs.go:123] Gathering logs for kube-scheduler [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef] ...
	I0829 18:58:11.766094   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:11.811796   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:11.811817   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 18:58:11.811865   18990 out.go:270] X Problems detected in kubelet:
	W0829 18:58:11.811879   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:11.811890   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:11.811902   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:11.811911   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
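	The gathering pass runs a fixed set of commands over SSH, all of which appear verbatim above and can be replayed on the node (e.g. inside `minikube ssh`):
	    # Tail one component's container logs by ID (ID taken from the crictl ps output)
	    sudo /usr/bin/crictl logs --tail 400 ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24
	    # Unit logs for the container runtime and the kubelet, with the same flags logs.go uses
	    sudo journalctl -u crio -n 400
	    sudo journalctl -u kubelet -n 400
	    # Warning-and-above kernel messages, matching the dmesg step
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400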
	I0829 18:58:21.813233   18990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:58:21.832761   18990 api_server.go:72] duration metric: took 1m48.174160591s to wait for apiserver process to appear ...
	I0829 18:58:21.832788   18990 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:58:21.832817   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:58:21.832862   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:58:21.873058   18990 cri.go:89] found id: "ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:21.873083   18990 cri.go:89] found id: ""
	I0829 18:58:21.873093   18990 logs.go:276] 1 containers: [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24]
	I0829 18:58:21.873154   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:21.877320   18990 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:58:21.877374   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:58:21.916655   18990 cri.go:89] found id: "3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:21.916684   18990 cri.go:89] found id: ""
	I0829 18:58:21.916692   18990 logs.go:276] 1 containers: [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459]
	I0829 18:58:21.916736   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:21.920999   18990 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:58:21.921045   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:58:21.965578   18990 cri.go:89] found id: "edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:21.965606   18990 cri.go:89] found id: ""
	I0829 18:58:21.965615   18990 logs.go:276] 1 containers: [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1]
	I0829 18:58:21.965669   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:21.969756   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:58:21.969822   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:58:22.017458   18990 cri.go:89] found id: "46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:22.017480   18990 cri.go:89] found id: ""
	I0829 18:58:22.017491   18990 logs.go:276] 1 containers: [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef]
	I0829 18:58:22.017549   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:22.021887   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:58:22.021956   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:58:22.059660   18990 cri.go:89] found id: "e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:22.059684   18990 cri.go:89] found id: ""
	I0829 18:58:22.059693   18990 logs.go:276] 1 containers: [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565]
	I0829 18:58:22.059748   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:22.063706   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:58:22.063759   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:58:22.099570   18990 cri.go:89] found id: "79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:22.099596   18990 cri.go:89] found id: ""
	I0829 18:58:22.099606   18990 logs.go:276] 1 containers: [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24]
	I0829 18:58:22.099660   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:22.103920   18990 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:58:22.103979   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:58:22.140807   18990 cri.go:89] found id: ""
	I0829 18:58:22.140837   18990 logs.go:276] 0 containers: []
	W0829 18:58:22.140849   18990 logs.go:278] No container was found matching "kindnet"
	I0829 18:58:22.140860   18990 logs.go:123] Gathering logs for kube-controller-manager [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24] ...
	I0829 18:58:22.140874   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:22.204452   18990 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:58:22.204483   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:58:23.279114   18990 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:58:23.279161   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:58:23.396916   18990 logs.go:123] Gathering logs for kube-apiserver [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24] ...
	I0829 18:58:23.396950   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:23.445310   18990 logs.go:123] Gathering logs for etcd [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459] ...
	I0829 18:58:23.445352   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:23.513636   18990 logs.go:123] Gathering logs for coredns [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1] ...
	I0829 18:58:23.513664   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:23.554990   18990 logs.go:123] Gathering logs for kube-scheduler [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef] ...
	I0829 18:58:23.555020   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:23.601432   18990 logs.go:123] Gathering logs for kube-proxy [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565] ...
	I0829 18:58:23.601464   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:23.639619   18990 logs.go:123] Gathering logs for kubelet ...
	I0829 18:58:23.639647   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 18:58:23.690102   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:23.690271   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:23.728666   18990 logs.go:123] Gathering logs for dmesg ...
	I0829 18:58:23.728701   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:58:23.743456   18990 logs.go:123] Gathering logs for container status ...
	I0829 18:58:23.743482   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:58:23.796892   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:23.796919   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 18:58:23.796981   18990 out.go:270] X Problems detected in kubelet:
	W0829 18:58:23.796994   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:23.797004   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:23.797016   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:23.797026   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:58:33.797922   18990 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0829 18:58:33.802928   18990 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I0829 18:58:33.803830   18990 api_server.go:141] control plane version: v1.31.0
	I0829 18:58:33.803850   18990 api_server.go:131] duration metric: took 11.971056831s to wait for apiserver health ...
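	The healthz probe above hits https://192.168.39.172:8443/healthz directly; an equivalent check that reuses kubectl's credentials instead of raw TLS is (assuming the same context):
	    # Same endpoint as api_server.go's probe, via kubectl's raw API access
	    kubectl --context addons-344587 get --raw=/healthz
	    # Prints "ok" on a healthy control plane, matching the 200 response above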
	I0829 18:58:33.803858   18990 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:58:33.803876   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:58:33.803917   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:58:33.854225   18990 cri.go:89] found id: "ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:33.854244   18990 cri.go:89] found id: ""
	I0829 18:58:33.854250   18990 logs.go:276] 1 containers: [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24]
	I0829 18:58:33.854290   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:33.858238   18990 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:58:33.858286   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:58:33.900025   18990 cri.go:89] found id: "3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:33.900045   18990 cri.go:89] found id: ""
	I0829 18:58:33.900054   18990 logs.go:276] 1 containers: [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459]
	I0829 18:58:33.900094   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:33.904590   18990 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:58:33.904641   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:58:33.942867   18990 cri.go:89] found id: "edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:33.942888   18990 cri.go:89] found id: ""
	I0829 18:58:33.942895   18990 logs.go:276] 1 containers: [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1]
	I0829 18:58:33.942953   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:33.947338   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:58:33.947388   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:58:33.991266   18990 cri.go:89] found id: "46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:33.991285   18990 cri.go:89] found id: ""
	I0829 18:58:33.991292   18990 logs.go:276] 1 containers: [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef]
	I0829 18:58:33.991334   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:33.995550   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:58:33.995601   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:58:34.034277   18990 cri.go:89] found id: "e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:34.034294   18990 cri.go:89] found id: ""
	I0829 18:58:34.034302   18990 logs.go:276] 1 containers: [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565]
	I0829 18:58:34.034341   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:34.038466   18990 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:58:34.038546   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:58:34.078562   18990 cri.go:89] found id: "79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:34.078579   18990 cri.go:89] found id: ""
	I0829 18:58:34.078586   18990 logs.go:276] 1 containers: [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24]
	I0829 18:58:34.078630   18990 ssh_runner.go:195] Run: which crictl
	I0829 18:58:34.083366   18990 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:58:34.083423   18990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:58:34.145061   18990 cri.go:89] found id: ""
	I0829 18:58:34.145090   18990 logs.go:276] 0 containers: []
	W0829 18:58:34.145099   18990 logs.go:278] No container was found matching "kindnet"
	I0829 18:58:34.145106   18990 logs.go:123] Gathering logs for kubelet ...
	I0829 18:58:34.145117   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 18:58:34.193492   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:34.193696   18990 logs.go:138] Found kubelet problem: Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:34.230073   18990 logs.go:123] Gathering logs for kube-scheduler [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef] ...
	I0829 18:58:34.230109   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef"
	I0829 18:58:34.281725   18990 logs.go:123] Gathering logs for kube-proxy [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565] ...
	I0829 18:58:34.281758   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565"
	I0829 18:58:34.325201   18990 logs.go:123] Gathering logs for container status ...
	I0829 18:58:34.325228   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:58:34.371370   18990 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:58:34.371400   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:58:35.159659   18990 logs.go:123] Gathering logs for dmesg ...
	I0829 18:58:35.159722   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:58:35.175376   18990 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:58:35.175403   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:58:35.302779   18990 logs.go:123] Gathering logs for kube-apiserver [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24] ...
	I0829 18:58:35.302810   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24"
	I0829 18:58:35.362682   18990 logs.go:123] Gathering logs for etcd [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459] ...
	I0829 18:58:35.362711   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459"
	I0829 18:58:35.435174   18990 logs.go:123] Gathering logs for coredns [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1] ...
	I0829 18:58:35.435207   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1"
	I0829 18:58:35.475282   18990 logs.go:123] Gathering logs for kube-controller-manager [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24] ...
	I0829 18:58:35.475310   18990 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24"
	I0829 18:58:35.539640   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:35.539666   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 18:58:35.539716   18990 out.go:270] X Problems detected in kubelet:
	W0829 18:58:35.539724   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: W0829 18:56:33.791012    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-344587" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-344587' and this object
	W0829 18:58:35.539735   18990 out.go:270]   Aug 29 18:56:33 addons-344587 kubelet[1210]: E0829 18:56:33.791070    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-344587\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-344587' and this object" logger="UnhandledError"
	I0829 18:58:35.539748   18990 out.go:358] Setting ErrFile to fd 2...
	I0829 18:58:35.539754   18990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:58:45.550232   18990 system_pods.go:59] 18 kube-system pods found
	I0829 18:58:45.550261   18990 system_pods.go:61] "coredns-6f6b679f8f-t9nhw" [01782eed-98db-4768-8ab6-bd429fe58305] Running
	I0829 18:58:45.550266   18990 system_pods.go:61] "csi-hostpath-attacher-0" [318ff00f-e5be-4029-b58b-30185cb48a7f] Running
	I0829 18:58:45.550269   18990 system_pods.go:61] "csi-hostpath-resizer-0" [ba8fc44d-cd38-469f-8d42-7aedd5d81a06] Running
	I0829 18:58:45.550272   18990 system_pods.go:61] "csi-hostpathplugin-96vz6" [207fbe26-1d1e-48c7-8bfd-4621264e0739] Running
	I0829 18:58:45.550275   18990 system_pods.go:61] "etcd-addons-344587" [332f8ecf-d239-4d45-b8c2-e023c3849b2b] Running
	I0829 18:58:45.550278   18990 system_pods.go:61] "kube-apiserver-addons-344587" [cec380f4-ded8-4496-b6c5-54ebeeecb720] Running
	I0829 18:58:45.550281   18990 system_pods.go:61] "kube-controller-manager-addons-344587" [4812d16d-522f-44e2-b353-798732857218] Running
	I0829 18:58:45.550284   18990 system_pods.go:61] "kube-ingress-dns-minikube" [2aeaeabc-ac3f-4f8a-88ee-84fe5d623dd6] Running
	I0829 18:58:45.550286   18990 system_pods.go:61] "kube-proxy-lgcxw" [0be1dddc-793d-471e-aa16-9752951fb72a] Running
	I0829 18:58:45.550289   18990 system_pods.go:61] "kube-scheduler-addons-344587" [c36a46ec-4466-46f5-ba95-40110040eb06] Running
	I0829 18:58:45.550291   18990 system_pods.go:61] "metrics-server-8988944d9-9tplt" [427d61c8-9ff3-4718-9faf-896d20af6cdc] Running
	I0829 18:58:45.550295   18990 system_pods.go:61] "nvidia-device-plugin-daemonset-z559z" [f30c9660-ea3d-40c2-9842-bcf8bb18c0b6] Running
	I0829 18:58:45.550297   18990 system_pods.go:61] "registry-6fb4cdfc84-dmlc6" [074412f0-2988-4497-a2bb-abd86ddc18ab] Running
	I0829 18:58:45.550300   18990 system_pods.go:61] "registry-proxy-x5bqm" [45f795aa-aca5-41b5-a455-89b285ce9531] Running
	I0829 18:58:45.550303   18990 system_pods.go:61] "snapshot-controller-56fcc65765-8fbbn" [ed961d54-d7a4-485f-bb8e-e7195ed4e80e] Running
	I0829 18:58:45.550307   18990 system_pods.go:61] "snapshot-controller-56fcc65765-gn5lq" [bf5c7495-59fd-4151-abce-7cf6072e995e] Running
	I0829 18:58:45.550309   18990 system_pods.go:61] "storage-provisioner" [14e72aaf-6cd6-4740-a9d5-e4a739fed914] Running
	I0829 18:58:45.550312   18990 system_pods.go:61] "tiller-deploy-b48cc5f79-bxws5" [d2380d68-348a-4dc1-8c40-1a4e9fa6ab04] Running
	I0829 18:58:45.550318   18990 system_pods.go:74] duration metric: took 11.746455029s to wait for pod list to return data ...
	I0829 18:58:45.550328   18990 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:58:45.553072   18990 default_sa.go:45] found service account: "default"
	I0829 18:58:45.553088   18990 default_sa.go:55] duration metric: took 2.755882ms for default service account to be created ...
	I0829 18:58:45.553095   18990 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:58:45.559715   18990 system_pods.go:86] 18 kube-system pods found
	I0829 18:58:45.559734   18990 system_pods.go:89] "coredns-6f6b679f8f-t9nhw" [01782eed-98db-4768-8ab6-bd429fe58305] Running
	I0829 18:58:45.559740   18990 system_pods.go:89] "csi-hostpath-attacher-0" [318ff00f-e5be-4029-b58b-30185cb48a7f] Running
	I0829 18:58:45.559744   18990 system_pods.go:89] "csi-hostpath-resizer-0" [ba8fc44d-cd38-469f-8d42-7aedd5d81a06] Running
	I0829 18:58:45.559748   18990 system_pods.go:89] "csi-hostpathplugin-96vz6" [207fbe26-1d1e-48c7-8bfd-4621264e0739] Running
	I0829 18:58:45.559751   18990 system_pods.go:89] "etcd-addons-344587" [332f8ecf-d239-4d45-b8c2-e023c3849b2b] Running
	I0829 18:58:45.559756   18990 system_pods.go:89] "kube-apiserver-addons-344587" [cec380f4-ded8-4496-b6c5-54ebeeecb720] Running
	I0829 18:58:45.559760   18990 system_pods.go:89] "kube-controller-manager-addons-344587" [4812d16d-522f-44e2-b353-798732857218] Running
	I0829 18:58:45.559764   18990 system_pods.go:89] "kube-ingress-dns-minikube" [2aeaeabc-ac3f-4f8a-88ee-84fe5d623dd6] Running
	I0829 18:58:45.559767   18990 system_pods.go:89] "kube-proxy-lgcxw" [0be1dddc-793d-471e-aa16-9752951fb72a] Running
	I0829 18:58:45.559771   18990 system_pods.go:89] "kube-scheduler-addons-344587" [c36a46ec-4466-46f5-ba95-40110040eb06] Running
	I0829 18:58:45.559774   18990 system_pods.go:89] "metrics-server-8988944d9-9tplt" [427d61c8-9ff3-4718-9faf-896d20af6cdc] Running
	I0829 18:58:45.559778   18990 system_pods.go:89] "nvidia-device-plugin-daemonset-z559z" [f30c9660-ea3d-40c2-9842-bcf8bb18c0b6] Running
	I0829 18:58:45.559781   18990 system_pods.go:89] "registry-6fb4cdfc84-dmlc6" [074412f0-2988-4497-a2bb-abd86ddc18ab] Running
	I0829 18:58:45.559785   18990 system_pods.go:89] "registry-proxy-x5bqm" [45f795aa-aca5-41b5-a455-89b285ce9531] Running
	I0829 18:58:45.559791   18990 system_pods.go:89] "snapshot-controller-56fcc65765-8fbbn" [ed961d54-d7a4-485f-bb8e-e7195ed4e80e] Running
	I0829 18:58:45.559794   18990 system_pods.go:89] "snapshot-controller-56fcc65765-gn5lq" [bf5c7495-59fd-4151-abce-7cf6072e995e] Running
	I0829 18:58:45.559797   18990 system_pods.go:89] "storage-provisioner" [14e72aaf-6cd6-4740-a9d5-e4a739fed914] Running
	I0829 18:58:45.559801   18990 system_pods.go:89] "tiller-deploy-b48cc5f79-bxws5" [d2380d68-348a-4dc1-8c40-1a4e9fa6ab04] Running
	I0829 18:58:45.559806   18990 system_pods.go:126] duration metric: took 6.706766ms to wait for k8s-apps to be running ...
	I0829 18:58:45.559815   18990 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:58:45.559853   18990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:58:45.577199   18990 system_svc.go:56] duration metric: took 17.376357ms WaitForService to wait for kubelet
	I0829 18:58:45.577228   18990 kubeadm.go:582] duration metric: took 2m11.91863045s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:58:45.577249   18990 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:58:45.580335   18990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:58:45.580362   18990 node_conditions.go:123] node cpu capacity is 2
	I0829 18:58:45.580377   18990 node_conditions.go:105] duration metric: took 3.122527ms to run NodePressure ...
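	The NodePressure step reads these figures from the node's capacity block; the same values (17734596Ki ephemeral storage, 2 CPUs) can be printed with a jsonpath query, a sketch assuming the single node addons-344587:
	    # Print the capacity block that node_conditions.go inspects
	    kubectl --context addons-344587 get node addons-344587 -o jsonpath='{.status.capacity}{"\n"}'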
	I0829 18:58:45.580391   18990 start.go:241] waiting for startup goroutines ...
	I0829 18:58:45.580403   18990 start.go:246] waiting for cluster config update ...
	I0829 18:58:45.580427   18990 start.go:255] writing updated cluster config ...
	I0829 18:58:45.580716   18990 ssh_runner.go:195] Run: rm -f paused
	I0829 18:58:45.628072   18990 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:58:45.630291   18990 out.go:177] * Done! kubectl is now configured to use "addons-344587" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.166172192Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9522214d-ebd9-47be-91e6-9636b5b4ce90 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.167554040Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7149295c-df45-4c33-b71e-d8ab0312f181 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.168783588Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958760168753398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7149295c-df45-4c33-b71e-d8ab0312f181 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.169373090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b527205-d9b1-4c1d-8f82-69b50a99700c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.169612160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b527205-d9b1-4c1d-8f82-69b50a99700c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.170251206Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5cd8313ff37bae72eced3f03f75a5b3dda5256094e6c9637614a07540de04d6b,PodSandboxId:31a4794e76c5a6d4324d26b7e11b8620a4b1fbb349a6204ccff8f657aac6e767,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724958626537636688,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-4lddf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1287ae1-fe54-458a-97b2-472886127905,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ab3a824e7e135eccb2feebf6514044e173d635da6a72ba4865fa34b9d7b554,PodSandboxId:9f57bccc4a62cb14a1b0d1f9d9e7d3383fc17afb7d58885da58d33d9aaba7e6b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724958486293343065,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 315329e4-6bc2-4164-a37f-2d9d7857eba1,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c8e7bcbfebef995192df5d64f603c9d08c91b1a799ccb037398d00858d33ba,PodSandboxId:fe391f299e153d6c0972dbdf457c92e1ad975a4e3574b15f1ba653832b929f55,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724957874945557865,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-m8795,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 3289f49d-61c4-4693-818b-5a8e73d95410,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54164e21bd9a6b72fe31085cb8cbe33df2bbc8670c01312f597d476f78b5d1c,PodSandboxId:05bebeb94a32a8d55a95ed5362201cbf7e480ead93a4fbbbeeee251aac433a08,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724957825221794572,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9tplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427d61c8-9ff3-4718-9faf-896d20af6cdc,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a0a245e481d700614dae7bfc6d7d4bbe9f074615c88e1373f761abb63e08cc,PodSandboxId:caa68615ea5869f55238af371cf096755b420e33837f56a5391355cc5270a453,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1724957800386945011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e72aaf-6cd6-4740-a9d5-e4a739fed914,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1,PodSandboxId:78300c884569d8c916441667591ef1a6cdfdc769a616ccf684cd3aeb6ee173a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:17249577979
14552214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t9nhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01782eed-98db-4768-8ab6-bd429fe58305,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565,PodSandboxId:2f0f516b497dee578fa502336c368d6a64299cb73064fc27e94ba33dcd0c9623,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560
a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724957793494579868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lgcxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be1dddc-793d-471e-aa16-9752951fb72a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef,PodSandboxId:bc4dfc643a4f95574b0c4d436e0af5e463ff66fcda536cf28cb9fd9980b55ae3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724957783507566907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433e7620b7da2027dd73dc3bddb2f997,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24,PodSandboxId:f9feaafb78b8d9b6c588088d4158b1b980da872d4034937746daf0b1ac5998c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724957783501034123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df35d1b995ba56a5ea532995ddbeb880,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459,PodSandboxId:5b90ade16a1ec3848984ac4ccc079dbc33282af9bb7be159d955ed993979dd7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724957783509496209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3105eec1aeed59eaefbe2b389301917,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24,PodSandboxId:948b38ffb05be778e3450b635de34556972385a53111e037a04d34383f52c377,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724957783431784942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba40dc401b574376e18cc2969d87be7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b527205-d9b1-4c1d-8f82-69b50a99700c name=/runtime.v1.RuntimeService/ListContainers
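	The debug entries in this section are the server side of ordinary crictl calls; the same RPCs (Version, ImageFsInfo, ListContainers) can be issued from the node. A sketch using standard crictl subcommands, run as root on the node:
	    # Triggers the Version request logged above
	    sudo crictl version
	    # Triggers ImageFsInfo, reporting the overlay-images filesystem usage shown above
	    sudo crictl imagefsinfo
	    # Unfiltered listing, matching the ListContainers request with no filters
	    sudo crictl ps -a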
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.203539017Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=13d5be6f-5493-47d2-a3c8-a979a200a1b5 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.203632180Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=13d5be6f-5493-47d2-a3c8-a979a200a1b5 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.204571217Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6dca1902-022c-4ab3-a176-31c034ee00ea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.205936358Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958760205907950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6dca1902-022c-4ab3-a176-31c034ee00ea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.206578842Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4d2dccc-1831-450e-a010-1b24c827f4e2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.206651571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4d2dccc-1831-450e-a010-1b24c827f4e2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.206968460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5cd8313ff37bae72eced3f03f75a5b3dda5256094e6c9637614a07540de04d6b,PodSandboxId:31a4794e76c5a6d4324d26b7e11b8620a4b1fbb349a6204ccff8f657aac6e767,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724958626537636688,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-4lddf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1287ae1-fe54-458a-97b2-472886127905,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ab3a824e7e135eccb2feebf6514044e173d635da6a72ba4865fa34b9d7b554,PodSandboxId:9f57bccc4a62cb14a1b0d1f9d9e7d3383fc17afb7d58885da58d33d9aaba7e6b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724958486293343065,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 315329e4-6bc2-4164-a37f-2d9d7857eba1,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c8e7bcbfebef995192df5d64f603c9d08c91b1a799ccb037398d00858d33ba,PodSandboxId:fe391f299e153d6c0972dbdf457c92e1ad975a4e3574b15f1ba653832b929f55,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724957874945557865,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-m8795,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 3289f49d-61c4-4693-818b-5a8e73d95410,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54164e21bd9a6b72fe31085cb8cbe33df2bbc8670c01312f597d476f78b5d1c,PodSandboxId:05bebeb94a32a8d55a95ed5362201cbf7e480ead93a4fbbbeeee251aac433a08,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724957825221794572,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9tplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427d61c8-9ff3-4718-9faf-896d20af6cdc,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a0a245e481d700614dae7bfc6d7d4bbe9f074615c88e1373f761abb63e08cc,PodSandboxId:caa68615ea5869f55238af371cf096755b420e33837f56a5391355cc5270a453,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1724957800386945011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e72aaf-6cd6-4740-a9d5-e4a739fed914,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1,PodSandboxId:78300c884569d8c916441667591ef1a6cdfdc769a616ccf684cd3aeb6ee173a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:17249577979
14552214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t9nhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01782eed-98db-4768-8ab6-bd429fe58305,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565,PodSandboxId:2f0f516b497dee578fa502336c368d6a64299cb73064fc27e94ba33dcd0c9623,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560
a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724957793494579868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lgcxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be1dddc-793d-471e-aa16-9752951fb72a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef,PodSandboxId:bc4dfc643a4f95574b0c4d436e0af5e463ff66fcda536cf28cb9fd9980b55ae3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724957783507566907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433e7620b7da2027dd73dc3bddb2f997,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24,PodSandboxId:f9feaafb78b8d9b6c588088d4158b1b980da872d4034937746daf0b1ac5998c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724957783501034123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df35d1b995ba56a5ea532995ddbeb880,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459,PodSandboxId:5b90ade16a1ec3848984ac4ccc079dbc33282af9bb7be159d955ed993979dd7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724957783509496209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3105eec1aeed59eaefbe2b389301917,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24,PodSandboxId:948b38ffb05be778e3450b635de34556972385a53111e037a04d34383f52c377,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724957783431784942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba40dc401b574376e18cc2969d87be7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4d2dccc-1831-450e-a010-1b24c827f4e2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.220418779Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=635560db-567b-4559-a9de-ce0bf8016180 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.220909550Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:31a4794e76c5a6d4324d26b7e11b8620a4b1fbb349a6204ccff8f657aac6e767,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-4lddf,Uid:d1287ae1-fe54-458a-97b2-472886127905,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724958624734063891,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-4lddf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1287ae1-fe54-458a-97b2-472886127905,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:10:24.417505653Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f57bccc4a62cb14a1b0d1f9d9e7d3383fc17afb7d58885da58d33d9aaba7e6b,Metadata:&PodSandboxMetadata{Name:nginx,Uid:315329e4-6bc2-4164-a37f-2d9d7857eba1,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1724958483481374720,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 315329e4-6bc2-4164-a37f-2d9d7857eba1,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:08:03.170774949Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1a023decc227ee65b8b8da3fe69d6546df1838e6644e15b849dbcebe2f06dc50,Metadata:&PodSandboxMetadata{Name:busybox,Uid:ea5b7a84-ebbb-47e9-92c8-ee98926439ae,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724957926228593336,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea5b7a84-ebbb-47e9-92c8-ee98926439ae,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T18:58:45.915867096Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe391f299e153d6c09
72dbdf457c92e1ad975a4e3574b15f1ba653832b929f55,Metadata:&PodSandboxMetadata{Name:gcp-auth-89d5ffd79-m8795,Uid:3289f49d-61c4-4693-818b-5a8e73d95410,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724957870838592730,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-89d5ffd79-m8795,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 3289f49d-61c4-4693-818b-5a8e73d95410,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 89d5ffd79,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T18:56:46.092314452Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:05bebeb94a32a8d55a95ed5362201cbf7e480ead93a4fbbbeeee251aac433a08,Metadata:&PodSandboxMetadata{Name:metrics-server-8988944d9-9tplt,Uid:427d61c8-9ff3-4718-9faf-896d20af6cdc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724957799569187178,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-ser
ver-8988944d9-9tplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427d61c8-9ff3-4718-9faf-896d20af6cdc,k8s-app: metrics-server,pod-template-hash: 8988944d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T18:56:38.945003784Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:caa68615ea5869f55238af371cf096755b420e33837f56a5391355cc5270a453,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:14e72aaf-6cd6-4740-a9d5-e4a739fed914,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724957799097181863,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e72aaf-6cd6-4740-a9d5-e4a739fed914,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels
\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-29T18:56:38.478565215Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:78300c884569d8c916441667591ef1a6cdfdc769a616ccf684cd3aeb6ee173a1,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-t9nhw,Uid:01782eed-98db-4768-8ab6-bd429fe58305,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724957794955797634,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubern
etes.pod.name: coredns-6f6b679f8f-t9nhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01782eed-98db-4768-8ab6-bd429fe58305,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T18:56:33.746263334Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f0f516b497dee578fa502336c368d6a64299cb73064fc27e94ba33dcd0c9623,Metadata:&PodSandboxMetadata{Name:kube-proxy-lgcxw,Uid:0be1dddc-793d-471e-aa16-9752951fb72a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724957793393483126,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lgcxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be1dddc-793d-471e-aa16-9752951fb72a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T18:56:33.081164990Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSand
box{Id:bc4dfc643a4f95574b0c4d436e0af5e463ff66fcda536cf28cb9fd9980b55ae3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-344587,Uid:433e7620b7da2027dd73dc3bddb2f997,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724957783283008315,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433e7620b7da2027dd73dc3bddb2f997,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 433e7620b7da2027dd73dc3bddb2f997,kubernetes.io/config.seen: 2024-08-29T18:56:22.799154155Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5b90ade16a1ec3848984ac4ccc079dbc33282af9bb7be159d955ed993979dd7b,Metadata:&PodSandboxMetadata{Name:etcd-addons-344587,Uid:b3105eec1aeed59eaefbe2b389301917,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724957783281860985,Labels:map[string]string{component: etcd,io.kubernetes.
container.name: POD,io.kubernetes.pod.name: etcd-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3105eec1aeed59eaefbe2b389301917,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.172:2379,kubernetes.io/config.hash: b3105eec1aeed59eaefbe2b389301917,kubernetes.io/config.seen: 2024-08-29T18:56:22.799148481Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f9feaafb78b8d9b6c588088d4158b1b980da872d4034937746daf0b1ac5998c9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-344587,Uid:df35d1b995ba56a5ea532995ddbeb880,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724957783271383913,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df35d1b995ba56a5ea532995ddbeb880,tier: control-plane,},Annotations:ma
p[string]string{kubernetes.io/config.hash: df35d1b995ba56a5ea532995ddbeb880,kubernetes.io/config.seen: 2024-08-29T18:56:22.799153331Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:948b38ffb05be778e3450b635de34556972385a53111e037a04d34383f52c377,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-344587,Uid:9ba40dc401b574376e18cc2969d87be7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724957783270086677,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba40dc401b574376e18cc2969d87be7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.172:8443,kubernetes.io/config.hash: 9ba40dc401b574376e18cc2969d87be7,kubernetes.io/config.seen: 2024-08-29T18:56:22.799151907Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/in
terceptors.go:74" id=635560db-567b-4559-a9de-ce0bf8016180 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.221591476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce550941-c7a1-4e2b-a292-f0dda9d73e47 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.221660478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce550941-c7a1-4e2b-a292-f0dda9d73e47 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.221942281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5cd8313ff37bae72eced3f03f75a5b3dda5256094e6c9637614a07540de04d6b,PodSandboxId:31a4794e76c5a6d4324d26b7e11b8620a4b1fbb349a6204ccff8f657aac6e767,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724958626537636688,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-4lddf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1287ae1-fe54-458a-97b2-472886127905,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ab3a824e7e135eccb2feebf6514044e173d635da6a72ba4865fa34b9d7b554,PodSandboxId:9f57bccc4a62cb14a1b0d1f9d9e7d3383fc17afb7d58885da58d33d9aaba7e6b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724958486293343065,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 315329e4-6bc2-4164-a37f-2d9d7857eba1,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c8e7bcbfebef995192df5d64f603c9d08c91b1a799ccb037398d00858d33ba,PodSandboxId:fe391f299e153d6c0972dbdf457c92e1ad975a4e3574b15f1ba653832b929f55,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724957874945557865,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-m8795,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 3289f49d-61c4-4693-818b-5a8e73d95410,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54164e21bd9a6b72fe31085cb8cbe33df2bbc8670c01312f597d476f78b5d1c,PodSandboxId:05bebeb94a32a8d55a95ed5362201cbf7e480ead93a4fbbbeeee251aac433a08,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724957825221794572,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9tplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427d61c8-9ff3-4718-9faf-896d20af6cdc,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a0a245e481d700614dae7bfc6d7d4bbe9f074615c88e1373f761abb63e08cc,PodSandboxId:caa68615ea5869f55238af371cf096755b420e33837f56a5391355cc5270a453,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1724957800386945011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e72aaf-6cd6-4740-a9d5-e4a739fed914,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1,PodSandboxId:78300c884569d8c916441667591ef1a6cdfdc769a616ccf684cd3aeb6ee173a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:17249577979
14552214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t9nhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01782eed-98db-4768-8ab6-bd429fe58305,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565,PodSandboxId:2f0f516b497dee578fa502336c368d6a64299cb73064fc27e94ba33dcd0c9623,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560
a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724957793494579868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lgcxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be1dddc-793d-471e-aa16-9752951fb72a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef,PodSandboxId:bc4dfc643a4f95574b0c4d436e0af5e463ff66fcda536cf28cb9fd9980b55ae3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724957783507566907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433e7620b7da2027dd73dc3bddb2f997,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24,PodSandboxId:f9feaafb78b8d9b6c588088d4158b1b980da872d4034937746daf0b1ac5998c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724957783501034123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df35d1b995ba56a5ea532995ddbeb880,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459,PodSandboxId:5b90ade16a1ec3848984ac4ccc079dbc33282af9bb7be159d955ed993979dd7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724957783509496209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3105eec1aeed59eaefbe2b389301917,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24,PodSandboxId:948b38ffb05be778e3450b635de34556972385a53111e037a04d34383f52c377,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724957783431784942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba40dc401b574376e18cc2969d87be7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce550941-c7a1-4e2b-a292-f0dda9d73e47 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.254621154Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5e16618-853d-4b2c-b2ee-d7d1d605246a name=/runtime.v1.RuntimeService/Version
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.254753351Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5e16618-853d-4b2c-b2ee-d7d1d605246a name=/runtime.v1.RuntimeService/Version
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.256459188Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5f757fd-b000-4fd7-9369-e1f7584e8832 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.257665539Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958760257638662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5f757fd-b000-4fd7-9369-e1f7584e8832 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.258322182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66d48a59-8ad8-44ee-bbfa-d3ca1ba157a6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.258392955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66d48a59-8ad8-44ee-bbfa-d3ca1ba157a6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:12:40 addons-344587 crio[658]: time="2024-08-29 19:12:40.258815926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5cd8313ff37bae72eced3f03f75a5b3dda5256094e6c9637614a07540de04d6b,PodSandboxId:31a4794e76c5a6d4324d26b7e11b8620a4b1fbb349a6204ccff8f657aac6e767,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724958626537636688,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-4lddf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1287ae1-fe54-458a-97b2-472886127905,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ab3a824e7e135eccb2feebf6514044e173d635da6a72ba4865fa34b9d7b554,PodSandboxId:9f57bccc4a62cb14a1b0d1f9d9e7d3383fc17afb7d58885da58d33d9aaba7e6b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724958486293343065,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 315329e4-6bc2-4164-a37f-2d9d7857eba1,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c8e7bcbfebef995192df5d64f603c9d08c91b1a799ccb037398d00858d33ba,PodSandboxId:fe391f299e153d6c0972dbdf457c92e1ad975a4e3574b15f1ba653832b929f55,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724957874945557865,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-m8795,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 3289f49d-61c4-4693-818b-5a8e73d95410,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54164e21bd9a6b72fe31085cb8cbe33df2bbc8670c01312f597d476f78b5d1c,PodSandboxId:05bebeb94a32a8d55a95ed5362201cbf7e480ead93a4fbbbeeee251aac433a08,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724957825221794572,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9tplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427d61c8-9ff3-4718-9faf-896d20af6cdc,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a0a245e481d700614dae7bfc6d7d4bbe9f074615c88e1373f761abb63e08cc,PodSandboxId:caa68615ea5869f55238af371cf096755b420e33837f56a5391355cc5270a453,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1724957800386945011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e72aaf-6cd6-4740-a9d5-e4a739fed914,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1,PodSandboxId:78300c884569d8c916441667591ef1a6cdfdc769a616ccf684cd3aeb6ee173a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:17249577979
14552214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t9nhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01782eed-98db-4768-8ab6-bd429fe58305,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565,PodSandboxId:2f0f516b497dee578fa502336c368d6a64299cb73064fc27e94ba33dcd0c9623,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560
a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724957793494579868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lgcxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be1dddc-793d-471e-aa16-9752951fb72a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef,PodSandboxId:bc4dfc643a4f95574b0c4d436e0af5e463ff66fcda536cf28cb9fd9980b55ae3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724957783507566907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433e7620b7da2027dd73dc3bddb2f997,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24,PodSandboxId:f9feaafb78b8d9b6c588088d4158b1b980da872d4034937746daf0b1ac5998c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724957783501034123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df35d1b995ba56a5ea532995ddbeb880,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459,PodSandboxId:5b90ade16a1ec3848984ac4ccc079dbc33282af9bb7be159d955ed993979dd7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724957783509496209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3105eec1aeed59eaefbe2b389301917,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24,PodSandboxId:948b38ffb05be778e3450b635de34556972385a53111e037a04d34383f52c377,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724957783431784942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-344587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba40dc401b574376e18cc2969d87be7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66d48a59-8ad8-44ee-bbfa-d3ca1ba157a6 name=/runtime.v1.RuntimeService/ListContainers
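
The ListContainers, ListPodSandbox, Version, and ImageFsInfo request/response pairs above are routine CRI polling of cri-o over its unix socket (kubelet and the stats pipeline issue these continuously); crio logs each pair at debug level through its otel-collector interceptor. As a minimal sketch of such a call, assuming the k8s.io/cri-api Go module and the cri-socket path from the node annotations below (this is illustrative client code, not minikube's own):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the socket named in the node's cri-socket annotation.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Mirrors the /runtime.v1.RuntimeService/ListContainers requests in the
	// log that filter on State:CONTAINER_RUNNING.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			State: &runtimeapi.ContainerStateValue{
				State: runtimeapi.ContainerState_CONTAINER_RUNNING,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%.13s  %s\n", c.Id, c.Metadata.Name)
	}
}

Each Container in the response carries the same fields dumped verbatim in the log: Id, PodSandboxId, ImageRef, State, CreatedAt, plus the io.kubernetes.* labels and annotations.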
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5cd8313ff37ba       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   31a4794e76c5a       hello-world-app-55bf9c44b4-4lddf
	89ab3a824e7e1       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         4 minutes ago       Running             nginx                     0                   9f57bccc4a62c       nginx
	e9c8e7bcbfebe       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            14 minutes ago      Running             gcp-auth                  0                   fe391f299e153       gcp-auth-89d5ffd79-m8795
	d54164e21bd9a       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   15 minutes ago      Running             metrics-server            0                   05bebeb94a32a       metrics-server-8988944d9-9tplt
	15a0a245e481d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   caa68615ea586       storage-provisioner
	edffa46b48365       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        16 minutes ago      Running             coredns                   0                   78300c884569d       coredns-6f6b679f8f-t9nhw
	e6b94afd2073c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        16 minutes ago      Running             kube-proxy                0                   2f0f516b497de       kube-proxy-lgcxw
	3a9bf9036a456       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        16 minutes ago      Running             etcd                      0                   5b90ade16a1ec       etcd-addons-344587
	46ea401f11d33       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        16 minutes ago      Running             kube-scheduler            0                   bc4dfc643a4f9       kube-scheduler-addons-344587
	79990e8cc7f54       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        16 minutes ago      Running             kube-controller-manager   0                   f9feaafb78b8d       kube-controller-manager-addons-344587
	ca9198782e10b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        16 minutes ago      Running             kube-apiserver            0                   948b38ffb05be       kube-apiserver-addons-344587
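
The CREATED column is derived from the CreatedAt fields in the responses above, which are unix timestamps in nanoseconds. A quick sanity check in Go, using values copied from the hello-world-app container entry and the ImageFsInfo response (both appear earlier in this log):

package main

import (
	"fmt"
	"time"
)

func main() {
	created := time.Unix(0, 1724958626537636688) // hello-world-app CreatedAt
	now := time.Unix(0, 1724958760205907950)     // ImageFsInfo Timestamp (19:12:40)
	fmt.Println(created.UTC())                   // 2024-08-29 19:10:26 UTC
	fmt.Println(now.Sub(created).Round(time.Minute)) // 2m0s, i.e. "2 minutes ago"
}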
	
	
	==> coredns [edffa46b483651778d1c07ec98265e047b98fe1378f03040aea8f5bfcae417c1] <==
	[INFO] 10.244.0.7:39145 - 42946 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00005377s
	[INFO] 10.244.0.22:42016 - 45991 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000368592s
	[INFO] 10.244.0.22:34827 - 63066 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000120289s
	[INFO] 10.244.0.22:43077 - 9805 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000099235s
	[INFO] 10.244.0.22:42369 - 39774 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079385s
	[INFO] 10.244.0.22:60024 - 29907 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114811s
	[INFO] 10.244.0.22:45308 - 1618 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000060509s
	[INFO] 10.244.0.22:58816 - 5970 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001084367s
	[INFO] 10.244.0.22:42307 - 58779 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.0009186s
	[INFO] 10.244.0.7:44744 - 64553 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000299877s
	[INFO] 10.244.0.7:44744 - 5421 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000076334s
	[INFO] 10.244.0.7:46191 - 55261 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114362s
	[INFO] 10.244.0.7:46191 - 4319 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000078476s
	[INFO] 10.244.0.7:37623 - 4000 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088295s
	[INFO] 10.244.0.7:37623 - 54189 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000065651s
	[INFO] 10.244.0.7:37785 - 7471 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114015s
	[INFO] 10.244.0.7:37785 - 24365 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110504s
	[INFO] 10.244.0.7:60734 - 39177 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000153016s
	[INFO] 10.244.0.7:60734 - 36925 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00004885s
	[INFO] 10.244.0.7:56476 - 49913 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091243s
	[INFO] 10.244.0.7:56476 - 39675 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038412s
	[INFO] 10.244.0.7:52181 - 34800 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004683s
	[INFO] 10.244.0.7:52181 - 48114 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035557s
	[INFO] 10.244.0.7:44052 - 60911 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000130226s
	[INFO] 10.244.0.7:44052 - 20460 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000063327s
	
	
	==> describe nodes <==
	Name:               addons-344587
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-344587
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=addons-344587
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_56_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-344587
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:56:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-344587
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:12:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:10:33 +0000   Thu, 29 Aug 2024 18:56:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:10:33 +0000   Thu, 29 Aug 2024 18:56:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:10:33 +0000   Thu, 29 Aug 2024 18:56:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:10:33 +0000   Thu, 29 Aug 2024 18:56:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    addons-344587
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 260355e6785f4e7bb1e92498cafe0432
	  System UUID:                260355e6-785f-4e7b-b1e9-2498cafe0432
	  Boot ID:                    63059b99-f440-429e-a6ac-c800d57acda3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-4lddf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  gcp-auth                    gcp-auth-89d5ffd79-m8795                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-6f6b679f8f-t9nhw                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     16m
	  kube-system                 etcd-addons-344587                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-344587             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-344587    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-lgcxw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-344587             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node addons-344587 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node addons-344587 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node addons-344587 status is now: NodeHasSufficientPID
	  Normal  NodeReady                16m   kubelet          Node addons-344587 status is now: NodeReady
	  Normal  RegisteredNode           16m   node-controller  Node addons-344587 event: Registered Node addons-344587 in Controller
	
	
	==> dmesg <==
	[  +5.090259] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.165082] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.848747] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.168731] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.203781] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.023105] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.195417] kauditd_printk_skb: 3 callbacks suppressed
	[Aug29 18:58] kauditd_printk_skb: 49 callbacks suppressed
	[ +39.477849] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 18:59] kauditd_printk_skb: 2 callbacks suppressed
	[Aug29 19:00] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 19:03] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 19:06] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 19:07] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.472194] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.550952] kauditd_printk_skb: 23 callbacks suppressed
	[  +7.540533] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.552049] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.997754] kauditd_printk_skb: 58 callbacks suppressed
	[  +6.004601] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.537055] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.328650] kauditd_printk_skb: 11 callbacks suppressed
	[Aug29 19:08] kauditd_printk_skb: 46 callbacks suppressed
	[Aug29 19:10] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.194292] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [3a9bf9036a4561bc99725754053def0c9573b84db53002fe9840e3e333f14459] <==
	{"level":"info","ts":"2024-08-29T18:57:51.987950Z","caller":"traceutil/trace.go:171","msg":"trace[877493432] linearizableReadLoop","detail":"{readStateIndex:1161; appliedIndex:1160; }","duration":"331.031224ms","start":"2024-08-29T18:57:51.656906Z","end":"2024-08-29T18:57:51.987937Z","steps":["trace[877493432] 'read index received'  (duration: 330.742237ms)","trace[877493432] 'applied index is now lower than readState.Index'  (duration: 288.514µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:57:51.988189Z","caller":"traceutil/trace.go:171","msg":"trace[174649903] transaction","detail":"{read_only:false; response_revision:1128; number_of_response:1; }","duration":"450.012484ms","start":"2024-08-29T18:57:51.538165Z","end":"2024-08-29T18:57:51.988178Z","steps":["trace[174649903] 'process raft request'  (duration: 449.679178ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:57:51.988293Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:57:51.538147Z","time spent":"450.082879ms","remote":"127.0.0.1:36120","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1125 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-29T18:57:51.988443Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.528183ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:57:51.988481Z","caller":"traceutil/trace.go:171","msg":"trace[1935103251] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1128; }","duration":"331.569979ms","start":"2024-08-29T18:57:51.656903Z","end":"2024-08-29T18:57:51.988473Z","steps":["trace[1935103251] 'agreement among raft nodes before linearized reading'  (duration: 331.509051ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:57:51.988540Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:57:51.656873Z","time spent":"331.661655ms","remote":"127.0.0.1:36130","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-29T18:57:51.988641Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"318.4247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:57:51.988771Z","caller":"traceutil/trace.go:171","msg":"trace[676173896] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1128; }","duration":"318.553242ms","start":"2024-08-29T18:57:51.670211Z","end":"2024-08-29T18:57:51.988764Z","steps":["trace[676173896] 'agreement among raft nodes before linearized reading'  (duration: 318.41108ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:57:51.988815Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:57:51.670179Z","time spent":"318.622729ms","remote":"127.0.0.1:36130","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-29T18:57:51.988972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.900574ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-8988944d9-9tplt\" ","response":"range_response_count:1 size:4561"}
	{"level":"info","ts":"2024-08-29T18:57:51.989006Z","caller":"traceutil/trace.go:171","msg":"trace[1013009811] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-8988944d9-9tplt; range_end:; response_count:1; response_revision:1128; }","duration":"129.932706ms","start":"2024-08-29T18:57:51.859067Z","end":"2024-08-29T18:57:51.989000Z","steps":["trace[1013009811] 'agreement among raft nodes before linearized reading'  (duration: 129.851129ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:58:02.393148Z","caller":"traceutil/trace.go:171","msg":"trace[728866711] linearizableReadLoop","detail":"{readStateIndex:1216; appliedIndex:1215; }","duration":"245.819709ms","start":"2024-08-29T18:58:02.147307Z","end":"2024-08-29T18:58:02.393126Z","steps":["trace[728866711] 'read index received'  (duration: 245.635911ms)","trace[728866711] 'applied index is now lower than readState.Index'  (duration: 183.347µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:58:02.393518Z","caller":"traceutil/trace.go:171","msg":"trace[873289508] transaction","detail":"{read_only:false; response_revision:1181; number_of_response:1; }","duration":"341.313574ms","start":"2024-08-29T18:58:02.052193Z","end":"2024-08-29T18:58:02.393507Z","steps":["trace[873289508] 'process raft request'  (duration: 340.79613ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:58:02.393725Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:58:02.052166Z","time spent":"341.432789ms","remote":"127.0.0.1:36120","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1176 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-29T18:58:02.393897Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.589501ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:58:02.394190Z","caller":"traceutil/trace.go:171","msg":"trace[902699692] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1181; }","duration":"246.879248ms","start":"2024-08-29T18:58:02.147300Z","end":"2024-08-29T18:58:02.394179Z","steps":["trace[902699692] 'agreement among raft nodes before linearized reading'  (duration: 246.576816ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:58:02.394293Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.664177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:58:02.395198Z","caller":"traceutil/trace.go:171","msg":"trace[135783897] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1181; }","duration":"151.567211ms","start":"2024-08-29T18:58:02.243617Z","end":"2024-08-29T18:58:02.395184Z","steps":["trace[135783897] 'agreement among raft nodes before linearized reading'  (duration: 150.639245ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T19:06:24.607992Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1539}
	{"level":"info","ts":"2024-08-29T19:06:24.642194Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1539,"took":"33.69634ms","hash":279284985,"current-db-size-bytes":6299648,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3297280,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-08-29T19:06:24.642264Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":279284985,"revision":1539,"compact-revision":-1}
	{"level":"info","ts":"2024-08-29T19:08:39.629048Z","caller":"traceutil/trace.go:171","msg":"trace[518339055] transaction","detail":"{read_only:false; response_revision:2617; number_of_response:1; }","duration":"111.246564ms","start":"2024-08-29T19:08:39.517761Z","end":"2024-08-29T19:08:39.629008Z","steps":["trace[518339055] 'process raft request'  (duration: 111.127875ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T19:11:24.615984Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1958}
	{"level":"info","ts":"2024-08-29T19:11:24.635910Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1958,"took":"19.295351ms","hash":3900413476,"current-db-size-bytes":6299648,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":4947968,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-08-29T19:11:24.636027Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3900413476,"revision":1958,"compact-revision":1539}
	
	
	==> gcp-auth [e9c8e7bcbfebef995192df5d64f603c9d08c91b1a799ccb037398d00858d33ba] <==
	2024/08/29 18:58:45 Ready to write response ...
	2024/08/29 19:06:55 Ready to marshal response ...
	2024/08/29 19:06:55 Ready to write response ...
	2024/08/29 19:06:59 Ready to marshal response ...
	2024/08/29 19:06:59 Ready to write response ...
	2024/08/29 19:07:05 Ready to marshal response ...
	2024/08/29 19:07:05 Ready to write response ...
	2024/08/29 19:07:23 Ready to marshal response ...
	2024/08/29 19:07:23 Ready to write response ...
	2024/08/29 19:07:27 Ready to marshal response ...
	2024/08/29 19:07:27 Ready to write response ...
	2024/08/29 19:07:27 Ready to marshal response ...
	2024/08/29 19:07:27 Ready to write response ...
	2024/08/29 19:07:39 Ready to marshal response ...
	2024/08/29 19:07:39 Ready to write response ...
	2024/08/29 19:07:45 Ready to marshal response ...
	2024/08/29 19:07:45 Ready to write response ...
	2024/08/29 19:07:45 Ready to marshal response ...
	2024/08/29 19:07:45 Ready to write response ...
	2024/08/29 19:07:45 Ready to marshal response ...
	2024/08/29 19:07:45 Ready to write response ...
	2024/08/29 19:08:03 Ready to marshal response ...
	2024/08/29 19:08:03 Ready to write response ...
	2024/08/29 19:10:24 Ready to marshal response ...
	2024/08/29 19:10:24 Ready to write response ...
	
	
	==> kernel <==
	 19:12:40 up 16 min,  0 users,  load average: 0.17, 0.49, 0.48
	Linux addons-344587 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ca9198782e10bd0b9d12b3575b8e6edde07afb66eec33585d3939939ecda8d24] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0829 18:58:14.652298       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0829 18:58:14.653137       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0829 19:06:55.073349       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0829 19:06:56.105761       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0829 19:07:09.910156       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0829 19:07:38.524327       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:07:38.524397       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:07:38.583728       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:07:38.583790       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:07:38.603042       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:07:38.603098       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:07:38.701294       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:07:38.701595       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:07:38.708794       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:07:38.708838       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0829 19:07:39.703951       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0829 19:07:39.709278       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0829 19:07:39.726588       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0829 19:07:45.314768       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.44.227"}
	E0829 19:07:55.404266       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0829 19:08:03.038521       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0829 19:08:03.213568       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.215.4"}
	I0829 19:10:24.590158       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.51.103"}
	
	
	==> kube-controller-manager [79990e8cc7f54ad4a97287d4ccbb394e12a050846e1fa91d0d63e9fbf7ee2c24] <==
	W0829 19:10:50.849002       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:10:50.849073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:10:51.032284       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:10:51.032405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:11:13.597594       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:11:13.597740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:11:20.322807       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:11:20.322866       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:11:23.239177       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:11:23.239420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:11:28.006167       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:11:28.006224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:11:54.345932       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:11:54.346005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:11:57.430763       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:11:57.430829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:12:15.297840       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:12:15.297887       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:12:16.960571       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:12:16.960622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:12:27.825850       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:12:27.826007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:12:33.347978       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:12:33.348102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 19:12:39.229421       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="11.34µs"
	
	
	==> kube-proxy [e6b94afd2073c102305df98109bc1be8ec5d22093fb62f2a1641025e9cf82565] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 18:56:34.273624       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 18:56:34.316749       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.172"]
	E0829 18:56:34.319592       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:56:34.449783       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:56:34.449822       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:56:34.449854       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:56:34.453213       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:56:34.453462       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:56:34.453493       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:56:34.455243       1 config.go:197] "Starting service config controller"
	I0829 18:56:34.455281       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:56:34.455307       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:56:34.455311       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:56:34.455326       1 config.go:326] "Starting node config controller"
	I0829 18:56:34.455330       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:56:34.555742       1 shared_informer.go:320] Caches are synced for node config
	I0829 18:56:34.555769       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:56:34.555789       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [46ea401f11d339127c80cb474a25433836d69db950e5b3ef45c15286db5b19ef] <==
	W0829 18:56:25.839581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:56:25.839612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:25.839779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:56:25.839809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:25.839854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:25.839956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:25.839906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:25.840087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:25.844097       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 18:56:25.844135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:26.723397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 18:56:26.723437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:26.726980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:26.727065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:26.764739       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:26.764937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:26.775798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:26.775919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:26.923490       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:56:26.923522       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0829 18:56:26.981178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:56:26.981308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:27.115445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:27.115543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0829 18:56:29.434203       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 19:11:58 addons-344587 kubelet[1210]: E0829 19:11:58.822521    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958718821940057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:11:58 addons-344587 kubelet[1210]: E0829 19:11:58.822632    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958718821940057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:12:08 addons-344587 kubelet[1210]: E0829 19:12:08.825383    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958728825001334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:12:08 addons-344587 kubelet[1210]: E0829 19:12:08.825665    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958728825001334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:12:11 addons-344587 kubelet[1210]: E0829 19:12:11.221107    1210 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="ea5b7a84-ebbb-47e9-92c8-ee98926439ae"
	Aug 29 19:12:18 addons-344587 kubelet[1210]: E0829 19:12:18.828369    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958738827940474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:12:18 addons-344587 kubelet[1210]: E0829 19:12:18.828642    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958738827940474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:12:22 addons-344587 kubelet[1210]: E0829 19:12:22.221315    1210 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="ea5b7a84-ebbb-47e9-92c8-ee98926439ae"
	Aug 29 19:12:28 addons-344587 kubelet[1210]: E0829 19:12:28.254156    1210 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:12:28 addons-344587 kubelet[1210]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:12:28 addons-344587 kubelet[1210]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:12:28 addons-344587 kubelet[1210]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:12:28 addons-344587 kubelet[1210]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:12:28 addons-344587 kubelet[1210]: E0829 19:12:28.831118    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958748830804066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:12:28 addons-344587 kubelet[1210]: E0829 19:12:28.831158    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958748830804066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:12:34 addons-344587 kubelet[1210]: E0829 19:12:34.221351    1210 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="ea5b7a84-ebbb-47e9-92c8-ee98926439ae"
	Aug 29 19:12:38 addons-344587 kubelet[1210]: E0829 19:12:38.834559    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958758834265372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:12:38 addons-344587 kubelet[1210]: E0829 19:12:38.834601    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958758834265372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:12:39 addons-344587 kubelet[1210]: I0829 19:12:39.258263    1210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-4lddf" podStartSLOduration=133.724559949 podStartE2EDuration="2m15.258227331s" podCreationTimestamp="2024-08-29 19:10:24 +0000 UTC" firstStartedPulling="2024-08-29 19:10:24.985163105 +0000 UTC m=+836.944934961" lastFinishedPulling="2024-08-29 19:10:26.518830487 +0000 UTC m=+838.478602343" observedRunningTime="2024-08-29 19:10:26.950004785 +0000 UTC m=+838.909776666" watchObservedRunningTime="2024-08-29 19:12:39.258227331 +0000 UTC m=+971.217999204"
	Aug 29 19:12:40 addons-344587 kubelet[1210]: I0829 19:12:40.638496    1210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bzlm\" (UniqueName: \"kubernetes.io/projected/427d61c8-9ff3-4718-9faf-896d20af6cdc-kube-api-access-8bzlm\") pod \"427d61c8-9ff3-4718-9faf-896d20af6cdc\" (UID: \"427d61c8-9ff3-4718-9faf-896d20af6cdc\") "
	Aug 29 19:12:40 addons-344587 kubelet[1210]: I0829 19:12:40.638555    1210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/427d61c8-9ff3-4718-9faf-896d20af6cdc-tmp-dir\") pod \"427d61c8-9ff3-4718-9faf-896d20af6cdc\" (UID: \"427d61c8-9ff3-4718-9faf-896d20af6cdc\") "
	Aug 29 19:12:40 addons-344587 kubelet[1210]: I0829 19:12:40.639185    1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/427d61c8-9ff3-4718-9faf-896d20af6cdc-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "427d61c8-9ff3-4718-9faf-896d20af6cdc" (UID: "427d61c8-9ff3-4718-9faf-896d20af6cdc"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 29 19:12:40 addons-344587 kubelet[1210]: I0829 19:12:40.645995    1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/427d61c8-9ff3-4718-9faf-896d20af6cdc-kube-api-access-8bzlm" (OuterVolumeSpecName: "kube-api-access-8bzlm") pod "427d61c8-9ff3-4718-9faf-896d20af6cdc" (UID: "427d61c8-9ff3-4718-9faf-896d20af6cdc"). InnerVolumeSpecName "kube-api-access-8bzlm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 19:12:40 addons-344587 kubelet[1210]: I0829 19:12:40.739530    1210 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8bzlm\" (UniqueName: \"kubernetes.io/projected/427d61c8-9ff3-4718-9faf-896d20af6cdc-kube-api-access-8bzlm\") on node \"addons-344587\" DevicePath \"\""
	Aug 29 19:12:40 addons-344587 kubelet[1210]: I0829 19:12:40.739643    1210 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/427d61c8-9ff3-4718-9faf-896d20af6cdc-tmp-dir\") on node \"addons-344587\" DevicePath \"\""
	
	
	==> storage-provisioner [15a0a245e481d700614dae7bfc6d7d4bbe9f074615c88e1373f761abb63e08cc] <==
	I0829 18:56:42.218258       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:56:42.884144       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:56:42.967214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:56:43.042082       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:56:43.043164       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aa6f651c-dee9-4c5c-bb08-efe5aaec9d98", APIVersion:"v1", ResourceVersion:"717", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-344587_b654bfc4-1e7a-4b37-abe2-9c326f1dacc1 became leader
	I0829 18:56:43.043203       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-344587_b654bfc4-1e7a-4b37-abe2-9c326f1dacc1!
	I0829 18:56:43.212968       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-344587_b654bfc4-1e7a-4b37-abe2-9c326f1dacc1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-344587 -n addons-344587
helpers_test.go:261: (dbg) Run:  kubectl --context addons-344587 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox metrics-server-8988944d9-9tplt
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-344587 describe pod busybox metrics-server-8988944d9-9tplt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-344587 describe pod busybox metrics-server-8988944d9-9tplt: exit status 1 (68.022093ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-344587/192.168.39.172
	Start Time:       Thu, 29 Aug 2024 18:58:45 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bb56t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bb56t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-344587
	  Normal   Pulling    12m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m44s (x43 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8988944d9-9tplt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-344587 describe pod busybox metrics-server-8988944d9-9tplt: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (352.98s)
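The kube-controller-manager errors above ("failed to list *v1.PartialObjectMetadata: the server could not find the requested resource") together with the NotFound metrics-server pod suggest the v1beta1.metrics.k8s.io APIService was left registered without a backing deployment. A minimal check against a live profile might look like the following sketch (the k8s-app=metrics-server label is assumed from the upstream metrics-server manifests; the deployment name matches the metrics-server-8988944d9 ReplicaSet seen in the controller-manager log):

	kubectl --context addons-344587 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-344587 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-344587 -n kube-system logs deploy/metrics-server --tail=50

An APIService reporting Available=False (reason FailedDiscoveryCheck) would confirm the aggregated metrics API is unreachable, consistent with the list/watch failures logged by the controller-manager.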

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 node stop m02 -v=7 --alsologtostderr
E0829 19:22:18.920305   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:22:59.881886   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:23:45.974948   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-505269 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.462004687s)

                                                
                                                
-- stdout --
	* Stopping node "ha-505269-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 19:22:09.868383   34381 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:22:09.868542   34381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:22:09.868555   34381 out.go:358] Setting ErrFile to fd 2...
	I0829 19:22:09.868560   34381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:22:09.868732   34381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:22:09.868977   34381 mustload.go:65] Loading cluster: ha-505269
	I0829 19:22:09.869315   34381 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:22:09.869329   34381 stop.go:39] StopHost: ha-505269-m02
	I0829 19:22:09.869757   34381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:22:09.869810   34381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:22:09.885288   34381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35075
	I0829 19:22:09.885744   34381 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:22:09.886354   34381 main.go:141] libmachine: Using API Version  1
	I0829 19:22:09.886380   34381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:22:09.886744   34381 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:22:09.888977   34381 out.go:177] * Stopping node "ha-505269-m02"  ...
	I0829 19:22:09.890170   34381 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0829 19:22:09.890198   34381 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:22:09.890417   34381 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0829 19:22:09.890459   34381 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:22:09.893504   34381 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:22:09.893918   34381 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:22:09.893947   34381 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:22:09.894122   34381 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:22:09.894275   34381 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:22:09.894447   34381 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:22:09.894643   34381 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	I0829 19:22:09.978057   34381 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0829 19:22:10.031803   34381 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0829 19:22:10.093544   34381 main.go:141] libmachine: Stopping "ha-505269-m02"...
	I0829 19:22:10.093579   34381 main.go:141] libmachine: (ha-505269-m02) Calling .GetState
	I0829 19:22:10.095055   34381 main.go:141] libmachine: (ha-505269-m02) Calling .Stop
	I0829 19:22:10.098576   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 0/120
	I0829 19:22:11.099908   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 1/120
	I0829 19:22:12.101267   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 2/120
	I0829 19:22:13.102527   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 3/120
	I0829 19:22:14.103715   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 4/120
	I0829 19:22:15.105567   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 5/120
	I0829 19:22:16.107267   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 6/120
	I0829 19:22:17.109059   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 7/120
	I0829 19:22:18.110266   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 8/120
	I0829 19:22:19.112338   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 9/120
	I0829 19:22:20.114609   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 10/120
	I0829 19:22:21.116125   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 11/120
	I0829 19:22:22.117459   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 12/120
	I0829 19:22:23.118992   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 13/120
	I0829 19:22:24.120982   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 14/120
	I0829 19:22:25.123044   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 15/120
	I0829 19:22:26.124990   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 16/120
	I0829 19:22:27.127021   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 17/120
	I0829 19:22:28.128949   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 18/120
	I0829 19:22:29.130193   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 19/120
	I0829 19:22:30.132311   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 20/120
	I0829 19:22:31.133795   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 21/120
	I0829 19:22:32.134995   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 22/120
	I0829 19:22:33.136409   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 23/120
	I0829 19:22:34.138100   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 24/120
	I0829 19:22:35.139975   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 25/120
	I0829 19:22:36.141258   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 26/120
	I0829 19:22:37.142463   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 27/120
	I0829 19:22:38.143760   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 28/120
	I0829 19:22:39.145146   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 29/120
	I0829 19:22:40.147035   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 30/120
	I0829 19:22:41.149065   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 31/120
	I0829 19:22:42.150318   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 32/120
	I0829 19:22:43.151660   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 33/120
	I0829 19:22:44.152875   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 34/120
	I0829 19:22:45.155041   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 35/120
	I0829 19:22:46.156560   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 36/120
	I0829 19:22:47.157718   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 37/120
	I0829 19:22:48.159514   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 38/120
	I0829 19:22:49.160625   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 39/120
	I0829 19:22:50.162694   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 40/120
	I0829 19:22:51.165154   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 41/120
	I0829 19:22:52.166255   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 42/120
	I0829 19:22:53.167539   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 43/120
	I0829 19:22:54.168756   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 44/120
	I0829 19:22:55.170685   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 45/120
	I0829 19:22:56.171882   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 46/120
	I0829 19:22:57.173245   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 47/120
	I0829 19:22:58.174504   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 48/120
	I0829 19:22:59.175808   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 49/120
	I0829 19:23:00.177788   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 50/120
	I0829 19:23:01.179494   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 51/120
	I0829 19:23:02.181060   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 52/120
	I0829 19:23:03.182494   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 53/120
	I0829 19:23:04.183696   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 54/120
	I0829 19:23:05.185511   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 55/120
	I0829 19:23:06.186911   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 56/120
	I0829 19:23:07.188928   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 57/120
	I0829 19:23:08.190077   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 58/120
	I0829 19:23:09.191269   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 59/120
	I0829 19:23:10.193743   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 60/120
	I0829 19:23:11.195214   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 61/120
	I0829 19:23:12.196614   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 62/120
	I0829 19:23:13.198367   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 63/120
	I0829 19:23:14.199625   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 64/120
	I0829 19:23:15.201606   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 65/120
	I0829 19:23:16.202946   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 66/120
	I0829 19:23:17.204926   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 67/120
	I0829 19:23:18.206149   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 68/120
	I0829 19:23:19.207444   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 69/120
	I0829 19:23:20.209326   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 70/120
	I0829 19:23:21.211070   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 71/120
	I0829 19:23:22.212877   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 72/120
	I0829 19:23:23.214402   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 73/120
	I0829 19:23:24.215629   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 74/120
	I0829 19:23:25.217653   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 75/120
	I0829 19:23:26.218880   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 76/120
	I0829 19:23:27.220033   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 77/120
	I0829 19:23:28.221448   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 78/120
	I0829 19:23:29.222581   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 79/120
	I0829 19:23:30.224870   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 80/120
	I0829 19:23:31.226115   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 81/120
	I0829 19:23:32.227470   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 82/120
	I0829 19:23:33.229458   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 83/120
	I0829 19:23:34.230841   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 84/120
	I0829 19:23:35.232599   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 85/120
	I0829 19:23:36.233848   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 86/120
	I0829 19:23:37.235093   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 87/120
	I0829 19:23:38.236824   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 88/120
	I0829 19:23:39.237959   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 89/120
	I0829 19:23:40.240040   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 90/120
	I0829 19:23:41.241279   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 91/120
	I0829 19:23:42.242721   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 92/120
	I0829 19:23:43.243902   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 93/120
	I0829 19:23:44.245348   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 94/120
	I0829 19:23:45.246717   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 95/120
	I0829 19:23:46.248031   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 96/120
	I0829 19:23:47.249430   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 97/120
	I0829 19:23:48.250952   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 98/120
	I0829 19:23:49.253012   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 99/120
	I0829 19:23:50.254823   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 100/120
	I0829 19:23:51.256906   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 101/120
	I0829 19:23:52.258138   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 102/120
	I0829 19:23:53.259492   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 103/120
	I0829 19:23:54.260938   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 104/120
	I0829 19:23:55.262714   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 105/120
	I0829 19:23:56.264926   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 106/120
	I0829 19:23:57.266975   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 107/120
	I0829 19:23:58.268364   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 108/120
	I0829 19:23:59.270420   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 109/120
	I0829 19:24:00.272319   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 110/120
	I0829 19:24:01.273592   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 111/120
	I0829 19:24:02.275730   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 112/120
	I0829 19:24:03.277299   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 113/120
	I0829 19:24:04.278822   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 114/120
	I0829 19:24:05.280500   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 115/120
	I0829 19:24:06.282173   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 116/120
	I0829 19:24:07.283345   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 117/120
	I0829 19:24:08.284623   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 118/120
	I0829 19:24:09.286057   34381 main.go:141] libmachine: (ha-505269-m02) Waiting for machine to stop 119/120
	I0829 19:24:10.287318   34381 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0829 19:24:10.287463   34381 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-505269 node stop m02 -v=7 --alsologtostderr": exit status 30
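
The stderr trace above shows the shape of the failure: one Stop call to the kvm2 driver, then a state poll once per second for 120 attempts (hence the 2m0.46s runtime) before giving up with exit status 30 because the domain still reports "Running". A minimal sketch of that bounded wait, with stopVM and vmState as hypothetical stand-ins for the driver's libvirt calls:

	// Sketch of the bounded stop-wait in the trace: one stop request, then
	// up to 120 one-second state polls. stopVM and vmState are hypothetical
	// stand-ins for the kvm2 driver's libvirt calls.
	package main

	import (
		"fmt"
		"time"
	)

	func waitForStop(stopVM func() error, vmState func() string) error {
		if err := stopVM(); err != nil {
			return err
		}
		for i := 0; i < 120; i++ {
			if vmState() != "Running" {
				return nil // the guest honored the stop request
			}
			time.Sleep(time.Second)
		}
		// Mirrors the trace: stop err: unable to stop vm, current state "Running"
		return fmt.Errorf("unable to stop vm, current state %q", vmState())
	}

	func main() {
		state := "Running"
		err := waitForStop(
			func() error { state = "Stopped"; return nil }, // a guest that stops promptly
			func() string { return state },
		)
		fmt.Println("stop result:", err)
	}

A guest that never leaves "Running", as m02 apparently did here, exhausts all 120 polls and surfaces the Temporary Error shown above.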
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr
E0829 19:24:21.804236   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr: exit status 3 (19.212822469s)

-- stdout --
	ha-505269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-505269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0829 19:24:10.330073   34801 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:24:10.330323   34801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:24:10.330339   34801 out.go:358] Setting ErrFile to fd 2...
	I0829 19:24:10.330346   34801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:24:10.330531   34801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:24:10.330712   34801 out.go:352] Setting JSON to false
	I0829 19:24:10.330736   34801 mustload.go:65] Loading cluster: ha-505269
	I0829 19:24:10.330795   34801 notify.go:220] Checking for updates...
	I0829 19:24:10.331240   34801 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:24:10.331261   34801 status.go:255] checking status of ha-505269 ...
	I0829 19:24:10.331683   34801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:10.331749   34801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:10.349625   34801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I0829 19:24:10.350080   34801 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:10.350650   34801 main.go:141] libmachine: Using API Version  1
	I0829 19:24:10.350674   34801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:10.351050   34801 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:10.351278   34801 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:24:10.352834   34801 status.go:330] ha-505269 host status = "Running" (err=<nil>)
	I0829 19:24:10.352851   34801 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:24:10.353298   34801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:10.353340   34801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:10.367877   34801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I0829 19:24:10.368296   34801 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:10.368793   34801 main.go:141] libmachine: Using API Version  1
	I0829 19:24:10.368829   34801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:10.369190   34801 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:10.369397   34801 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:24:10.372331   34801 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:10.372711   34801 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:24:10.372738   34801 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:10.372897   34801 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:24:10.373186   34801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:10.373223   34801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:10.387422   34801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34871
	I0829 19:24:10.387820   34801 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:10.388274   34801 main.go:141] libmachine: Using API Version  1
	I0829 19:24:10.388294   34801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:10.388588   34801 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:10.388780   34801 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:24:10.388982   34801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:10.389002   34801 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:24:10.391422   34801 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:10.391894   34801 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:24:10.391914   34801 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:10.392051   34801 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:24:10.392236   34801 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:24:10.392392   34801 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:24:10.392562   34801 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:24:10.488281   34801 ssh_runner.go:195] Run: systemctl --version
	I0829 19:24:10.495277   34801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:24:10.514807   34801 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:24:10.514839   34801 api_server.go:166] Checking apiserver status ...
	I0829 19:24:10.514870   34801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:24:10.532360   34801 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup
	W0829 19:24:10.542503   34801 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:24:10.542571   34801 ssh_runner.go:195] Run: ls
	I0829 19:24:10.547202   34801 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:24:10.553347   34801 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:24:10.553368   34801 status.go:422] ha-505269 apiserver status = Running (err=<nil>)
	I0829 19:24:10.553376   34801 status.go:257] ha-505269 status: &{Name:ha-505269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:24:10.553391   34801 status.go:255] checking status of ha-505269-m02 ...
	I0829 19:24:10.553727   34801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:10.553765   34801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:10.568169   34801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41395
	I0829 19:24:10.568583   34801 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:10.569001   34801 main.go:141] libmachine: Using API Version  1
	I0829 19:24:10.569021   34801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:10.569391   34801 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:10.569549   34801 main.go:141] libmachine: (ha-505269-m02) Calling .GetState
	I0829 19:24:10.571117   34801 status.go:330] ha-505269-m02 host status = "Running" (err=<nil>)
	I0829 19:24:10.571134   34801 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:24:10.571410   34801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:10.571446   34801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:10.587411   34801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37735
	I0829 19:24:10.587844   34801 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:10.588309   34801 main.go:141] libmachine: Using API Version  1
	I0829 19:24:10.588332   34801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:10.588703   34801 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:10.588898   34801 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:24:10.591453   34801 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:10.591878   34801 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:24:10.591904   34801 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:10.592070   34801 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:24:10.592373   34801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:10.592416   34801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:10.606741   34801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0829 19:24:10.607100   34801 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:10.607517   34801 main.go:141] libmachine: Using API Version  1
	I0829 19:24:10.607543   34801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:10.607827   34801 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:10.608011   34801 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:24:10.608152   34801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:10.608171   34801 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:24:10.610788   34801 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:10.611205   34801 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:24:10.611234   34801 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:10.611344   34801 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:24:10.611506   34801 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:24:10.611641   34801 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:24:10.611773   34801 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	W0829 19:24:29.154730   34801 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0829 19:24:29.154882   34801 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0829 19:24:29.154917   34801 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:24:29.154930   34801 status.go:257] ha-505269-m02 status: &{Name:ha-505269-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 19:24:29.154952   34801 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:24:29.154965   34801 status.go:255] checking status of ha-505269-m03 ...
	I0829 19:24:29.155278   34801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:29.155346   34801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:29.169711   34801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45895
	I0829 19:24:29.170168   34801 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:29.170644   34801 main.go:141] libmachine: Using API Version  1
	I0829 19:24:29.170668   34801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:29.170948   34801 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:29.171090   34801 main.go:141] libmachine: (ha-505269-m03) Calling .GetState
	I0829 19:24:29.172645   34801 status.go:330] ha-505269-m03 host status = "Running" (err=<nil>)
	I0829 19:24:29.172660   34801 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:24:29.173052   34801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:29.173099   34801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:29.187495   34801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I0829 19:24:29.187862   34801 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:29.188298   34801 main.go:141] libmachine: Using API Version  1
	I0829 19:24:29.188320   34801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:29.188606   34801 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:29.188820   34801 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:24:29.191777   34801 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:29.192197   34801 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:24:29.192225   34801 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:29.192385   34801 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:24:29.192683   34801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:29.192720   34801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:29.207291   34801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33755
	I0829 19:24:29.207656   34801 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:29.208106   34801 main.go:141] libmachine: Using API Version  1
	I0829 19:24:29.208125   34801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:29.208417   34801 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:29.208591   34801 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:24:29.208764   34801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:29.208784   34801 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:24:29.211364   34801 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:29.211792   34801 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:24:29.211818   34801 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:29.211957   34801 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:24:29.212098   34801 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:24:29.212241   34801 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:24:29.212390   34801 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:24:29.291593   34801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:24:29.308454   34801 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:24:29.308479   34801 api_server.go:166] Checking apiserver status ...
	I0829 19:24:29.308517   34801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:24:29.322199   34801 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup
	W0829 19:24:29.331944   34801 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:24:29.331998   34801 ssh_runner.go:195] Run: ls
	I0829 19:24:29.336510   34801 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:24:29.340828   34801 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:24:29.340846   34801 status.go:422] ha-505269-m03 apiserver status = Running (err=<nil>)
	I0829 19:24:29.340854   34801 status.go:257] ha-505269-m03 status: &{Name:ha-505269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:24:29.340868   34801 status.go:255] checking status of ha-505269-m04 ...
	I0829 19:24:29.341132   34801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:29.341194   34801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:29.355801   34801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43109
	I0829 19:24:29.356222   34801 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:29.356630   34801 main.go:141] libmachine: Using API Version  1
	I0829 19:24:29.356648   34801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:29.356951   34801 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:29.357126   34801 main.go:141] libmachine: (ha-505269-m04) Calling .GetState
	I0829 19:24:29.358618   34801 status.go:330] ha-505269-m04 host status = "Running" (err=<nil>)
	I0829 19:24:29.358635   34801 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:24:29.358931   34801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:29.358962   34801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:29.373669   34801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44891
	I0829 19:24:29.373989   34801 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:29.374452   34801 main.go:141] libmachine: Using API Version  1
	I0829 19:24:29.374473   34801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:29.374852   34801 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:29.375022   34801 main.go:141] libmachine: (ha-505269-m04) Calling .GetIP
	I0829 19:24:29.377371   34801 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:29.377693   34801 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:24:29.377714   34801 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:29.377859   34801 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:24:29.378147   34801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:29.378181   34801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:29.392232   34801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39697
	I0829 19:24:29.392682   34801 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:29.393057   34801 main.go:141] libmachine: Using API Version  1
	I0829 19:24:29.393074   34801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:29.393309   34801 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:29.393507   34801 main.go:141] libmachine: (ha-505269-m04) Calling .DriverName
	I0829 19:24:29.393776   34801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:29.393796   34801 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHHostname
	I0829 19:24:29.396442   34801 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:29.396918   34801 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:24:29.396948   34801 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:29.397070   34801 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHPort
	I0829 19:24:29.397216   34801 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHKeyPath
	I0829 19:24:29.397336   34801 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHUsername
	I0829 19:24:29.397487   34801 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m04/id_rsa Username:docker}
	I0829 19:24:29.483034   34801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:24:29.499837   34801 status.go:257] ha-505269-m04 status: &{Name:ha-505269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr" : exit status 3
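
The trace makes the 19.2s runtime legible: the failing SSH dial to m02 ("no route to host") consumes most of it, after which that node is reported Host:Error / Kubelet:Nonexistent with no further checks, while reachable control-plane nodes also get an HTTPS probe of the load-balanced apiserver /healthz endpoint. A condensed sketch of the two probes, reusing the addresses from the log; the timeout values and the InsecureSkipVerify shortcut are illustrative assumptions:

	// Condensed sketch of the two probes in the status trace above: a TCP/22
	// reachability check per node, and an HTTPS GET against the apiserver
	// /healthz VIP. Addresses come from the log; the timeouts and the
	// InsecureSkipVerify shortcut are illustrative assumptions.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"net/http"
		"time"
	)

	func hostReachable(ip string) bool {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 15*time.Second)
		if err != nil {
			return false // e.g. "no route to host" while the VM is mid-shutdown
		}
		conn.Close()
		return true
	}

	func apiserverHealthy(vip string) bool {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://" + vip + ":8443/healthz")
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK // the trace shows "returned 200: ok"
	}

	func main() {
		for _, ip := range []string{"192.168.39.56", "192.168.39.68"} {
			fmt.Printf("%s reachable: %v\n", ip, hostReachable(ip))
		}
		fmt.Println("apiserver healthy:", apiserverHealthy("192.168.39.254"))
	}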
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-505269 -n ha-505269
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-505269 logs -n 25: (1.399826111s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-505269 cp ha-505269-m03:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3454359662/001/cp-test_ha-505269-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m03:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269:/home/docker/cp-test_ha-505269-m03_ha-505269.txt                       |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269 sudo cat                                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m03_ha-505269.txt                                 |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m03:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m02:/home/docker/cp-test_ha-505269-m03_ha-505269-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269-m02 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m03_ha-505269-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m03:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04:/home/docker/cp-test_ha-505269-m03_ha-505269-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269-m04 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m03_ha-505269-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-505269 cp testdata/cp-test.txt                                                | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3454359662/001/cp-test_ha-505269-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269:/home/docker/cp-test_ha-505269-m04_ha-505269.txt                       |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269 sudo cat                                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m04_ha-505269.txt                                 |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m02:/home/docker/cp-test_ha-505269-m04_ha-505269-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269-m02 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m04_ha-505269-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m03:/home/docker/cp-test_ha-505269-m04_ha-505269-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269-m03 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m04_ha-505269-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-505269 node stop m02 -v=7                                                     | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:17:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 19:17:27.958759   29935 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:17:27.958993   29935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:17:27.959001   29935 out.go:358] Setting ErrFile to fd 2...
	I0829 19:17:27.959005   29935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:17:27.959153   29935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:17:27.959679   29935 out.go:352] Setting JSON to false
	I0829 19:17:27.960463   29935 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3595,"bootTime":1724955453,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:17:27.960512   29935 start.go:139] virtualization: kvm guest
	I0829 19:17:27.962717   29935 out.go:177] * [ha-505269] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:17:27.964282   29935 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 19:17:27.964303   29935 notify.go:220] Checking for updates...
	I0829 19:17:27.966723   29935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:17:27.967724   29935 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:17:27.968807   29935 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:17:27.969981   29935 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:17:27.971349   29935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:17:27.972628   29935 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:17:28.006071   29935 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 19:17:28.007378   29935 start.go:297] selected driver: kvm2
	I0829 19:17:28.007392   29935 start.go:901] validating driver "kvm2" against <nil>
	I0829 19:17:28.007402   29935 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:17:28.008073   29935 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:17:28.008132   29935 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:17:28.022521   29935 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:17:28.022584   29935 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 19:17:28.022797   29935 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:17:28.022859   29935 cni.go:84] Creating CNI manager for ""
	I0829 19:17:28.022870   29935 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0829 19:17:28.022878   29935 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0829 19:17:28.022930   29935 start.go:340] cluster config:
	{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:17:28.023016   29935 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:17:28.024781   29935 out.go:177] * Starting "ha-505269" primary control-plane node in "ha-505269" cluster
	I0829 19:17:28.025911   29935 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:17:28.025940   29935 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:17:28.025948   29935 cache.go:56] Caching tarball of preloaded images
	I0829 19:17:28.026018   29935 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:17:28.026028   29935 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:17:28.026375   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:17:28.026398   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json: {Name:mk34432641dc2ac43cd81b2532b21cf90f88ce03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:17:28.026522   29935 start.go:360] acquireMachinesLock for ha-505269: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:17:28.026620   29935 start.go:364] duration metric: took 86.119µs to acquireMachinesLock for "ha-505269"
	I0829 19:17:28.026642   29935 start.go:93] Provisioning new machine with config: &{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:17:28.026696   29935 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 19:17:28.028907   29935 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 19:17:28.029025   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:17:28.029059   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:17:28.043204   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33277
	I0829 19:17:28.043561   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:17:28.044089   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:17:28.044108   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:17:28.044416   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:17:28.044586   29935 main.go:141] libmachine: (ha-505269) Calling .GetMachineName
	I0829 19:17:28.044738   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:28.044888   29935 start.go:159] libmachine.API.Create for "ha-505269" (driver="kvm2")
	I0829 19:17:28.044917   29935 client.go:168] LocalClient.Create starting
	I0829 19:17:28.044948   29935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem
	I0829 19:17:28.044976   29935 main.go:141] libmachine: Decoding PEM data...
	I0829 19:17:28.044990   29935 main.go:141] libmachine: Parsing certificate...
	I0829 19:17:28.045045   29935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem
	I0829 19:17:28.045069   29935 main.go:141] libmachine: Decoding PEM data...
	I0829 19:17:28.045080   29935 main.go:141] libmachine: Parsing certificate...
	I0829 19:17:28.045098   29935 main.go:141] libmachine: Running pre-create checks...
	I0829 19:17:28.045106   29935 main.go:141] libmachine: (ha-505269) Calling .PreCreateCheck
	I0829 19:17:28.045448   29935 main.go:141] libmachine: (ha-505269) Calling .GetConfigRaw
	I0829 19:17:28.045804   29935 main.go:141] libmachine: Creating machine...
	I0829 19:17:28.045815   29935 main.go:141] libmachine: (ha-505269) Calling .Create
	I0829 19:17:28.045939   29935 main.go:141] libmachine: (ha-505269) Creating KVM machine...
	I0829 19:17:28.047213   29935 main.go:141] libmachine: (ha-505269) DBG | found existing default KVM network
	I0829 19:17:28.047812   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:28.047704   29958 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0829 19:17:28.047885   29935 main.go:141] libmachine: (ha-505269) DBG | created network xml: 
	I0829 19:17:28.047908   29935 main.go:141] libmachine: (ha-505269) DBG | <network>
	I0829 19:17:28.047921   29935 main.go:141] libmachine: (ha-505269) DBG |   <name>mk-ha-505269</name>
	I0829 19:17:28.047940   29935 main.go:141] libmachine: (ha-505269) DBG |   <dns enable='no'/>
	I0829 19:17:28.047949   29935 main.go:141] libmachine: (ha-505269) DBG |   
	I0829 19:17:28.047971   29935 main.go:141] libmachine: (ha-505269) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0829 19:17:28.047982   29935 main.go:141] libmachine: (ha-505269) DBG |     <dhcp>
	I0829 19:17:28.047993   29935 main.go:141] libmachine: (ha-505269) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0829 19:17:28.048004   29935 main.go:141] libmachine: (ha-505269) DBG |     </dhcp>
	I0829 19:17:28.048016   29935 main.go:141] libmachine: (ha-505269) DBG |   </ip>
	I0829 19:17:28.048030   29935 main.go:141] libmachine: (ha-505269) DBG |   
	I0829 19:17:28.048038   29935 main.go:141] libmachine: (ha-505269) DBG | </network>
	I0829 19:17:28.048043   29935 main.go:141] libmachine: (ha-505269) DBG | 
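
[Reader's note] The XML dumped above is what the kvm2 driver hands to libvirt to create the isolated mk-ha-505269 network: a host bridge with its own dnsmasq instance serving DHCP on 192.168.39.0/24. The driver itself goes through libvirt's API, but a hand-run equivalent can be sketched with virsh. This is a minimal sketch, not minikube's code; it assumes virsh is on PATH and the caller may talk to qemu:///system.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // The same network definition the log dumps above.
    const networkXML = `<network>
      <name>mk-ha-505269</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        f, err := os.CreateTemp("", "mk-net-*.xml")
        if err != nil {
            panic(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(networkXML); err != nil {
            panic(err)
        }
        f.Close()

        for _, args := range [][]string{
            {"net-define", f.Name()},          // register the network with libvirt
            {"net-start", "mk-ha-505269"},     // bring up the bridge and its dnsmasq
            {"net-autostart", "mk-ha-505269"}, // survive libvirtd restarts
        } {
            cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                panic(fmt.Errorf("virsh %v: %w", args, err))
            }
        }
    }
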
	I0829 19:17:28.052763   29935 main.go:141] libmachine: (ha-505269) DBG | trying to create private KVM network mk-ha-505269 192.168.39.0/24...
	I0829 19:17:28.114139   29935 main.go:141] libmachine: (ha-505269) Setting up store path in /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269 ...
	I0829 19:17:28.114162   29935 main.go:141] libmachine: (ha-505269) DBG | private KVM network mk-ha-505269 192.168.39.0/24 created
	I0829 19:17:28.114170   29935 main.go:141] libmachine: (ha-505269) Building disk image from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0829 19:17:28.114185   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:28.114084   29958 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:17:28.114275   29935 main.go:141] libmachine: (ha-505269) Downloading /home/jenkins/minikube-integration/19530-11185/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0829 19:17:28.356256   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:28.356158   29958 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa...
	I0829 19:17:28.649996   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:28.649851   29958 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/ha-505269.rawdisk...
	I0829 19:17:28.650032   29935 main.go:141] libmachine: (ha-505269) DBG | Writing magic tar header
	I0829 19:17:28.650043   29935 main.go:141] libmachine: (ha-505269) DBG | Writing SSH key tar header
	I0829 19:17:28.650059   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:28.649976   29958 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269 ...
	I0829 19:17:28.650078   29935 main.go:141] libmachine: (ha-505269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269
	I0829 19:17:28.650107   29935 main.go:141] libmachine: (ha-505269) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269 (perms=drwx------)
	I0829 19:17:28.650122   29935 main.go:141] libmachine: (ha-505269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines
	I0829 19:17:28.650133   29935 main.go:141] libmachine: (ha-505269) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines (perms=drwxr-xr-x)
	I0829 19:17:28.650143   29935 main.go:141] libmachine: (ha-505269) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube (perms=drwxr-xr-x)
	I0829 19:17:28.650153   29935 main.go:141] libmachine: (ha-505269) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185 (perms=drwxrwxr-x)
	I0829 19:17:28.650161   29935 main.go:141] libmachine: (ha-505269) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 19:17:28.650172   29935 main.go:141] libmachine: (ha-505269) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 19:17:28.650183   29935 main.go:141] libmachine: (ha-505269) Creating domain...
	I0829 19:17:28.650240   29935 main.go:141] libmachine: (ha-505269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:17:28.650266   29935 main.go:141] libmachine: (ha-505269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185
	I0829 19:17:28.650283   29935 main.go:141] libmachine: (ha-505269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 19:17:28.650294   29935 main.go:141] libmachine: (ha-505269) DBG | Checking permissions on dir: /home/jenkins
	I0829 19:17:28.650306   29935 main.go:141] libmachine: (ha-505269) DBG | Checking permissions on dir: /home
	I0829 19:17:28.650316   29935 main.go:141] libmachine: (ha-505269) DBG | Skipping /home - not owner
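
[Reader's note] The "Fixing permissions" walk above adds the owner-execute (search) bit to each ancestor of the machine store it owns, so qemu and ssh can traverse into the directory, and stops at the first directory it does not own (hence "Skipping /home - not owner"). A minimal sketch of that behavior, assuming Linux (it inspects syscall.Stat_t) and not claiming to be common.go's exact logic:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "syscall"
    )

    // fixPermissions walks from dir up toward /, chmod-ing u+x on every
    // directory owned by the current user and stopping at the first one
    // that is not.
    func fixPermissions(dir string) error {
        uid := os.Getuid()
        for d := dir; d != "/"; d = filepath.Dir(d) {
            info, err := os.Stat(d)
            if err != nil {
                return err
            }
            st, ok := info.Sys().(*syscall.Stat_t)
            if !ok || int(st.Uid) != uid {
                fmt.Printf("Skipping %s - not owner\n", d)
                return nil
            }
            if err := os.Chmod(d, info.Mode().Perm()|0o100); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        if err := fixPermissions("/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
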
	I0829 19:17:28.651241   29935 main.go:141] libmachine: (ha-505269) define libvirt domain using xml: 
	I0829 19:17:28.651257   29935 main.go:141] libmachine: (ha-505269) <domain type='kvm'>
	I0829 19:17:28.651277   29935 main.go:141] libmachine: (ha-505269)   <name>ha-505269</name>
	I0829 19:17:28.651292   29935 main.go:141] libmachine: (ha-505269)   <memory unit='MiB'>2200</memory>
	I0829 19:17:28.651313   29935 main.go:141] libmachine: (ha-505269)   <vcpu>2</vcpu>
	I0829 19:17:28.651322   29935 main.go:141] libmachine: (ha-505269)   <features>
	I0829 19:17:28.651328   29935 main.go:141] libmachine: (ha-505269)     <acpi/>
	I0829 19:17:28.651331   29935 main.go:141] libmachine: (ha-505269)     <apic/>
	I0829 19:17:28.651336   29935 main.go:141] libmachine: (ha-505269)     <pae/>
	I0829 19:17:28.651344   29935 main.go:141] libmachine: (ha-505269)     
	I0829 19:17:28.651350   29935 main.go:141] libmachine: (ha-505269)   </features>
	I0829 19:17:28.651354   29935 main.go:141] libmachine: (ha-505269)   <cpu mode='host-passthrough'>
	I0829 19:17:28.651361   29935 main.go:141] libmachine: (ha-505269)   
	I0829 19:17:28.651370   29935 main.go:141] libmachine: (ha-505269)   </cpu>
	I0829 19:17:28.651392   29935 main.go:141] libmachine: (ha-505269)   <os>
	I0829 19:17:28.651414   29935 main.go:141] libmachine: (ha-505269)     <type>hvm</type>
	I0829 19:17:28.651444   29935 main.go:141] libmachine: (ha-505269)     <boot dev='cdrom'/>
	I0829 19:17:28.651464   29935 main.go:141] libmachine: (ha-505269)     <boot dev='hd'/>
	I0829 19:17:28.651478   29935 main.go:141] libmachine: (ha-505269)     <bootmenu enable='no'/>
	I0829 19:17:28.651486   29935 main.go:141] libmachine: (ha-505269)   </os>
	I0829 19:17:28.651497   29935 main.go:141] libmachine: (ha-505269)   <devices>
	I0829 19:17:28.651506   29935 main.go:141] libmachine: (ha-505269)     <disk type='file' device='cdrom'>
	I0829 19:17:28.651519   29935 main.go:141] libmachine: (ha-505269)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/boot2docker.iso'/>
	I0829 19:17:28.651530   29935 main.go:141] libmachine: (ha-505269)       <target dev='hdc' bus='scsi'/>
	I0829 19:17:28.651543   29935 main.go:141] libmachine: (ha-505269)       <readonly/>
	I0829 19:17:28.651556   29935 main.go:141] libmachine: (ha-505269)     </disk>
	I0829 19:17:28.651573   29935 main.go:141] libmachine: (ha-505269)     <disk type='file' device='disk'>
	I0829 19:17:28.651588   29935 main.go:141] libmachine: (ha-505269)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 19:17:28.651605   29935 main.go:141] libmachine: (ha-505269)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/ha-505269.rawdisk'/>
	I0829 19:17:28.651616   29935 main.go:141] libmachine: (ha-505269)       <target dev='hda' bus='virtio'/>
	I0829 19:17:28.651626   29935 main.go:141] libmachine: (ha-505269)     </disk>
	I0829 19:17:28.651637   29935 main.go:141] libmachine: (ha-505269)     <interface type='network'>
	I0829 19:17:28.651649   29935 main.go:141] libmachine: (ha-505269)       <source network='mk-ha-505269'/>
	I0829 19:17:28.651657   29935 main.go:141] libmachine: (ha-505269)       <model type='virtio'/>
	I0829 19:17:28.651669   29935 main.go:141] libmachine: (ha-505269)     </interface>
	I0829 19:17:28.651680   29935 main.go:141] libmachine: (ha-505269)     <interface type='network'>
	I0829 19:17:28.651691   29935 main.go:141] libmachine: (ha-505269)       <source network='default'/>
	I0829 19:17:28.651701   29935 main.go:141] libmachine: (ha-505269)       <model type='virtio'/>
	I0829 19:17:28.651711   29935 main.go:141] libmachine: (ha-505269)     </interface>
	I0829 19:17:28.651722   29935 main.go:141] libmachine: (ha-505269)     <serial type='pty'>
	I0829 19:17:28.651736   29935 main.go:141] libmachine: (ha-505269)       <target port='0'/>
	I0829 19:17:28.651751   29935 main.go:141] libmachine: (ha-505269)     </serial>
	I0829 19:17:28.651762   29935 main.go:141] libmachine: (ha-505269)     <console type='pty'>
	I0829 19:17:28.651773   29935 main.go:141] libmachine: (ha-505269)       <target type='serial' port='0'/>
	I0829 19:17:28.651783   29935 main.go:141] libmachine: (ha-505269)     </console>
	I0829 19:17:28.651791   29935 main.go:141] libmachine: (ha-505269)     <rng model='virtio'>
	I0829 19:17:28.651803   29935 main.go:141] libmachine: (ha-505269)       <backend model='random'>/dev/random</backend>
	I0829 19:17:28.651818   29935 main.go:141] libmachine: (ha-505269)     </rng>
	I0829 19:17:28.651832   29935 main.go:141] libmachine: (ha-505269)     
	I0829 19:17:28.651848   29935 main.go:141] libmachine: (ha-505269)     
	I0829 19:17:28.651862   29935 main.go:141] libmachine: (ha-505269)   </devices>
	I0829 19:17:28.651877   29935 main.go:141] libmachine: (ha-505269) </domain>
	I0829 19:17:28.651896   29935 main.go:141] libmachine: (ha-505269) 
	I0829 19:17:28.655845   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:85:87:73 in network default
	I0829 19:17:28.656347   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
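
[Reader's note] The two MAC addresses reported above do not appear in the domain XML that was submitted; libvirt generates them when the domain is defined, so the driver re-reads the defined XML ("Getting domain xml..." below) to learn which MAC landed on which network. A sketch of that re-read, assuming virsh dumpxml output and hypothetical struct names:

    package main

    import (
        "encoding/xml"
        "fmt"
        "os/exec"
    )

    // domain maps just the interface fragments of libvirt's domain XML.
    type domain struct {
        Devices struct {
            Interfaces []struct {
                MAC struct {
                    Address string `xml:"address,attr"`
                } `xml:"mac"`
                Source struct {
                    Network string `xml:"network,attr"`
                } `xml:"source"`
            } `xml:"interface"`
        } `xml:"devices"`
    }

    func main() {
        out, err := exec.Command("virsh", "-c", "qemu:///system", "dumpxml", "ha-505269").Output()
        if err != nil {
            panic(err)
        }
        var d domain
        if err := xml.Unmarshal(out, &d); err != nil {
            panic(err)
        }
        for _, iface := range d.Devices.Interfaces {
            fmt.Printf("domain has defined MAC address %s in network %s\n",
                iface.MAC.Address, iface.Source.Network)
        }
    }
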
	I0829 19:17:28.656364   29935 main.go:141] libmachine: (ha-505269) Ensuring networks are active...
	I0829 19:17:28.657024   29935 main.go:141] libmachine: (ha-505269) Ensuring network default is active
	I0829 19:17:28.657477   29935 main.go:141] libmachine: (ha-505269) Ensuring network mk-ha-505269 is active
	I0829 19:17:28.657900   29935 main.go:141] libmachine: (ha-505269) Getting domain xml...
	I0829 19:17:28.658560   29935 main.go:141] libmachine: (ha-505269) Creating domain...
	I0829 19:17:29.824690   29935 main.go:141] libmachine: (ha-505269) Waiting to get IP...
	I0829 19:17:29.825538   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:29.825902   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:29.825944   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:29.825894   29958 retry.go:31] will retry after 209.089865ms: waiting for machine to come up
	I0829 19:17:30.036215   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:30.036692   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:30.036725   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:30.036637   29958 retry.go:31] will retry after 385.664286ms: waiting for machine to come up
	I0829 19:17:30.424200   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:30.424631   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:30.424657   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:30.424594   29958 retry.go:31] will retry after 332.943452ms: waiting for machine to come up
	I0829 19:17:30.759309   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:30.759697   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:30.759749   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:30.759675   29958 retry.go:31] will retry after 551.728849ms: waiting for machine to come up
	I0829 19:17:31.313333   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:31.313786   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:31.313819   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:31.313733   29958 retry.go:31] will retry after 590.108729ms: waiting for machine to come up
	I0829 19:17:31.905369   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:31.905777   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:31.905808   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:31.905700   29958 retry.go:31] will retry after 758.24211ms: waiting for machine to come up
	I0829 19:17:32.665089   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:32.665517   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:32.665537   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:32.665487   29958 retry.go:31] will retry after 1.1487724s: waiting for machine to come up
	I0829 19:17:33.815411   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:33.815895   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:33.815922   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:33.815813   29958 retry.go:31] will retry after 1.369495463s: waiting for machine to come up
	I0829 19:17:35.187412   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:35.187770   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:35.187797   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:35.187722   29958 retry.go:31] will retry after 1.413323486s: waiting for machine to come up
	I0829 19:17:36.602212   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:36.602607   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:36.602630   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:36.602569   29958 retry.go:31] will retry after 1.621601438s: waiting for machine to come up
	I0829 19:17:38.226589   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:38.227022   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:38.227043   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:38.226989   29958 retry.go:31] will retry after 2.51318315s: waiting for machine to come up
	I0829 19:17:40.742522   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:40.742929   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:40.742959   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:40.742879   29958 retry.go:31] will retry after 2.859959482s: waiting for machine to come up
	I0829 19:17:43.604815   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:43.605190   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:43.605218   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:43.605152   29958 retry.go:31] will retry after 3.832874093s: waiting for machine to come up
	I0829 19:17:47.439131   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:47.439478   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:47.439500   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:47.439441   29958 retry.go:31] will retry after 3.719809687s: waiting for machine to come up
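
[Reader's note] The retry.go lines above show the classic jittered-backoff loop: the nominal delay roughly doubles (209ms, 385ms, ..., 3.8s) with randomization, polling until the guest acquires a DHCP lease. A self-contained sketch of that pattern (the delay constants are assumptions inferred from the timestamps, not minikube's exact tuning):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryBackoff calls f until it succeeds or deadline elapses, sleeping
    // a jittered, roughly doubling interval between attempts.
    func retryBackoff(deadline time.Duration, f func() error) error {
        start := time.Now()
        wait := 200 * time.Millisecond
        for {
            err := f()
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("timed out waiting: %w", err)
            }
            d := time.Duration(float64(wait) * (0.5 + rand.Float64())) // jitter in [0.5x, 1.5x)
            fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
            time.Sleep(d)
            wait *= 2
        }
    }

    func main() {
        attempt := 0
        err := retryBackoff(time.Minute, func() error {
            attempt++
            if attempt < 4 { // stand-in for "unable to find current IP address"
                return errors.New("unable to find current IP address")
            }
            return nil
        })
        fmt.Println("machine up:", err == nil)
    }
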
	I0829 19:17:51.162936   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:51.163407   29935 main.go:141] libmachine: (ha-505269) Found IP for machine: 192.168.39.56
	I0829 19:17:51.163420   29935 main.go:141] libmachine: (ha-505269) Reserving static IP address...
	I0829 19:17:51.163429   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has current primary IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:51.163727   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find host DHCP lease matching {name: "ha-505269", mac: "52:54:00:5e:63:25", ip: "192.168.39.56"} in network mk-ha-505269
	I0829 19:17:51.232076   29935 main.go:141] libmachine: (ha-505269) DBG | Getting to WaitForSSH function...
	I0829 19:17:51.232105   29935 main.go:141] libmachine: (ha-505269) Reserved static IP address: 192.168.39.56
	I0829 19:17:51.232124   29935 main.go:141] libmachine: (ha-505269) Waiting for SSH to be available...
	I0829 19:17:51.234364   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:51.234790   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269
	I0829 19:17:51.234814   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find defined IP address of network mk-ha-505269 interface with MAC address 52:54:00:5e:63:25
	I0829 19:17:51.234892   29935 main.go:141] libmachine: (ha-505269) DBG | Using SSH client type: external
	I0829 19:17:51.234913   29935 main.go:141] libmachine: (ha-505269) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa (-rw-------)
	I0829 19:17:51.234981   29935 main.go:141] libmachine: (ha-505269) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:17:51.234995   29935 main.go:141] libmachine: (ha-505269) DBG | About to run SSH command:
	I0829 19:17:51.235005   29935 main.go:141] libmachine: (ha-505269) DBG | exit 0
	I0829 19:17:51.238355   29935 main.go:141] libmachine: (ha-505269) DBG | SSH cmd err, output: exit status 255: 
	I0829 19:17:51.238377   29935 main.go:141] libmachine: (ha-505269) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0829 19:17:51.238388   29935 main.go:141] libmachine: (ha-505269) DBG | command : exit 0
	I0829 19:17:51.238402   29935 main.go:141] libmachine: (ha-505269) DBG | err     : exit status 255
	I0829 19:17:51.238422   29935 main.go:141] libmachine: (ha-505269) DBG | output  : 
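
[Reader's note] The first SSH probe above fails with exit status 255 within milliseconds: it fired before the DHCP lease was visible, so the target in the DBG line is a bare "docker@" with no host. The loop then re-resolves the lease and retries until "exit 0" succeeds. A hand-run equivalent of that probe, with the client options copied from the DBG line (the 3s retry interval matches the timestamps, but is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs `exit 0` on the guest; a nil error (exit status 0)
    // means sshd is up and the key is accepted.
    func sshReady(ip, key string) bool {
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", key,
            "-p", "22",
            "docker@"+ip,
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        key := "/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa"
        for !sshReady("192.168.39.56", key) {
            time.Sleep(3 * time.Second)
        }
        fmt.Println("SSH available")
    }
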
	I0829 19:17:54.239544   29935 main.go:141] libmachine: (ha-505269) DBG | Getting to WaitForSSH function...
	I0829 19:17:54.241704   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.242064   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.242092   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.242190   29935 main.go:141] libmachine: (ha-505269) DBG | Using SSH client type: external
	I0829 19:17:54.242214   29935 main.go:141] libmachine: (ha-505269) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa (-rw-------)
	I0829 19:17:54.242252   29935 main.go:141] libmachine: (ha-505269) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:17:54.242265   29935 main.go:141] libmachine: (ha-505269) DBG | About to run SSH command:
	I0829 19:17:54.242276   29935 main.go:141] libmachine: (ha-505269) DBG | exit 0
	I0829 19:17:54.370601   29935 main.go:141] libmachine: (ha-505269) DBG | SSH cmd err, output: <nil>: 
	I0829 19:17:54.370834   29935 main.go:141] libmachine: (ha-505269) KVM machine creation complete!
	I0829 19:17:54.371212   29935 main.go:141] libmachine: (ha-505269) Calling .GetConfigRaw
	I0829 19:17:54.371764   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:54.371959   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:54.372177   29935 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 19:17:54.372192   29935 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:17:54.373369   29935 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 19:17:54.373391   29935 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 19:17:54.373399   29935 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 19:17:54.373410   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:54.375532   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.375839   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.375877   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.375995   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:54.376146   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.376281   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.376384   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:54.376588   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:17:54.376779   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:17:54.376792   29935 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 19:17:54.489891   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:17:54.489914   29935 main.go:141] libmachine: Detecting the provisioner...
	I0829 19:17:54.489922   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:54.492480   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.492755   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.492781   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.492925   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:54.493108   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.493265   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.493413   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:54.493580   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:17:54.493767   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:17:54.493778   29935 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 19:17:54.607475   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 19:17:54.607587   29935 main.go:141] libmachine: found compatible host: buildroot
	I0829 19:17:54.607601   29935 main.go:141] libmachine: Provisioning with buildroot...
	I0829 19:17:54.607612   29935 main.go:141] libmachine: (ha-505269) Calling .GetMachineName
	I0829 19:17:54.607844   29935 buildroot.go:166] provisioning hostname "ha-505269"
	I0829 19:17:54.607866   29935 main.go:141] libmachine: (ha-505269) Calling .GetMachineName
	I0829 19:17:54.608055   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:54.610330   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.610666   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.610687   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.610815   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:54.610967   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.611127   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.611243   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:54.611365   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:17:54.611529   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:17:54.611540   29935 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-505269 && echo "ha-505269" | sudo tee /etc/hostname
	I0829 19:17:54.736836   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-505269
	
	I0829 19:17:54.736866   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:54.739230   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.739526   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.739570   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.739697   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:54.739878   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.740044   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.740185   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:54.740324   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:17:54.740515   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:17:54.740532   29935 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-505269' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-505269/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-505269' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:17:54.859355   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:17:54.859408   29935 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 19:17:54.859433   29935 buildroot.go:174] setting up certificates
	I0829 19:17:54.859442   29935 provision.go:84] configureAuth start
	I0829 19:17:54.859451   29935 main.go:141] libmachine: (ha-505269) Calling .GetMachineName
	I0829 19:17:54.859732   29935 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:17:54.862498   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.862854   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.862876   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.863028   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:54.865121   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.865469   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.865494   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.865593   29935 provision.go:143] copyHostCerts
	I0829 19:17:54.865624   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:17:54.865658   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 19:17:54.865674   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:17:54.865751   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 19:17:54.865847   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:17:54.865866   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 19:17:54.865873   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:17:54.865898   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 19:17:54.865955   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:17:54.865977   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 19:17:54.865983   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:17:54.866005   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 19:17:54.866056   29935 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.ha-505269 san=[127.0.0.1 192.168.39.56 ha-505269 localhost minikube]
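
[Reader's note] The line above generates a server certificate signed by the local minikube CA, embedding the listed SANs (127.0.0.1, 192.168.39.56, ha-505269, localhost, minikube). A sketch of that step with crypto/x509, under the assumptions that the CA key is PKCS#1 RSA and that error handling may be collapsed into a must() helper; this mirrors the intent of provision.go, not its exact code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        caPEM := must(os.ReadFile("/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem"))
        keyPEM := must(os.ReadFile("/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem"))
        caBlock, _ := pem.Decode(caPEM)
        keyBlock, _ := pem.Decode(keyPEM)
        caCert := must(x509.ParseCertificate(caBlock.Bytes))
        caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes)) // assumes an RSA PKCS#1 CA key

        serverKey := must(rsa.GenerateKey(rand.Reader, 2048))
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-505269"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.56")},
            DNSNames:    []string{"ha-505269", "localhost", "minikube"},
        }
        der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
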
	I0829 19:17:54.994896   29935 provision.go:177] copyRemoteCerts
	I0829 19:17:54.994948   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:17:54.994969   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:54.997280   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.997563   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.997581   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.997741   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:54.997908   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.998043   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:54.998144   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:17:55.084371   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 19:17:55.084440   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 19:17:55.108256   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 19:17:55.108346   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0829 19:17:55.132778   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 19:17:55.132866   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:17:55.156154   29935 provision.go:87] duration metric: took 296.700657ms to configureAuth
	I0829 19:17:55.156184   29935 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:17:55.156382   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:17:55.156496   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:55.158891   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.159239   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.159266   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.159388   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:55.159543   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:55.159709   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:55.159825   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:55.159969   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:17:55.160113   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:17:55.160129   29935 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:17:55.381545   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:17:55.381571   29935 main.go:141] libmachine: Checking connection to Docker...
	I0829 19:17:55.381578   29935 main.go:141] libmachine: (ha-505269) Calling .GetURL
	I0829 19:17:55.382693   29935 main.go:141] libmachine: (ha-505269) DBG | Using libvirt version 6000000
	I0829 19:17:55.384881   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.385204   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.385229   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.385420   29935 main.go:141] libmachine: Docker is up and running!
	I0829 19:17:55.385431   29935 main.go:141] libmachine: Reticulating splines...
	I0829 19:17:55.385436   29935 client.go:171] duration metric: took 27.340514063s to LocalClient.Create
	I0829 19:17:55.385457   29935 start.go:167] duration metric: took 27.340568977s to libmachine.API.Create "ha-505269"
	I0829 19:17:55.385470   29935 start.go:293] postStartSetup for "ha-505269" (driver="kvm2")
	I0829 19:17:55.385482   29935 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:17:55.385497   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:55.385708   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:17:55.385730   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:55.387943   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.388294   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.388322   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.388450   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:55.388625   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:55.388773   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:55.388901   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:17:55.472984   29935 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:17:55.477127   29935 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:17:55.477153   29935 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 19:17:55.477212   29935 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 19:17:55.477301   29935 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 19:17:55.477314   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /etc/ssl/certs/183612.pem
	I0829 19:17:55.477442   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:17:55.486988   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:17:55.510633   29935 start.go:296] duration metric: took 125.149206ms for postStartSetup
	I0829 19:17:55.510691   29935 main.go:141] libmachine: (ha-505269) Calling .GetConfigRaw
	I0829 19:17:55.511257   29935 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:17:55.513588   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.513891   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.513924   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.514170   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:17:55.514363   29935 start.go:128] duration metric: took 27.487658045s to createHost
	I0829 19:17:55.514392   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:55.516340   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.516606   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.516628   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.516776   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:55.516934   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:55.517088   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:55.517212   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:55.517359   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:17:55.517516   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:17:55.517525   29935 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:17:55.631228   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724959075.612760705
	
	I0829 19:17:55.631253   29935 fix.go:216] guest clock: 1724959075.612760705
	I0829 19:17:55.631263   29935 fix.go:229] Guest: 2024-08-29 19:17:55.612760705 +0000 UTC Remote: 2024-08-29 19:17:55.514381393 +0000 UTC m=+27.588833608 (delta=98.379312ms)
	I0829 19:17:55.631284   29935 fix.go:200] guest clock delta is within tolerance: 98.379312ms
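
[Reader's note] fix.go reads the guest's clock over SSH (`date +%s.%N`), compares it to the host's, and accepts the ~98ms delta here as within tolerance. A sketch of that check; the one-second tolerance is an assumption for illustration, and float parsing loses nanosecond precision, which is fine at this scale:

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta returns host time minus guest time.
    func guestClockDelta(target, key string) (time.Duration, error) {
        out, err := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no",
            target, "date", "+%s.%N").Output()
        if err != nil {
            return 0, err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*1e9))
        return time.Since(guest), nil // negative if the guest runs ahead
    }

    func main() {
        d, err := guestClockDelta("docker@192.168.39.56",
            "/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa")
        if err != nil {
            panic(err)
        }
        const tolerance = time.Second // assumed bound
        if d < -tolerance || d > tolerance {
            fmt.Printf("guest clock delta %v exceeds tolerance\n", d)
        } else {
            fmt.Printf("guest clock delta is within tolerance: %v\n", d)
        }
    }
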
	I0829 19:17:55.631294   29935 start.go:83] releasing machines lock for "ha-505269", held for 27.604662263s
	I0829 19:17:55.631312   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:55.631547   29935 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:17:55.634015   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.634362   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.634391   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.634548   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:55.635006   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:55.635169   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:55.635232   29935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:17:55.635270   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:55.635375   29935 ssh_runner.go:195] Run: cat /version.json
	I0829 19:17:55.635389   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:55.637698   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.638012   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.638046   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.638078   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.638187   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:55.638343   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:55.638466   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:55.638496   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.638521   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.638612   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:17:55.638667   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:55.638828   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:55.638982   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:55.639103   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:17:55.739321   29935 ssh_runner.go:195] Run: systemctl --version
	I0829 19:17:55.745202   29935 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:17:55.906765   29935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:17:55.912628   29935 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:17:55.912700   29935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:17:55.932034   29935 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
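Any pre-existing bridge/podman CNI configs are neutralized by renaming them with a .mk_disabled suffix, so they cannot conflict with the kindnet CNI deployed later in this run. Re-enabling one is just the reverse rename (sketch, using the file named above):

    sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
            /etc/cni/net.d/87-podman-bridge.conflist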
	I0829 19:17:55.932058   29935 start.go:495] detecting cgroup driver to use...
	I0829 19:17:55.932112   29935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:17:55.950478   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:17:55.964970   29935 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:17:55.965046   29935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:17:55.978970   29935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:17:55.992754   29935 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:17:56.109240   29935 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:17:56.268373   29935 docker.go:233] disabling docker service ...
	I0829 19:17:56.268442   29935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:17:56.282829   29935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:17:56.295586   29935 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:17:56.424098   29935 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:17:56.547017   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
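With containerd, cri-docker, and docker stopped and masked, the runner next pins crictl to the CRI-O socket by writing /etc/crictl.yaml (the tee command below). Once that file is in place, later crictl calls in this log need no --runtime-endpoint flag; a quick check (sketch):

    sudo cat /etc/crictl.yaml    # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl info             # should reach CRI-O directly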
	I0829 19:17:56.560904   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:17:56.579199   29935 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:17:56.579264   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:17:56.589967   29935 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:17:56.590032   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:17:56.600618   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:17:56.611011   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:17:56.621958   29935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:17:56.632864   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:17:56.643306   29935 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:17:56.660236   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
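The sed series above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup driver, conmon cgroup, and an unprivileged-port sysctl. One way to verify the result (sketch; exact whitespace may differ):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",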
	I0829 19:17:56.670897   29935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:17:56.680531   29935 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:17:56.680589   29935 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:17:56.694018   29935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
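The sysctl probe fails harmlessly until br_netfilter is loaded; after the modprobe, bridged traffic becomes visible to iptables, and the echo enables IPv4 forwarding, both prerequisites for kube-proxy. Verification sketch:

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # typically 1 once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above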
	I0829 19:17:56.703828   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:17:56.825049   29935 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:17:56.917042   29935 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:17:56.917122   29935 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:17:56.921912   29935 start.go:563] Will wait 60s for crictl version
	I0829 19:17:56.921962   29935 ssh_runner.go:195] Run: which crictl
	I0829 19:17:56.925614   29935 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:17:56.964597   29935 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:17:56.964676   29935 ssh_runner.go:195] Run: crio --version
	I0829 19:17:56.992131   29935 ssh_runner.go:195] Run: crio --version
	I0829 19:17:57.023140   29935 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:17:57.024648   29935 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:17:57.027385   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:57.027744   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:57.027772   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:57.027913   29935 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:17:57.032162   29935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:17:57.046050   29935 kubeadm.go:883] updating cluster {Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:17:57.046269   29935 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:17:57.046515   29935 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:17:57.078217   29935 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:17:57.078284   29935 ssh_runner.go:195] Run: which lz4
	I0829 19:17:57.082210   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0829 19:17:57.082290   29935 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:17:57.086414   29935 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:17:57.086436   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 19:17:58.424209   29935 crio.go:462] duration metric: took 1.341938036s to copy over tarball
	I0829 19:17:58.424290   29935 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:18:00.437807   29935 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.013487447s)
	I0829 19:18:00.437843   29935 crio.go:469] duration metric: took 2.013606568s to extract the tarball
	I0829 19:18:00.437852   29935 ssh_runner.go:146] rm: /preloaded.tar.lz4
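The preload path avoids pulling each control-plane image individually: the host scp's a ~389 MB lz4 tarball into the VM, unpacks it over /var, then deletes it. The manual equivalent is the same tar invocation the log ran:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images    # the v1.31.0 control-plane images should now be listed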
	I0829 19:18:00.474664   29935 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:18:00.521002   29935 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:18:00.521024   29935 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:18:00.521031   29935 kubeadm.go:934] updating node { 192.168.39.56 8443 v1.31.0 crio true true} ...
	I0829 19:18:00.521160   29935 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-505269 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
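This systemd drop-in clears and overrides ExecStart so kubelet runs from the versioned minikube binary with the node IP and hostname pinned (the empty ExecStart= line is how systemd resets an existing command before replacing it). To inspect the merged unit on the node (sketch):

    systemctl cat kubelet    # base unit plus the 10-kubeadm.conf drop-in written below
    sudo systemctl daemon-reload && sudo systemctl restart kubelet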
	I0829 19:18:00.521257   29935 ssh_runner.go:195] Run: crio config
	I0829 19:18:00.565829   29935 cni.go:84] Creating CNI manager for ""
	I0829 19:18:00.565849   29935 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0829 19:18:00.565864   29935 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:18:00.565894   29935 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.56 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-505269 NodeName:ha-505269 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:18:00.566069   29935 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-505269"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
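This kubeadm config ties together the pieces configured earlier: cgroupfs as the cgroup driver, the CRI-O socket as the runtime endpoint, and the 10.244.0.0/16 pod CIDR that kindnet will serve. Assuming a kubeadm new enough to ship the validate subcommand (v1.31 does), the file can be sanity-checked on the node before init:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml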
	
	I0829 19:18:00.566098   29935 kube-vip.go:115] generating kube-vip config ...
	I0829 19:18:00.566150   29935 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 19:18:00.584228   29935 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 19:18:00.584340   29935 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
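kube-vip runs as a static pod with NET_ADMIN/NET_RAW, holds a lease (plndr-cp-lock) for leader election, and ARPs the HA VIP 192.168.39.254 on eth0 while load-balancing port 8443 across control planes. Once kubeadm init (later in this log) brings the control plane up and a leader is elected, the VIP should be reachable (sketch; even a 403 from healthz proves the VIP routes):

    ip addr show eth0 | grep 192.168.39.254      # VIP attached on the current leader
    curl -k https://192.168.39.254:8443/healthz  # apiserver via the VIP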
	I0829 19:18:00.584392   29935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:18:00.594198   29935 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:18:00.594248   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0829 19:18:00.603461   29935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0829 19:18:00.619940   29935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:18:00.638970   29935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0829 19:18:00.655996   29935 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0829 19:18:00.672958   29935 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 19:18:00.677018   29935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
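The /etc/hosts update above is idempotent: grep -v strips any stale entry for the name, echo appends the fresh mapping, and the temp-file-plus-sudo-cp dance is needed because a plain output redirect would run with the unprivileged shell's permissions. The same pattern, generalized (hypothetical host/IP):

    { grep -v $'\texample.internal$' /etc/hosts; echo $'10.0.0.1\texample.internal'; } > /tmp/h.$$ \
      && sudo cp /tmp/h.$$ /etc/hosts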
	I0829 19:18:00.688883   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:18:00.800177   29935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:18:00.816756   29935 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269 for IP: 192.168.39.56
	I0829 19:18:00.816778   29935 certs.go:194] generating shared ca certs ...
	I0829 19:18:00.816791   29935 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:00.816957   29935 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 19:18:00.817019   29935 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 19:18:00.817033   29935 certs.go:256] generating profile certs ...
	I0829 19:18:00.817083   29935 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key
	I0829 19:18:00.817110   29935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.crt with IP's: []
	I0829 19:18:00.940108   29935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.crt ...
	I0829 19:18:00.940131   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.crt: {Name:mk431a4ed0d72f13a92734082de436c232306a7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:00.940319   29935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key ...
	I0829 19:18:00.940335   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key: {Name:mk4d31a534edf74fc14738154db3aebf4d68236c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:00.940435   29935 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.ca1201dc
	I0829 19:18:00.940455   29935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.ca1201dc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.56 192.168.39.254]
	I0829 19:18:01.119912   29935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.ca1201dc ...
	I0829 19:18:01.119941   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.ca1201dc: {Name:mk6b841224bb564430ad1d214971521b1b1d96df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:01.120116   29935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.ca1201dc ...
	I0829 19:18:01.120131   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.ca1201dc: {Name:mkaef83e7776af706be997c5d3daca14b348913a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:01.120228   29935 certs.go:381] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.ca1201dc -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt
	I0829 19:18:01.120345   29935 certs.go:385] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.ca1201dc -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key
	I0829 19:18:01.120427   29935 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key
	I0829 19:18:01.120446   29935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt with IP's: []
	I0829 19:18:01.262965   29935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt ...
	I0829 19:18:01.262993   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt: {Name:mk3ba36e71f82511845306cdb8499effc15a4084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:01.263171   29935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key ...
	I0829 19:18:01.263188   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key: {Name:mk57598d6159b95fb72f606eb5dac76361e83839 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
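Three profile cert pairs are minted here: a client cert for minikube-user, the apiserver serving cert (its SAN list above covers the service IP 10.96.0.1, localhost, the node IP 192.168.39.56, and the kube-vip VIP 192.168.39.254), and the front-proxy aggregator pair. The SANs can be checked directly (sketch):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt \
      | grep -A1 'Subject Alternative Name'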
	I0829 19:18:01.263284   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 19:18:01.263305   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 19:18:01.263319   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 19:18:01.263338   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 19:18:01.263357   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 19:18:01.263376   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 19:18:01.263397   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 19:18:01.263410   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 19:18:01.263463   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 19:18:01.263516   29935 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 19:18:01.263528   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 19:18:01.263561   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 19:18:01.263601   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:18:01.263633   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 19:18:01.263686   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:18:01.263729   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:01.263749   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem -> /usr/share/ca-certificates/18361.pem
	I0829 19:18:01.263767   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /usr/share/ca-certificates/183612.pem
	I0829 19:18:01.264343   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:18:01.291491   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 19:18:01.317520   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:18:01.343084   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:18:01.368653   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 19:18:01.394069   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 19:18:01.419825   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:18:01.445128   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:18:01.471123   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:18:01.496657   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 19:18:01.522468   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 19:18:01.547552   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:18:01.564516   29935 ssh_runner.go:195] Run: openssl version
	I0829 19:18:01.570510   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:18:01.584760   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:01.589572   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:01.589643   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:01.597434   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:18:01.609175   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 19:18:01.626990   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 19:18:01.632048   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 19:18:01.632091   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 19:18:01.638820   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 19:18:01.655777   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 19:18:01.666565   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 19:18:01.671251   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 19:18:01.671304   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 19:18:01.677127   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
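Each ln -fs above creates the hash-named symlink OpenSSL uses to look up CAs: the link name is the 8-hex-digit subject hash with a .0 suffix, which is exactly what openssl x509 -hash prints. For the minikube CA installed above:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem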
	I0829 19:18:01.689086   29935 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:18:01.693262   29935 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 19:18:01.693317   29935 kubeadm.go:392] StartCluster: {Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:18:01.693393   29935 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:18:01.693474   29935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:18:01.731104   29935 cri.go:89] found id: ""
	I0829 19:18:01.731174   29935 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:18:01.741306   29935 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:18:01.750983   29935 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:18:01.761155   29935 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:18:01.761178   29935 kubeadm.go:157] found existing configuration files:
	
	I0829 19:18:01.761228   29935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:18:01.770354   29935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:18:01.770436   29935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:18:01.780505   29935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:18:01.790109   29935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:18:01.790170   29935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:18:01.799982   29935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:18:01.809003   29935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:18:01.809075   29935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:18:01.818607   29935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:18:01.827641   29935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:18:01.827708   29935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:18:01.837298   29935 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:18:01.951203   29935 kubeadm.go:310] W0829 19:18:01.933236     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:18:01.951535   29935 kubeadm.go:310] W0829 19:18:01.934313     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:18:02.053114   29935 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:18:15.018731   29935 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:18:15.018799   29935 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:18:15.018918   29935 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:18:15.019063   29935 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:18:15.019206   29935 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:18:15.019291   29935 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:18:15.020690   29935 out.go:235]   - Generating certificates and keys ...
	I0829 19:18:15.020788   29935 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:18:15.020875   29935 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:18:15.020977   29935 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 19:18:15.021081   29935 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 19:18:15.021175   29935 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 19:18:15.021246   29935 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 19:18:15.021318   29935 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 19:18:15.021466   29935 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-505269 localhost] and IPs [192.168.39.56 127.0.0.1 ::1]
	I0829 19:18:15.021526   29935 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 19:18:15.021621   29935 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-505269 localhost] and IPs [192.168.39.56 127.0.0.1 ::1]
	I0829 19:18:15.021682   29935 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 19:18:15.021735   29935 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 19:18:15.021773   29935 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 19:18:15.021822   29935 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:18:15.021874   29935 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:18:15.021932   29935 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:18:15.022004   29935 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:18:15.022069   29935 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:18:15.022116   29935 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:18:15.022183   29935 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:18:15.022238   29935 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:18:15.023529   29935 out.go:235]   - Booting up control plane ...
	I0829 19:18:15.023609   29935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:18:15.023674   29935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:18:15.023730   29935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:18:15.023817   29935 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:18:15.023909   29935 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:18:15.023969   29935 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:18:15.024091   29935 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:18:15.024185   29935 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:18:15.024244   29935 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.12668ms
	I0829 19:18:15.024313   29935 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:18:15.024365   29935 kubeadm.go:310] [api-check] The API server is healthy after 9.064092806s
	I0829 19:18:15.024457   29935 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:18:15.024567   29935 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:18:15.024624   29935 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:18:15.024819   29935 kubeadm.go:310] [mark-control-plane] Marking the node ha-505269 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:18:15.024879   29935 kubeadm.go:310] [bootstrap-token] Using token: dngxmm.tc10434umf6x6rzl
	I0829 19:18:15.026265   29935 out.go:235]   - Configuring RBAC rules ...
	I0829 19:18:15.026367   29935 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:18:15.026447   29935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:18:15.026588   29935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:18:15.026704   29935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:18:15.026802   29935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:18:15.026877   29935 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:18:15.026985   29935 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:18:15.027022   29935 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:18:15.027061   29935 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:18:15.027066   29935 kubeadm.go:310] 
	I0829 19:18:15.027115   29935 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:18:15.027121   29935 kubeadm.go:310] 
	I0829 19:18:15.027189   29935 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:18:15.027195   29935 kubeadm.go:310] 
	I0829 19:18:15.027219   29935 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:18:15.027271   29935 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:18:15.027316   29935 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:18:15.027322   29935 kubeadm.go:310] 
	I0829 19:18:15.027371   29935 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:18:15.027377   29935 kubeadm.go:310] 
	I0829 19:18:15.027419   29935 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:18:15.027426   29935 kubeadm.go:310] 
	I0829 19:18:15.027468   29935 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:18:15.027544   29935 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:18:15.027617   29935 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:18:15.027627   29935 kubeadm.go:310] 
	I0829 19:18:15.027697   29935 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:18:15.027760   29935 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:18:15.027765   29935 kubeadm.go:310] 
	I0829 19:18:15.027831   29935 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dngxmm.tc10434umf6x6rzl \
	I0829 19:18:15.027919   29935 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 19:18:15.027939   29935 kubeadm.go:310] 	--control-plane 
	I0829 19:18:15.027942   29935 kubeadm.go:310] 
	I0829 19:18:15.028024   29935 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:18:15.028036   29935 kubeadm.go:310] 
	I0829 19:18:15.028170   29935 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dngxmm.tc10434umf6x6rzl \
	I0829 19:18:15.028269   29935 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
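The join commands embed a CA-pinning hash so new nodes can authenticate the cluster before trusting it. It can be recomputed on the control plane at any time with the standard kubeadm recipe (sketch; note minikube keeps its certs under /var/lib/minikube/certs rather than /etc/kubernetes/pki):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'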
	I0829 19:18:15.028284   29935 cni.go:84] Creating CNI manager for ""
	I0829 19:18:15.028291   29935 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0829 19:18:15.029603   29935 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0829 19:18:15.030662   29935 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0829 19:18:15.036448   29935 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0829 19:18:15.036462   29935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0829 19:18:15.059690   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0829 19:18:15.467740   29935 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:18:15.467808   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:18:15.467809   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-505269 minikube.k8s.io/updated_at=2024_08_29T19_18_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=ha-505269 minikube.k8s.io/primary=true
	I0829 19:18:15.607863   29935 ops.go:34] apiserver oom_adj: -16
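The -16 oom_adj recorded here means the kernel's OOM killer will sacrifice almost any other process before the apiserver. It can be read back with the same probe the log used:

    cat /proc/$(pgrep kube-apiserver)/oom_adj   # -16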
	I0829 19:18:15.645301   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:18:16.146183   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:18:16.239962   29935 kubeadm.go:1113] duration metric: took 772.210554ms to wait for elevateKubeSystemPrivileges
	I0829 19:18:16.239995   29935 kubeadm.go:394] duration metric: took 14.546680574s to StartCluster
	I0829 19:18:16.240016   29935 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:16.240086   29935 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:18:16.240743   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:16.240974   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 19:18:16.240981   29935 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:18:16.241001   29935 start.go:241] waiting for startup goroutines ...
	I0829 19:18:16.241009   29935 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:18:16.241067   29935 addons.go:69] Setting storage-provisioner=true in profile "ha-505269"
	I0829 19:18:16.241083   29935 addons.go:69] Setting default-storageclass=true in profile "ha-505269"
	I0829 19:18:16.241140   29935 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-505269"
	I0829 19:18:16.241094   29935 addons.go:234] Setting addon storage-provisioner=true in "ha-505269"
	I0829 19:18:16.241210   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:18:16.241220   29935 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:18:16.241511   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:16.241540   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:16.241622   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:16.241660   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:16.256286   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34699
	I0829 19:18:16.256706   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:16.257236   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:16.257257   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:16.257551   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:16.257779   29935 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:18:16.259921   29935 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:18:16.260229   29935 kapi.go:59] client config for ha-505269: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.crt", KeyFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key", CAFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
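
kapi.go builds this client config from the kubeconfig written a moment earlier; note the Host is the HA virtual IP 192.168.39.254, not the node IP. An equivalent rest.Config can be produced directly with client-go's clientcmd (standard API, shown here as a sketch rather than minikube's own helper):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a *rest.Config equivalent to the one dumped above, straight
    	// from the kubeconfig minikube just wrote. Host, client cert/key and
    	// CA paths all come from that file.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19530-11185/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(cfg.Host) // e.g. https://192.168.39.254:8443
    }
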
	I0829 19:18:16.260393   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43955
	I0829 19:18:16.260745   29935 cert_rotation.go:140] Starting client certificate rotation controller
	I0829 19:18:16.260778   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:16.261032   29935 addons.go:234] Setting addon default-storageclass=true in "ha-505269"
	I0829 19:18:16.261071   29935 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:18:16.261293   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:16.261317   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:16.261450   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:16.261478   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:16.261638   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:16.262107   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:16.262128   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:16.275995   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40035
	I0829 19:18:16.276075   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I0829 19:18:16.276413   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:16.276528   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:16.276895   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:16.276911   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:16.277018   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:16.277041   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:16.277280   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:16.277322   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:16.277453   29935 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:18:16.277871   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:16.277910   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:16.278885   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:18:16.280814   29935 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:18:16.282027   29935 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:18:16.282048   29935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:18:16.282065   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:18:16.284679   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:16.285076   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:18:16.285105   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:16.285270   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:18:16.285440   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:18:16.285592   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:18:16.285742   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
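
"scp memory" above means the 2676-byte manifest is shipped from an asset embedded in the minikube binary, not from a file on disk. A sketch of one way to stream in-memory bytes to a remote path over the same SSH credentials (illustrative only: minikube's ssh_runner.go speaks the SCP protocol rather than piping through tee):

    package main

    import (
    	"bytes"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // pushBytes writes data to remotePath by piping it into `sudo tee` over SSH.
    func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	return sess.Run("sudo tee " + remotePath + " >/dev/null")
    }

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.56:22", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; mirrors StrictHostKeyChecking=no used elsewhere in this log
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	manifest := []byte("# storage-provisioner.yaml contents (2676 bytes in the run above)\n")
    	if err := pushBytes(client, manifest, "/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
    		panic(err)
    	}
    }
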
	I0829 19:18:16.293679   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39469
	I0829 19:18:16.294171   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:16.294714   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:16.294735   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:16.295020   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:16.295185   29935 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:18:16.296542   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:18:16.296752   29935 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:18:16.296769   29935 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:18:16.296785   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:18:16.299603   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:16.299974   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:18:16.299999   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:16.300137   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:18:16.300302   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:18:16.300471   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:18:16.300597   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:18:16.365131   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 19:18:16.401954   29935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:18:16.457570   29935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:18:16.847271   29935 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
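
The sed pipeline above injects two things into the coredns ConfigMap before replacing it: a "log" directive ahead of "errors", and a hosts block ahead of the forward plugin, so pods can resolve host.minikube.internal to the host-side gateway 192.168.39.1. After the replace, the relevant part of the Corefile reads (remaining plugins elided):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
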
	I0829 19:18:17.135374   29935 main.go:141] libmachine: Making call to close driver server
	I0829 19:18:17.135397   29935 main.go:141] libmachine: (ha-505269) Calling .Close
	I0829 19:18:17.135420   29935 main.go:141] libmachine: Making call to close driver server
	I0829 19:18:17.135441   29935 main.go:141] libmachine: (ha-505269) Calling .Close
	I0829 19:18:17.135692   29935 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:18:17.135704   29935 main.go:141] libmachine: (ha-505269) DBG | Closing plugin on server side
	I0829 19:18:17.135708   29935 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:18:17.135719   29935 main.go:141] libmachine: Making call to close driver server
	I0829 19:18:17.135725   29935 main.go:141] libmachine: (ha-505269) Calling .Close
	I0829 19:18:17.135741   29935 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:18:17.135750   29935 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:18:17.135761   29935 main.go:141] libmachine: Making call to close driver server
	I0829 19:18:17.135769   29935 main.go:141] libmachine: (ha-505269) Calling .Close
	I0829 19:18:17.136026   29935 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:18:17.136039   29935 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:18:17.136069   29935 main.go:141] libmachine: (ha-505269) DBG | Closing plugin on server side
	I0829 19:18:17.136108   29935 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:18:17.136115   29935 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:18:17.136176   29935 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0829 19:18:17.136195   29935 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0829 19:18:17.136295   29935 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0829 19:18:17.136307   29935 round_trippers.go:469] Request Headers:
	I0829 19:18:17.136319   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:18:17.136327   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:18:17.149354   29935 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0829 19:18:17.150047   29935 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0829 19:18:17.150060   29935 round_trippers.go:469] Request Headers:
	I0829 19:18:17.150067   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:18:17.150070   29935 round_trippers.go:473]     Content-Type: application/json
	I0829 19:18:17.150075   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:18:17.154072   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
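
The GET lists the storage classes and the PUT rewrites "standard"; for the default-storageclass addon this typically means toggling the storageclass.kubernetes.io/is-default-class annotation (an assumption about the exact field being changed; the request pair itself is as logged). The same round-trip with client-go:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19530-11185/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	// GET /apis/storage.k8s.io/v1/storageclasses
    	scs, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for i := range scs.Items {
    		sc := &scs.Items[i]
    		if sc.Name != "standard" {
    			continue
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    		// PUT /apis/storage.k8s.io/v1/storageclasses/standard
    		if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
    			panic(err)
    		}
    	}
    }
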
	I0829 19:18:17.154260   29935 main.go:141] libmachine: Making call to close driver server
	I0829 19:18:17.154274   29935 main.go:141] libmachine: (ha-505269) Calling .Close
	I0829 19:18:17.154565   29935 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:18:17.154595   29935 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:18:17.154571   29935 main.go:141] libmachine: (ha-505269) DBG | Closing plugin on server side
	I0829 19:18:17.156214   29935 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0829 19:18:17.157534   29935 addons.go:510] duration metric: took 916.52286ms for enable addons: enabled=[storage-provisioner default-storageclass]
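
storage-provisioner and default-storageclass are the only addons enabled for a fresh profile, as the toEnable map earlier in this block shows. The same toggles are exposed to users through the addons subcommand, e.g.:

    minikube -p ha-505269 addons list
    minikube -p ha-505269 addons enable storage-provisioner
    minikube -p ha-505269 addons enable default-storageclass
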
	I0829 19:18:17.157564   29935 start.go:246] waiting for cluster config update ...
	I0829 19:18:17.157575   29935 start.go:255] writing updated cluster config ...
	I0829 19:18:17.159049   29935 out.go:201] 
	I0829 19:18:17.160300   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:18:17.160364   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:18:17.161879   29935 out.go:177] * Starting "ha-505269-m02" control-plane node in "ha-505269" cluster
	I0829 19:18:17.163085   29935 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:18:17.163108   29935 cache.go:56] Caching tarball of preloaded images
	I0829 19:18:17.163181   29935 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:18:17.163192   29935 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:18:17.163254   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:18:17.163418   29935 start.go:360] acquireMachinesLock for ha-505269-m02: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:18:17.163456   29935 start.go:364] duration metric: took 20.467µs to acquireMachinesLock for "ha-505269-m02"
	I0829 19:18:17.163471   29935 start.go:93] Provisioning new machine with config: &{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:18:17.163543   29935 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0829 19:18:17.165179   29935 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 19:18:17.165244   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:17.165265   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:17.179457   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
	I0829 19:18:17.179807   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:17.180265   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:17.180281   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:17.180573   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:17.180779   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetMachineName
	I0829 19:18:17.180943   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:17.181129   29935 start.go:159] libmachine.API.Create for "ha-505269" (driver="kvm2")
	I0829 19:18:17.181188   29935 client.go:168] LocalClient.Create starting
	I0829 19:18:17.181215   29935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem
	I0829 19:18:17.181243   29935 main.go:141] libmachine: Decoding PEM data...
	I0829 19:18:17.181256   29935 main.go:141] libmachine: Parsing certificate...
	I0829 19:18:17.181308   29935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem
	I0829 19:18:17.181334   29935 main.go:141] libmachine: Decoding PEM data...
	I0829 19:18:17.181355   29935 main.go:141] libmachine: Parsing certificate...
	I0829 19:18:17.181381   29935 main.go:141] libmachine: Running pre-create checks...
	I0829 19:18:17.181392   29935 main.go:141] libmachine: (ha-505269-m02) Calling .PreCreateCheck
	I0829 19:18:17.181557   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetConfigRaw
	I0829 19:18:17.181919   29935 main.go:141] libmachine: Creating machine...
	I0829 19:18:17.181938   29935 main.go:141] libmachine: (ha-505269-m02) Calling .Create
	I0829 19:18:17.182049   29935 main.go:141] libmachine: (ha-505269-m02) Creating KVM machine...
	I0829 19:18:17.183313   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found existing default KVM network
	I0829 19:18:17.183432   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found existing private KVM network mk-ha-505269
	I0829 19:18:17.183562   29935 main.go:141] libmachine: (ha-505269-m02) Setting up store path in /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02 ...
	I0829 19:18:17.183581   29935 main.go:141] libmachine: (ha-505269-m02) Building disk image from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0829 19:18:17.183637   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:17.183533   30766 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:18:17.183731   29935 main.go:141] libmachine: (ha-505269-m02) Downloading /home/jenkins/minikube-integration/19530-11185/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0829 19:18:17.414248   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:17.414122   30766 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa...
	I0829 19:18:17.506783   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:17.506675   30766 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/ha-505269-m02.rawdisk...
	I0829 19:18:17.506820   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Writing magic tar header
	I0829 19:18:17.506860   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Writing SSH key tar header
	I0829 19:18:17.506877   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:17.506807   30766 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02 ...
	I0829 19:18:17.506966   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02
	I0829 19:18:17.507001   29935 main.go:141] libmachine: (ha-505269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02 (perms=drwx------)
	I0829 19:18:17.507018   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines
	I0829 19:18:17.507036   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:18:17.507054   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185
	I0829 19:18:17.507070   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 19:18:17.507088   29935 main.go:141] libmachine: (ha-505269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines (perms=drwxr-xr-x)
	I0829 19:18:17.507100   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Checking permissions on dir: /home/jenkins
	I0829 19:18:17.507114   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Checking permissions on dir: /home
	I0829 19:18:17.507124   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Skipping /home - not owner
	I0829 19:18:17.507137   29935 main.go:141] libmachine: (ha-505269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube (perms=drwxr-xr-x)
	I0829 19:18:17.507146   29935 main.go:141] libmachine: (ha-505269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185 (perms=drwxrwxr-x)
	I0829 19:18:17.507162   29935 main.go:141] libmachine: (ha-505269-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 19:18:17.507173   29935 main.go:141] libmachine: (ha-505269-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 19:18:17.507218   29935 main.go:141] libmachine: (ha-505269-m02) Creating domain...
	I0829 19:18:17.508099   29935 main.go:141] libmachine: (ha-505269-m02) define libvirt domain using xml: 
	I0829 19:18:17.508111   29935 main.go:141] libmachine: (ha-505269-m02) <domain type='kvm'>
	I0829 19:18:17.508118   29935 main.go:141] libmachine: (ha-505269-m02)   <name>ha-505269-m02</name>
	I0829 19:18:17.508122   29935 main.go:141] libmachine: (ha-505269-m02)   <memory unit='MiB'>2200</memory>
	I0829 19:18:17.508128   29935 main.go:141] libmachine: (ha-505269-m02)   <vcpu>2</vcpu>
	I0829 19:18:17.508132   29935 main.go:141] libmachine: (ha-505269-m02)   <features>
	I0829 19:18:17.508137   29935 main.go:141] libmachine: (ha-505269-m02)     <acpi/>
	I0829 19:18:17.508142   29935 main.go:141] libmachine: (ha-505269-m02)     <apic/>
	I0829 19:18:17.508146   29935 main.go:141] libmachine: (ha-505269-m02)     <pae/>
	I0829 19:18:17.508151   29935 main.go:141] libmachine: (ha-505269-m02)     
	I0829 19:18:17.508157   29935 main.go:141] libmachine: (ha-505269-m02)   </features>
	I0829 19:18:17.508161   29935 main.go:141] libmachine: (ha-505269-m02)   <cpu mode='host-passthrough'>
	I0829 19:18:17.508166   29935 main.go:141] libmachine: (ha-505269-m02)   
	I0829 19:18:17.508170   29935 main.go:141] libmachine: (ha-505269-m02)   </cpu>
	I0829 19:18:17.508175   29935 main.go:141] libmachine: (ha-505269-m02)   <os>
	I0829 19:18:17.508183   29935 main.go:141] libmachine: (ha-505269-m02)     <type>hvm</type>
	I0829 19:18:17.508188   29935 main.go:141] libmachine: (ha-505269-m02)     <boot dev='cdrom'/>
	I0829 19:18:17.508201   29935 main.go:141] libmachine: (ha-505269-m02)     <boot dev='hd'/>
	I0829 19:18:17.508211   29935 main.go:141] libmachine: (ha-505269-m02)     <bootmenu enable='no'/>
	I0829 19:18:17.508230   29935 main.go:141] libmachine: (ha-505269-m02)   </os>
	I0829 19:18:17.508239   29935 main.go:141] libmachine: (ha-505269-m02)   <devices>
	I0829 19:18:17.508253   29935 main.go:141] libmachine: (ha-505269-m02)     <disk type='file' device='cdrom'>
	I0829 19:18:17.508270   29935 main.go:141] libmachine: (ha-505269-m02)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/boot2docker.iso'/>
	I0829 19:18:17.508278   29935 main.go:141] libmachine: (ha-505269-m02)       <target dev='hdc' bus='scsi'/>
	I0829 19:18:17.508284   29935 main.go:141] libmachine: (ha-505269-m02)       <readonly/>
	I0829 19:18:17.508291   29935 main.go:141] libmachine: (ha-505269-m02)     </disk>
	I0829 19:18:17.508298   29935 main.go:141] libmachine: (ha-505269-m02)     <disk type='file' device='disk'>
	I0829 19:18:17.508317   29935 main.go:141] libmachine: (ha-505269-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 19:18:17.508345   29935 main.go:141] libmachine: (ha-505269-m02)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/ha-505269-m02.rawdisk'/>
	I0829 19:18:17.508368   29935 main.go:141] libmachine: (ha-505269-m02)       <target dev='hda' bus='virtio'/>
	I0829 19:18:17.508377   29935 main.go:141] libmachine: (ha-505269-m02)     </disk>
	I0829 19:18:17.508394   29935 main.go:141] libmachine: (ha-505269-m02)     <interface type='network'>
	I0829 19:18:17.508408   29935 main.go:141] libmachine: (ha-505269-m02)       <source network='mk-ha-505269'/>
	I0829 19:18:17.508418   29935 main.go:141] libmachine: (ha-505269-m02)       <model type='virtio'/>
	I0829 19:18:17.508429   29935 main.go:141] libmachine: (ha-505269-m02)     </interface>
	I0829 19:18:17.508443   29935 main.go:141] libmachine: (ha-505269-m02)     <interface type='network'>
	I0829 19:18:17.508459   29935 main.go:141] libmachine: (ha-505269-m02)       <source network='default'/>
	I0829 19:18:17.508469   29935 main.go:141] libmachine: (ha-505269-m02)       <model type='virtio'/>
	I0829 19:18:17.508477   29935 main.go:141] libmachine: (ha-505269-m02)     </interface>
	I0829 19:18:17.508488   29935 main.go:141] libmachine: (ha-505269-m02)     <serial type='pty'>
	I0829 19:18:17.508500   29935 main.go:141] libmachine: (ha-505269-m02)       <target port='0'/>
	I0829 19:18:17.508510   29935 main.go:141] libmachine: (ha-505269-m02)     </serial>
	I0829 19:18:17.508574   29935 main.go:141] libmachine: (ha-505269-m02)     <console type='pty'>
	I0829 19:18:17.508610   29935 main.go:141] libmachine: (ha-505269-m02)       <target type='serial' port='0'/>
	I0829 19:18:17.508630   29935 main.go:141] libmachine: (ha-505269-m02)     </console>
	I0829 19:18:17.508636   29935 main.go:141] libmachine: (ha-505269-m02)     <rng model='virtio'>
	I0829 19:18:17.508645   29935 main.go:141] libmachine: (ha-505269-m02)       <backend model='random'>/dev/random</backend>
	I0829 19:18:17.508651   29935 main.go:141] libmachine: (ha-505269-m02)     </rng>
	I0829 19:18:17.508656   29935 main.go:141] libmachine: (ha-505269-m02)     
	I0829 19:18:17.508664   29935 main.go:141] libmachine: (ha-505269-m02)     
	I0829 19:18:17.508669   29935 main.go:141] libmachine: (ha-505269-m02)   </devices>
	I0829 19:18:17.508680   29935 main.go:141] libmachine: (ha-505269-m02) </domain>
	I0829 19:18:17.508689   29935 main.go:141] libmachine: (ha-505269-m02) 
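
Assembled as one document, the libvirt domain defined line-by-line above is the following; note the two virtio NICs, one on libvirt's default NAT network (the 52:54:00:d3:7f:c5 MAC below) and one on the cluster network mk-ha-505269, which carries the 192.168.39.0/24 addresses seen throughout this log:

    <domain type='kvm'>
      <name>ha-505269-m02</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <features>
        <acpi/>
        <apic/>
        <pae/>
      </features>
      <cpu mode='host-passthrough'>
      </cpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
        <bootmenu enable='no'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/boot2docker.iso'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads' />
          <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/ha-505269-m02.rawdisk'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='mk-ha-505269'/>
          <model type='virtio'/>
        </interface>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <rng model='virtio'>
          <backend model='random'>/dev/random</backend>
        </rng>
      </devices>
    </domain>
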
	I0829 19:18:17.515226   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:d3:7f:c5 in network default
	I0829 19:18:17.515807   29935 main.go:141] libmachine: (ha-505269-m02) Ensuring networks are active...
	I0829 19:18:17.515840   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:17.516459   29935 main.go:141] libmachine: (ha-505269-m02) Ensuring network default is active
	I0829 19:18:17.516883   29935 main.go:141] libmachine: (ha-505269-m02) Ensuring network mk-ha-505269 is active
	I0829 19:18:17.517209   29935 main.go:141] libmachine: (ha-505269-m02) Getting domain xml...
	I0829 19:18:17.518082   29935 main.go:141] libmachine: (ha-505269-m02) Creating domain...
	I0829 19:18:18.727292   29935 main.go:141] libmachine: (ha-505269-m02) Waiting to get IP...
	I0829 19:18:18.728041   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:18.728417   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:18.728459   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:18.728405   30766 retry.go:31] will retry after 257.317268ms: waiting for machine to come up
	I0829 19:18:18.986959   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:18.987385   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:18.987411   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:18.987357   30766 retry.go:31] will retry after 254.624589ms: waiting for machine to come up
	I0829 19:18:19.243886   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:19.244406   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:19.244432   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:19.244326   30766 retry.go:31] will retry after 465.137393ms: waiting for machine to come up
	I0829 19:18:19.710980   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:19.711406   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:19.711434   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:19.711357   30766 retry.go:31] will retry after 421.01646ms: waiting for machine to come up
	I0829 19:18:20.133506   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:20.133931   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:20.133954   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:20.133896   30766 retry.go:31] will retry after 665.095868ms: waiting for machine to come up
	I0829 19:18:20.800645   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:20.801073   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:20.801103   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:20.801016   30766 retry.go:31] will retry after 771.303274ms: waiting for machine to come up
	I0829 19:18:21.573835   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:21.574225   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:21.574245   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:21.574188   30766 retry.go:31] will retry after 1.037740689s: waiting for machine to come up
	I0829 19:18:22.613724   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:22.614106   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:22.614128   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:22.614066   30766 retry.go:31] will retry after 1.332280696s: waiting for machine to come up
	I0829 19:18:23.947614   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:23.948022   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:23.948049   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:23.947980   30766 retry.go:31] will retry after 1.862236314s: waiting for machine to come up
	I0829 19:18:25.812946   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:25.813370   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:25.813391   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:25.813327   30766 retry.go:31] will retry after 1.70488661s: waiting for machine to come up
	I0829 19:18:27.520272   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:27.520750   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:27.520777   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:27.520712   30766 retry.go:31] will retry after 1.968849341s: waiting for machine to come up
	I0829 19:18:29.491671   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:29.492113   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:29.492135   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:29.492078   30766 retry.go:31] will retry after 3.419516708s: waiting for machine to come up
	I0829 19:18:32.913606   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:32.914076   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:32.914102   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:32.914032   30766 retry.go:31] will retry after 3.557791272s: waiting for machine to come up
	I0829 19:18:36.475527   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:36.475977   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:36.475995   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:36.475950   30766 retry.go:31] will retry after 5.363647101s: waiting for machine to come up
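
retry.go backs off with growing, jittered waits (257ms, 254ms, 465ms, ... 5.36s above) until the new domain picks up a DHCP lease. A minimal sketch of that wait loop (getLeaseIP is a hypothetical helper standing in for the libvirt lease lookup; this is not minikube's retry package):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    // getLeaseIP stands in for querying the libvirt DHCP leases for the
    // domain's MAC address; hypothetical helper for this sketch.
    func getLeaseIP(mac string) (string, error) { return "", errNoIP }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	backoff := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := getLeaseIP(mac); err == nil {
    			return ip, nil
    		}
    		// jittered, roughly exponential waits, like the log above
    		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		backoff *= 2
    	}
    	return "", fmt.Errorf("timed out waiting for %s", mac)
    }

    func main() {
    	if ip, err := waitForIP("52:54:00:8f:ef:8c", 20*time.Second); err == nil {
    		fmt.Println("Found IP for machine:", ip)
    	}
    }
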
	I0829 19:18:41.844946   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:41.845429   29935 main.go:141] libmachine: (ha-505269-m02) Found IP for machine: 192.168.39.68
	I0829 19:18:41.845445   29935 main.go:141] libmachine: (ha-505269-m02) Reserving static IP address...
	I0829 19:18:41.845454   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has current primary IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:41.845751   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find host DHCP lease matching {name: "ha-505269-m02", mac: "52:54:00:8f:ef:8c", ip: "192.168.39.68"} in network mk-ha-505269
	I0829 19:18:41.914922   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Getting to WaitForSSH function...
	I0829 19:18:41.914952   29935 main.go:141] libmachine: (ha-505269-m02) Reserved static IP address: 192.168.39.68
	I0829 19:18:41.914966   29935 main.go:141] libmachine: (ha-505269-m02) Waiting for SSH to be available...
	I0829 19:18:41.917290   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:41.917533   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269
	I0829 19:18:41.917567   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find defined IP address of network mk-ha-505269 interface with MAC address 52:54:00:8f:ef:8c
	I0829 19:18:41.917682   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Using SSH client type: external
	I0829 19:18:41.917710   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa (-rw-------)
	I0829 19:18:41.917745   29935 main.go:141] libmachine: (ha-505269-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:18:41.917773   29935 main.go:141] libmachine: (ha-505269-m02) DBG | About to run SSH command:
	I0829 19:18:41.917787   29935 main.go:141] libmachine: (ha-505269-m02) DBG | exit 0
	I0829 19:18:41.921232   29935 main.go:141] libmachine: (ha-505269-m02) DBG | SSH cmd err, output: exit status 255: 
	I0829 19:18:41.921246   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0829 19:18:41.921253   29935 main.go:141] libmachine: (ha-505269-m02) DBG | command : exit 0
	I0829 19:18:41.921278   29935 main.go:141] libmachine: (ha-505269-m02) DBG | err     : exit status 255
	I0829 19:18:41.921286   29935 main.go:141] libmachine: (ha-505269-m02) DBG | output  : 
	I0829 19:18:44.922856   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Getting to WaitForSSH function...
	I0829 19:18:44.925424   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:44.925805   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:44.925830   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:44.925881   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Using SSH client type: external
	I0829 19:18:44.925896   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa (-rw-------)
	I0829 19:18:44.925943   29935 main.go:141] libmachine: (ha-505269-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:18:44.925963   29935 main.go:141] libmachine: (ha-505269-m02) DBG | About to run SSH command:
	I0829 19:18:44.925971   29935 main.go:141] libmachine: (ha-505269-m02) DBG | exit 0
	I0829 19:18:45.050615   29935 main.go:141] libmachine: (ha-505269-m02) DBG | SSH cmd err, output: <nil>: 
	I0829 19:18:45.050899   29935 main.go:141] libmachine: (ha-505269-m02) KVM machine creation complete!
	I0829 19:18:45.051366   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetConfigRaw
	I0829 19:18:45.051902   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:45.052089   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:45.052233   29935 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 19:18:45.052246   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetState
	I0829 19:18:45.053465   29935 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 19:18:45.053485   29935 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 19:18:45.053498   29935 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 19:18:45.053506   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:45.055414   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.055689   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.055717   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.055861   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:45.056036   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.056175   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.056313   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:45.056473   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:18:45.056766   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0829 19:18:45.056784   29935 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 19:18:45.157985   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:18:45.158007   29935 main.go:141] libmachine: Detecting the provisioner...
	I0829 19:18:45.158013   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:45.160876   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.161252   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.161280   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.161430   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:45.161612   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.161764   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.161907   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:45.162032   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:18:45.162183   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0829 19:18:45.162193   29935 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 19:18:45.263343   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 19:18:45.263407   29935 main.go:141] libmachine: found compatible host: buildroot
	I0829 19:18:45.263418   29935 main.go:141] libmachine: Provisioning with buildroot...
	I0829 19:18:45.263429   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetMachineName
	I0829 19:18:45.263676   29935 buildroot.go:166] provisioning hostname "ha-505269-m02"
	I0829 19:18:45.263694   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetMachineName
	I0829 19:18:45.263810   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:45.266331   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.266705   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.266733   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.266874   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:45.267050   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.267193   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.267339   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:45.267601   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:18:45.268263   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0829 19:18:45.268292   29935 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-505269-m02 && echo "ha-505269-m02" | sudo tee /etc/hostname
	I0829 19:18:45.385767   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-505269-m02
	
	I0829 19:18:45.385796   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:45.388371   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.388707   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.388731   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.388910   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:45.389092   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.389215   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.389347   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:45.389492   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:18:45.389700   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0829 19:18:45.389717   29935 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-505269-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-505269-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-505269-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:18:45.499244   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
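
After the script above runs, the guest's /etc/hosts carries exactly one self-entry, whether the 127.0.1.1 line was rewritten in place or appended:

    127.0.1.1 ha-505269-m02
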
	I0829 19:18:45.499269   29935 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 19:18:45.499283   29935 buildroot.go:174] setting up certificates
	I0829 19:18:45.499292   29935 provision.go:84] configureAuth start
	I0829 19:18:45.499299   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetMachineName
	I0829 19:18:45.499545   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:18:45.502213   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.502584   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.502613   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.502734   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:45.505024   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.505377   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.505406   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.505526   29935 provision.go:143] copyHostCerts
	I0829 19:18:45.505558   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:18:45.505591   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 19:18:45.505600   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:18:45.505676   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 19:18:45.505757   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:18:45.505775   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 19:18:45.505782   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:18:45.505806   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 19:18:45.505861   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:18:45.505878   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 19:18:45.505885   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:18:45.505907   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 19:18:45.505963   29935 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.ha-505269-m02 san=[127.0.0.1 192.168.39.68 ha-505269-m02 localhost minikube]
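
provision.go signs a per-machine server certificate against the shared minikube CA, with the SAN list and org shown above and the 26280h expiry from the cluster config. A compressed crypto/x509 sketch of minting a certificate with those SANs (self-signed here for brevity; the real one is signed with ca.pem/ca-key.pem):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-505269-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		// SAN list matching the san=[...] in the log line
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.68")},
    		DNSNames:    []string{"ha-505269-m02", "localhost", "minikube"},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed (template doubles as parent); minikube passes the CA cert/key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
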
	I0829 19:18:45.659845   29935 provision.go:177] copyRemoteCerts
	I0829 19:18:45.659894   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:18:45.659916   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:45.662522   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.662891   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.662919   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.663073   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:45.663266   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.663415   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:45.663521   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	I0829 19:18:45.744501   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 19:18:45.744568   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 19:18:45.768821   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 19:18:45.768885   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 19:18:45.792457   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 19:18:45.792524   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:18:45.816617   29935 provision.go:87] duration metric: took 317.314234ms to configureAuth
	I0829 19:18:45.816644   29935 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:18:45.816837   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:18:45.816925   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:45.819505   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.819990   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.820014   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.820120   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:45.820291   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.820435   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.820563   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:45.820726   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:18:45.820922   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0829 19:18:45.820945   29935 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:18:46.040607   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
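The SSH command just executed writes a one-line environment file, /etc/sysconfig/crio.minikube, so CRI-O treats the in-cluster service CIDR (10.96.0.0/12) as an insecure registry, then restarts the daemon. Stripped of the printf | tee pipeline, the same effect is a file write plus a systemctl restart; a hedged Go sketch (paths and content taken from the log, must run as root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same content the printf | sudo tee pipeline in the log produces.
	opts := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0644); err != nil {
		panic(err)
	}
	// Restart CRI-O so the new options take effect.
	out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}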
	I0829 19:18:46.040633   29935 main.go:141] libmachine: Checking connection to Docker...
	I0829 19:18:46.040641   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetURL
	I0829 19:18:46.041890   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Using libvirt version 6000000
	I0829 19:18:46.044049   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.044409   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:46.044435   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.044632   29935 main.go:141] libmachine: Docker is up and running!
	I0829 19:18:46.044656   29935 main.go:141] libmachine: Reticulating splines...
	I0829 19:18:46.044664   29935 client.go:171] duration metric: took 28.863466105s to LocalClient.Create
	I0829 19:18:46.044683   29935 start.go:167] duration metric: took 28.863557501s to libmachine.API.Create "ha-505269"
	I0829 19:18:46.044698   29935 start.go:293] postStartSetup for "ha-505269-m02" (driver="kvm2")
	I0829 19:18:46.044709   29935 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:18:46.044733   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:46.044966   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:18:46.044986   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:46.047304   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.047633   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:46.047656   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.047794   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:46.047983   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:46.048150   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:46.048274   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	I0829 19:18:46.129008   29935 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:18:46.133154   29935 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:18:46.133176   29935 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 19:18:46.133235   29935 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 19:18:46.133355   29935 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 19:18:46.133369   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /etc/ssl/certs/183612.pem
	I0829 19:18:46.133476   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:18:46.143092   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:18:46.166129   29935 start.go:296] duration metric: took 121.41997ms for postStartSetup
	I0829 19:18:46.166172   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetConfigRaw
	I0829 19:18:46.166736   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:18:46.169520   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.169856   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:46.169886   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.170095   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:18:46.170341   29935 start.go:128] duration metric: took 29.006780061s to createHost
	I0829 19:18:46.170364   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:46.172864   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.173186   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:46.173206   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.173331   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:46.173504   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:46.173650   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:46.173882   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:46.174042   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:18:46.174192   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0829 19:18:46.174203   29935 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:18:46.275422   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724959126.256599612
	
	I0829 19:18:46.275447   29935 fix.go:216] guest clock: 1724959126.256599612
	I0829 19:18:46.275457   29935 fix.go:229] Guest: 2024-08-29 19:18:46.256599612 +0000 UTC Remote: 2024-08-29 19:18:46.170353909 +0000 UTC m=+78.244806111 (delta=86.245703ms)
	I0829 19:18:46.275474   29935 fix.go:200] guest clock delta is within tolerance: 86.245703ms
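fix.go runs date +%s.%N on the guest and compares the result against the host clock; only a delta beyond tolerance would trigger a resync, and here the 86ms skew passes. A sketch of that comparison, using the exact values from the log (the 2s threshold is an assumption for illustration; float parsing loses nanosecond precision, which is fine at this scale):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestDelta parses the guest's `date +%s.%N` output and returns the
// signed offset from the host clock.
func guestDelta(dateOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(dateOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Guest 1724959126.256599612 vs host 1724959126.170353909, per the log.
	d, err := guestDelta("1724959126.256599612", time.Unix(0, 1724959126170353909))
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%v within tolerance: %v\n", d, d.Abs() < 2*time.Second)
}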
	I0829 19:18:46.275479   29935 start.go:83] releasing machines lock for "ha-505269-m02", held for 29.112015583s
	I0829 19:18:46.275496   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:46.275720   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:18:46.278387   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.278696   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:46.278738   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.281118   29935 out.go:177] * Found network options:
	I0829 19:18:46.282408   29935 out.go:177]   - NO_PROXY=192.168.39.56
	W0829 19:18:46.283673   29935 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 19:18:46.283699   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:46.284172   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:46.284346   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:46.284422   29935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:18:46.284467   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	W0829 19:18:46.284533   29935 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 19:18:46.284621   29935 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:18:46.284644   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:46.287165   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.287480   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:46.287507   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.287565   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.287640   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:46.287795   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:46.287948   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:46.287973   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:46.287995   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.288138   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	I0829 19:18:46.288151   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:46.288283   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:46.288401   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:46.288521   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	I0829 19:18:46.522694   29935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:18:46.528951   29935 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:18:46.529018   29935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:18:46.545655   29935 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:18:46.545682   29935 start.go:495] detecting cgroup driver to use...
	I0829 19:18:46.545755   29935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:18:46.563666   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:18:46.578069   29935 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:18:46.578140   29935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:18:46.591691   29935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:18:46.605288   29935 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:18:46.721052   29935 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:18:46.879427   29935 docker.go:233] disabling docker service ...
	I0829 19:18:46.879503   29935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:18:46.892990   29935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:18:46.905147   29935 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:18:47.024064   29935 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:18:47.135402   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:18:47.150718   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:18:47.171454   29935 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:18:47.171520   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:18:47.182255   29935 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:18:47.182314   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:18:47.193567   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:18:47.204104   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:18:47.214516   29935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:18:47.225095   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:18:47.235245   29935 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:18:47.252165   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
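The run of sed commands above is a plain line-oriented rewrite of /etc/crio/crio.conf.d/02-crio.conf: replace the whole pause_image line, force cgroup_manager = "cgroupfs", pin conmon_cgroup, and seed a default_sysctls block that opens unprivileged ports. The same idempotent replace-or-append, sketched in Go for a single key (setTOMLKey is a hypothetical helper mirroring the sed -i pattern):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey rewrites (or appends) a `key = "value"` line, mirroring the
// `sed -i 's|^.*key = .*$|...|'` invocations in the log.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	line := fmt.Sprintf("%s = %q", key, value)
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte(line))
	} else {
		data = append(data, []byte("\n"+line+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := setTOMLKey("/etc/crio/crio.conf.d/02-crio.conf",
		"pause_image", "registry.k8s.io/pause:3.10"); err != nil {
		panic(err)
	}
}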
	I0829 19:18:47.262133   29935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:18:47.271342   29935 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:18:47.271394   29935 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:18:47.284521   29935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
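The status-255 failure above is expected on a fresh guest: net.bridge.bridge-nf-call-iptables only exists in /proc once br_netfilter is loaded, so minikube probes with sysctl, falls back to modprobe, and then enables IPv4 forwarding. The probe-then-load pattern, sketched (needs root):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Probe: the sysctl key is absent until the module is loaded.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Fallback: loading br_netfilter creates the /proc/sys/net/bridge entries.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			panic(err)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		panic(err)
	}
}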
	I0829 19:18:47.293856   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:18:47.413333   29935 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:18:47.507972   29935 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:18:47.508048   29935 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:18:47.513387   29935 start.go:563] Will wait 60s for crictl version
	I0829 19:18:47.513436   29935 ssh_runner.go:195] Run: which crictl
	I0829 19:18:47.517331   29935 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:18:47.556504   29935 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:18:47.556589   29935 ssh_runner.go:195] Run: crio --version
	I0829 19:18:47.583593   29935 ssh_runner.go:195] Run: crio --version
	I0829 19:18:47.611532   29935 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:18:47.612939   29935 out.go:177]   - env NO_PROXY=192.168.39.56
	I0829 19:18:47.614130   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:18:47.616737   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:47.617064   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:47.617090   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:47.617248   29935 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:18:47.621436   29935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
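The one-liner above is minikube's idempotent /etc/hosts update: strip any existing host.minikube.internal entry with grep -v, append a fresh ip<TAB>host line, and copy the temp file back into place (the same trick reappears later for control-plane.minikube.internal). The filter-and-append, sketched in Go:

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry removes stale lines ending in "\t"+host and appends
// ip<TAB>host, matching the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}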
	I0829 19:18:47.634030   29935 mustload.go:65] Loading cluster: ha-505269
	I0829 19:18:47.634197   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:18:47.634429   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:47.634463   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:47.648704   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
	I0829 19:18:47.649095   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:47.649539   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:47.649559   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:47.649846   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:47.650017   29935 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:18:47.651451   29935 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:18:47.651738   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:47.651769   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:47.665692   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33097
	I0829 19:18:47.666066   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:47.666478   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:47.666500   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:47.666814   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:47.667003   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:18:47.667151   29935 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269 for IP: 192.168.39.68
	I0829 19:18:47.667161   29935 certs.go:194] generating shared ca certs ...
	I0829 19:18:47.667174   29935 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:47.667290   29935 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 19:18:47.667326   29935 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 19:18:47.667336   29935 certs.go:256] generating profile certs ...
	I0829 19:18:47.667400   29935 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key
	I0829 19:18:47.667424   29935 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.fa582b21
	I0829 19:18:47.667437   29935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.fa582b21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.56 192.168.39.68 192.168.39.254]
	I0829 19:18:47.779724   29935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.fa582b21 ...
	I0829 19:18:47.779750   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.fa582b21: {Name:mk2f08942257e15c7321f8b69b6f00a9a29cc1ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:47.779938   29935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.fa582b21 ...
	I0829 19:18:47.779955   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.fa582b21: {Name:mk67cb8bda9cb7e09ef73f9ddd0a839032dff9ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:47.780057   29935 certs.go:381] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.fa582b21 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt
	I0829 19:18:47.780184   29935 certs.go:385] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.fa582b21 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key
	I0829 19:18:47.780306   29935 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key
	I0829 19:18:47.780320   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 19:18:47.780332   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 19:18:47.780343   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 19:18:47.780357   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 19:18:47.780367   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 19:18:47.780380   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 19:18:47.780390   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 19:18:47.780402   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 19:18:47.780446   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 19:18:47.780472   29935 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 19:18:47.780482   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 19:18:47.780504   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 19:18:47.780526   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:18:47.780547   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 19:18:47.780582   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:18:47.780607   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /usr/share/ca-certificates/183612.pem
	I0829 19:18:47.780621   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:47.780633   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem -> /usr/share/ca-certificates/18361.pem
	I0829 19:18:47.780664   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:18:47.783577   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:47.783987   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:18:47.784014   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:47.784163   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:18:47.784380   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:18:47.784502   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:18:47.784657   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:18:47.862835   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0829 19:18:47.868350   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0829 19:18:47.879589   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0829 19:18:47.883909   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0829 19:18:47.893791   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0829 19:18:47.897869   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0829 19:18:47.907713   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0829 19:18:47.911588   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0829 19:18:47.921455   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0829 19:18:47.925543   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0829 19:18:47.935225   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0829 19:18:47.939147   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0829 19:18:47.948762   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:18:47.973508   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 19:18:47.996256   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:18:48.020027   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:18:48.043059   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0829 19:18:48.066056   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:18:48.089033   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:18:48.117817   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:18:48.141556   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 19:18:48.164685   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:18:48.187993   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 19:18:48.212550   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0829 19:18:48.231237   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0829 19:18:48.248744   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0829 19:18:48.266514   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0829 19:18:48.284259   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0829 19:18:48.302024   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0829 19:18:48.319866   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0829 19:18:48.338258   29935 ssh_runner.go:195] Run: openssl version
	I0829 19:18:48.344326   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:18:48.356785   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:48.361601   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:48.361665   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:48.367593   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:18:48.378547   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 19:18:48.389272   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 19:18:48.393770   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 19:18:48.393815   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 19:18:48.401273   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 19:18:48.412981   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 19:18:48.424575   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 19:18:48.428932   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 19:18:48.428973   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 19:18:48.434621   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
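The openssl x509 -hash -noout / ln -fs pairs above build the hashed symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL's CApath lookup expects, so TLS clients on the node can find each CA by subject hash. The two steps, sketched in Go around the exact openssl invocation from the log (linkBySubjectHash is a hypothetical helper):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into dir as <subject-hash>.0,
// the layout OpenSSL's CApath lookup expects.
func linkBySubjectHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		panic(err)
	}
}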
	I0829 19:18:48.445294   29935 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:18:48.449364   29935 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 19:18:48.449413   29935 kubeadm.go:934] updating node {m02 192.168.39.68 8443 v1.31.0 crio true true} ...
	I0829 19:18:48.449506   29935 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-505269-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
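The kubelet drop-in above is rendered from the node's config: the empty ExecStart= clears the distro default, and the second line pins --hostname-override and --node-ip to this machine. A hedged text/template sketch of rendering such a drop-in (the template shape is an illustration, not minikube's actual template file):

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values from the log for ha-505269-m02.
	if err := t.Execute(os.Stdout, map[string]string{
		"Version": "v1.31.0", "Node": "ha-505269-m02", "IP": "192.168.39.68",
	}); err != nil {
		panic(err)
	}
}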
	I0829 19:18:48.449531   29935 kube-vip.go:115] generating kube-vip config ...
	I0829 19:18:48.449567   29935 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 19:18:48.466847   29935 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 19:18:48.466897   29935 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
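Worth noting in the manifest above: kube-vip runs as a static pod on every control-plane node, elects a leader via the plndr-cp-lock lease, and the leader ARPs for the shared VIP 192.168.39.254 on eth0; with lb_enable and lb_port set it also balances API-server traffic across the control-plane members on 8443. That VIP is what the kubeconfig initially points at, which is why a fallback to the node IP appears further down ("Overriding stale ClientConfig host") while the second control plane is still joining.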
	I0829 19:18:48.466950   29935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:18:48.477209   29935 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0829 19:18:48.477270   29935 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0829 19:18:48.487325   29935 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0829 19:18:48.487357   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 19:18:48.487381   29935 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0829 19:18:48.487427   29935 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubelet
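The checksum=file:...sha256 query in the download URLs tells minikube's downloader to fetch the published SHA-256 digest alongside each binary and verify it before install. Verification itself is a few lines; a sketch under the assumption that the .sha256 file contains just the hex digest (file names are placeholders):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verifySHA256 streams path through SHA-256 and compares against the
// hex digest published in sumFile.
func verifySHA256(path, sumFile string) error {
	want, err := os.ReadFile(sumFile)
	if err != nil {
		return err
	}
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch: got %s", got)
	}
	return nil
}

func main() {
	if err := verifySHA256("kubelet", "kubelet.sha256"); err != nil {
		panic(err)
	}
}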
	I0829 19:18:48.487442   29935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 19:18:48.491913   29935 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0829 19:18:48.491934   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0829 19:18:49.305833   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 19:18:49.305931   29935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 19:18:49.310975   29935 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0829 19:18:49.311006   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0829 19:18:49.333293   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:18:49.358083   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 19:18:49.358170   29935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 19:18:49.368962   29935 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0829 19:18:49.369003   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0829 19:18:49.861662   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0829 19:18:49.871762   29935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0829 19:18:49.888780   29935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:18:49.904605   29935 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0829 19:18:49.920339   29935 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 19:18:49.924279   29935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:18:49.935982   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:18:50.057650   29935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:18:50.075230   29935 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:18:50.075718   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:50.075772   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:50.090220   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35719
	I0829 19:18:50.090630   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:50.091089   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:50.091110   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:50.091420   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:50.091638   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:18:50.091792   29935 start.go:317] joinCluster: &{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cluster
Name:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:18:50.091913   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0829 19:18:50.091937   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:18:50.095032   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:50.095451   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:18:50.095478   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:50.095622   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:18:50.095766   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:18:50.095939   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:18:50.096079   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:18:50.240803   29935 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:18:50.240856   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o9vyfs.13tw66289wnr77dl --discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-505269-m02 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443"
	I0829 19:19:11.793044   29935 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o9vyfs.13tw66289wnr77dl --discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-505269-m02 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443": (21.552162288s)
	I0829 19:19:11.793081   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0829 19:19:12.389852   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-505269-m02 minikube.k8s.io/updated_at=2024_08_29T19_19_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=ha-505269 minikube.k8s.io/primary=false
	I0829 19:19:12.503770   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-505269-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0829 19:19:12.635366   29935 start.go:319] duration metric: took 22.543571284s to joinCluster
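The join itself is the stock kubeadm HA flow: `kubeadm token create --print-join-command` on the existing control plane emits the token and CA-cert hash, and the new node replays it with --control-plane plus its own advertise address, taking about 21.5s here. The follow-up kubectl calls label the node with minikube metadata and remove the node-role.kubernetes.io/control-plane:NoSchedule taint so workloads can schedule on it.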
	I0829 19:19:12.635478   29935 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:19:12.635736   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:19:12.637310   29935 out.go:177] * Verifying Kubernetes components...
	I0829 19:19:12.638479   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:19:12.927514   29935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:19:12.982846   29935 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:19:12.983101   29935 kapi.go:59] client config for ha-505269: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.crt", KeyFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key", CAFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0829 19:19:12.983162   29935 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.56:8443
	I0829 19:19:12.983379   29935 node_ready.go:35] waiting up to 6m0s for node "ha-505269-m02" to be "Ready" ...
	I0829 19:19:12.983497   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:12.983507   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:12.983518   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:12.983525   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:12.992919   29935 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0829 19:19:13.484291   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:13.484317   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:13.484328   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:13.484336   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:13.493976   29935 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0829 19:19:13.983790   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:13.983815   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:13.983826   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:13.983831   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:13.988882   29935 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 19:19:14.484067   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:14.484129   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:14.484149   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:14.484155   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:14.488475   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:19:14.984566   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:14.984588   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:14.984599   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:14.984605   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:14.988346   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:14.989185   29935 node_ready.go:53] node "ha-505269-m02" has status "Ready":"False"
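Everything from here down is node_ready.go polling GET /api/v1/nodes/ha-505269-m02 roughly every 500ms until the kubelet posts a Ready=True condition, within the 6m budget announced above. Stripped of minikube's round-tripper logging, the loop looks like this sketch (nodeReady is a hypothetical helper; construction of the TLS client with the profile's client.crt/client.key is elided):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady does one GET against the API server and reports whether the
// node's Ready condition is True. c must already carry TLS client certs.
func nodeReady(c *http.Client, apiServer, name string) (bool, error) {
	resp, err := c.Get(apiServer + "/api/v1/nodes/" + name)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, cond := range n.Status.Conditions {
		if cond.Type == "Ready" {
			return cond.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	c := &http.Client{} // TLS config with client cert/key elided
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(c, "https://192.168.39.56:8443", "ha-505269-m02"); err == nil && ok {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for Ready")
}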
	I0829 19:19:15.483599   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:15.483623   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:15.483630   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:15.483635   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:15.487458   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:15.983970   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:15.983994   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:15.984005   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:15.984010   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:15.987359   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:16.484518   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:16.484547   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:16.484557   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:16.484562   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:16.488208   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:16.983957   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:16.983981   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:16.983989   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:16.984000   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:16.987155   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:17.484251   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:17.484278   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:17.484290   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:17.484297   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:17.487514   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:17.488266   29935 node_ready.go:53] node "ha-505269-m02" has status "Ready":"False"
	I0829 19:19:17.983838   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:17.983859   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:17.983866   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:17.983871   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:17.987448   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:18.484362   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:18.484382   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:18.484390   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:18.484395   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:18.487739   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:18.984627   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:18.984651   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:18.984662   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:18.984670   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:18.988075   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:19.483831   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:19.483853   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:19.483859   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:19.483862   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:19.491210   29935 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0829 19:19:19.491993   29935 node_ready.go:53] node "ha-505269-m02" has status "Ready":"False"
	I0829 19:19:19.984493   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:19.984517   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:19.984527   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:19.984533   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:19.988095   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:20.484478   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:20.484500   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:20.484508   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:20.484513   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:20.487975   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:20.984009   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:20.984038   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:20.984049   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:20.984057   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:20.987900   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:21.484630   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:21.484651   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:21.484662   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:21.484667   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:21.487892   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:21.983902   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:21.983923   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:21.983931   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:21.983936   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:21.986923   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:21.987597   29935 node_ready.go:53] node "ha-505269-m02" has status "Ready":"False"
	I0829 19:19:22.484420   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:22.484442   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:22.484449   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:22.484452   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:22.488097   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:22.984539   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:22.984564   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:22.984583   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:22.984588   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:23.021416   29935 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0829 19:19:23.484247   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:23.484271   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:23.484280   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:23.484284   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:23.487446   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:23.984448   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:23.984471   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:23.984479   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:23.984482   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:23.987793   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:23.988463   29935 node_ready.go:53] node "ha-505269-m02" has status "Ready":"False"
	I0829 19:19:24.483764   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:24.483793   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:24.483804   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:24.483809   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:24.487081   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:24.983915   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:24.983940   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:24.983951   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:24.983959   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:24.987236   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:25.483697   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:25.483721   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:25.483732   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:25.483739   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:25.486975   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:25.983859   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:25.983881   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:25.983894   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:25.983900   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:25.987215   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:26.484055   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:26.484078   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:26.484086   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:26.484090   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:26.487410   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:26.487936   29935 node_ready.go:53] node "ha-505269-m02" has status "Ready":"False"
	I0829 19:19:26.984403   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:26.984429   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:26.984437   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:26.984441   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:26.987729   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:27.484140   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:27.484164   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:27.484172   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:27.484177   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:27.487346   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:27.984424   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:27.984444   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:27.984452   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:27.984457   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:27.987693   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:28.484284   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:28.484309   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:28.484317   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:28.484320   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:28.487409   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:28.984452   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:28.984478   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:28.984489   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:28.984495   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:28.987627   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:28.988318   29935 node_ready.go:53] node "ha-505269-m02" has status "Ready":"False"
	I0829 19:19:29.483632   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:29.483654   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:29.483663   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:29.483668   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:29.488370   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:19:29.984454   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:29.984488   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:29.984497   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:29.984501   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:29.988205   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:30.483741   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:30.483770   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.483777   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.483780   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.487393   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:30.488144   29935 node_ready.go:49] node "ha-505269-m02" has status "Ready":"True"
	I0829 19:19:30.488158   29935 node_ready.go:38] duration metric: took 17.504748341s for node "ha-505269-m02" to be "Ready" ...
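
Each GET /api/v1/nodes/ha-505269-m02 above is one iteration of a ~500ms poll that exits once the node reports the Ready condition. A sketch of that loop with client-go, assuming an already-built clientset (an illustrative stand-in for minikube's node_ready.go, not its code):

```go
// Sketch of the ~500ms Ready poll behind the node_ready.go lines above.
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				// Exit as soon as the kubelet reports Ready=True.
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the timestamps above
	}
	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
}
```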
	I0829 19:19:30.488168   29935 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:19:30.488244   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:19:30.488255   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.488265   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.488272   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.493138   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:19:30.499875   29935 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-bqqq5" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.499961   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-bqqq5
	I0829 19:19:30.499974   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.499983   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.499987   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.502951   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:30.503549   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:30.503564   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.503570   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.503574   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.508279   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:19:30.508759   29935 pod_ready.go:93] pod "coredns-6f6b679f8f-bqqq5" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:30.508774   29935 pod_ready.go:82] duration metric: took 8.878644ms for pod "coredns-6f6b679f8f-bqqq5" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.508781   29935 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qjgfg" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.508820   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qjgfg
	I0829 19:19:30.508828   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.508835   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.508838   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.515929   29935 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0829 19:19:30.516565   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:30.516586   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.516594   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.516602   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.522485   29935 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 19:19:30.522984   29935 pod_ready.go:93] pod "coredns-6f6b679f8f-qjgfg" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:30.523000   29935 pod_ready.go:82] duration metric: took 14.212396ms for pod "coredns-6f6b679f8f-qjgfg" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.523009   29935 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.523050   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/etcd-ha-505269
	I0829 19:19:30.523057   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.523063   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.523067   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.526363   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:30.526920   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:30.526932   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.526938   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.526942   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.530127   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:30.530679   29935 pod_ready.go:93] pod "etcd-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:30.530695   29935 pod_ready.go:82] duration metric: took 7.679883ms for pod "etcd-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.530703   29935 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.530751   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/etcd-ha-505269-m02
	I0829 19:19:30.530761   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.530770   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.530780   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.533168   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:30.533634   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:30.533647   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.533653   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.533659   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.535697   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:30.536288   29935 pod_ready.go:93] pod "etcd-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:30.536302   29935 pod_ready.go:82] duration metric: took 5.593438ms for pod "etcd-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
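
The pod_ready.go entries follow the same shape: fetch the pod, inspect its PodReady condition, then fetch the hosting node (the paired GET .../nodes/... calls above). A hedged sketch of the pod half, again assuming an existing clientset:

```go
// Sketch of the per-pod "Ready" check behind the pod_ready.go lines above:
// fetch the pod, then look for the PodReady condition. Illustrative only.
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // no PodReady condition recorded yet
}
```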
	I0829 19:19:30.536319   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.684671   29935 request.go:632] Waited for 148.298173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269
	I0829 19:19:30.684742   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269
	I0829 19:19:30.684747   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.684754   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.684760   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.690261   29935 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 19:19:30.883917   29935 request.go:632] Waited for 192.282533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:30.883962   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:30.883967   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.883974   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.883979   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.886862   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:30.887428   29935 pod_ready.go:93] pod "kube-apiserver-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:30.887445   29935 pod_ready.go:82] duration metric: took 351.117841ms for pod "kube-apiserver-ha-505269" in "kube-system" namespace to be "Ready" ...
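
The request.go:632 "Waited ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's token-bucket rate limiter, not by the apiserver. With QPS and Burst left at zero in the rest.Config dumped earlier, client-go falls back to its defaults (5 QPS, burst 10), so back-to-back GETs queue for roughly 150-200ms each, exactly as the waits above show. A sketch of raising the limits (the values are arbitrary examples, not minikube's settings):

```go
// Sketch of where the "client-side throttling" waits come from: client-go
// applies a token-bucket limiter per rest.Config. Raising QPS/Burst shortens
// or removes those waits.
package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func clientWithHigherLimits(cfg *rest.Config) (*kubernetes.Clientset, error) {
	// With QPS/Burst at zero, client-go uses its conservative defaults
	// (5 requests/s, burst 10), so bursts of GETs queue up.
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}
```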
	I0829 19:19:30.887454   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:31.084614   29935 request.go:632] Waited for 197.107835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269-m02
	I0829 19:19:31.084666   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269-m02
	I0829 19:19:31.084674   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:31.084684   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:31.084696   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:31.088180   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:31.284480   29935 request.go:632] Waited for 195.381883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:31.284527   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:31.284532   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:31.284539   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:31.284549   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:31.288024   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:31.288495   29935 pod_ready.go:93] pod "kube-apiserver-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:31.288514   29935 pod_ready.go:82] duration metric: took 401.053747ms for pod "kube-apiserver-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:31.288523   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:31.484606   29935 request.go:632] Waited for 196.009661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269
	I0829 19:19:31.484673   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269
	I0829 19:19:31.484680   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:31.484690   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:31.484695   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:31.488146   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:31.683934   29935 request.go:632] Waited for 195.278493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:31.683985   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:31.683990   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:31.684000   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:31.684003   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:31.687022   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:31.687701   29935 pod_ready.go:93] pod "kube-controller-manager-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:31.687722   29935 pod_ready.go:82] duration metric: took 399.189881ms for pod "kube-controller-manager-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:31.687731   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:31.884785   29935 request.go:632] Waited for 196.973338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269-m02
	I0829 19:19:31.884849   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269-m02
	I0829 19:19:31.884857   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:31.884868   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:31.884875   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:31.888378   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:32.084563   29935 request.go:632] Waited for 195.362829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:32.084622   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:32.084629   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:32.084637   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:32.084648   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:32.088093   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:32.088602   29935 pod_ready.go:93] pod "kube-controller-manager-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:32.088626   29935 pod_ready.go:82] duration metric: took 400.88696ms for pod "kube-controller-manager-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:32.088640   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hx822" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:32.284094   29935 request.go:632] Waited for 195.356676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hx822
	I0829 19:19:32.284150   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hx822
	I0829 19:19:32.284157   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:32.284168   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:32.284179   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:32.287149   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:32.484168   29935 request.go:632] Waited for 196.353084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:32.484222   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:32.484227   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:32.484241   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:32.484257   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:32.488123   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:32.488976   29935 pod_ready.go:93] pod "kube-proxy-hx822" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:32.488993   29935 pod_ready.go:82] duration metric: took 400.347039ms for pod "kube-proxy-hx822" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:32.489003   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jxbdt" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:32.683938   29935 request.go:632] Waited for 194.87159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxbdt
	I0829 19:19:32.684002   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxbdt
	I0829 19:19:32.684007   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:32.684015   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:32.684023   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:32.687308   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:32.884279   29935 request.go:632] Waited for 196.339846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:32.884354   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:32.884362   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:32.884372   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:32.884379   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:32.887930   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:32.888659   29935 pod_ready.go:93] pod "kube-proxy-jxbdt" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:32.888685   29935 pod_ready.go:82] duration metric: took 399.676636ms for pod "kube-proxy-jxbdt" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:32.888696   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:33.084207   29935 request.go:632] Waited for 195.432627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269
	I0829 19:19:33.084287   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269
	I0829 19:19:33.084295   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:33.084306   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:33.084317   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:33.087317   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:33.284337   29935 request.go:632] Waited for 196.351799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:33.284392   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:33.284400   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:33.284408   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:33.284415   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:33.287740   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:33.288309   29935 pod_ready.go:93] pod "kube-scheduler-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:33.288324   29935 pod_ready.go:82] duration metric: took 399.621276ms for pod "kube-scheduler-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:33.288333   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:33.484379   29935 request.go:632] Waited for 195.988696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269-m02
	I0829 19:19:33.484451   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269-m02
	I0829 19:19:33.484457   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:33.484464   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:33.484468   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:33.487371   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:33.684465   29935 request.go:632] Waited for 196.405519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:33.684545   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:33.684553   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:33.684563   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:33.684574   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:33.687743   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:33.688674   29935 pod_ready.go:93] pod "kube-scheduler-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:33.688690   29935 pod_ready.go:82] duration metric: took 400.349035ms for pod "kube-scheduler-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:33.688700   29935 pod_ready.go:39] duration metric: took 3.200517364s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:19:33.688713   29935 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:19:33.688765   29935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:19:33.706267   29935 api_server.go:72] duration metric: took 21.070749792s to wait for apiserver process to appear ...
	I0829 19:19:33.706290   29935 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:19:33.706306   29935 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I0829 19:19:33.713640   29935 api_server.go:279] https://192.168.39.56:8443/healthz returned 200:
	ok
	I0829 19:19:33.713698   29935 round_trippers.go:463] GET https://192.168.39.56:8443/version
	I0829 19:19:33.713703   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:33.713710   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:33.713715   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:33.714439   29935 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0829 19:19:33.714550   29935 api_server.go:141] control plane version: v1.31.0
	I0829 19:19:33.714566   29935 api_server.go:131] duration metric: took 8.270936ms to wait for apiserver health ...
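
The health gate above is a raw authenticated GET against /healthz that must return the literal body "ok", followed by a /version call to read the control-plane version. A hedged sketch of the healthz half via the clientset's REST client:

```go
// Sketch of the healthz probe logged above: an authenticated GET against the
// apiserver's /healthz path, expecting the literal body "ok". Illustrative.
package sketch

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

func apiserverHealthy(ctx context.Context, cs *kubernetes.Clientset) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("unexpected healthz body: %q", body)
	}
	return nil
}
```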
	I0829 19:19:33.714612   29935 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:19:33.883953   29935 request.go:632] Waited for 169.270255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:19:33.884014   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:19:33.884021   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:33.884028   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:33.884032   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:33.889882   29935 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 19:19:33.896061   29935 system_pods.go:59] 17 kube-system pods found
	I0829 19:19:33.896086   29935 system_pods.go:61] "coredns-6f6b679f8f-bqqq5" [801d9cfa-e1ad-4b31-9803-0030543fdc9e] Running
	I0829 19:19:33.896090   29935 system_pods.go:61] "coredns-6f6b679f8f-qjgfg" [12168097-2d3c-467a-b4b5-c0ca7f85e4eb] Running
	I0829 19:19:33.896094   29935 system_pods.go:61] "etcd-ha-505269" [a9cd644c-66f8-419a-be0c-615fc97daf18] Running
	I0829 19:19:33.896098   29935 system_pods.go:61] "etcd-ha-505269-m02" [864d2e94-62a9-4171-87bc-7ec5a3fc6224] Running
	I0829 19:19:33.896101   29935 system_pods.go:61] "kindnet-7rp6z" [7c922b32-e666-4b00-ab65-505632346112] Running
	I0829 19:19:33.896105   29935 system_pods.go:61] "kindnet-sthc8" [3c5a7487-a1b8-4acc-9462-84a2b478f46b] Running
	I0829 19:19:33.896109   29935 system_pods.go:61] "kube-apiserver-ha-505269" [616e3cf5-709a-46a8-8d71-0e709d297ca0] Running
	I0829 19:19:33.896112   29935 system_pods.go:61] "kube-apiserver-ha-505269-m02" [8615f4df-4f47-451a-80c8-d50826a75738] Running
	I0829 19:19:33.896116   29935 system_pods.go:61] "kube-controller-manager-ha-505269" [3f81751f-e12f-4a70-a901-db586a66461e] Running
	I0829 19:19:33.896119   29935 system_pods.go:61] "kube-controller-manager-ha-505269-m02" [b0587260-4827-47eb-a3b7-afb5b1fad59b] Running
	I0829 19:19:33.896125   29935 system_pods.go:61] "kube-proxy-hx822" [e88a504e-122b-4609-a0cc-4ad3115b3e4e] Running
	I0829 19:19:33.896127   29935 system_pods.go:61] "kube-proxy-jxbdt" [e51729e9-d662-4ea2-9a4f-85f77b269dea] Running
	I0829 19:19:33.896133   29935 system_pods.go:61] "kube-scheduler-ha-505269" [c573cfd8-20ba-46ce-8c0f-b610240ab78d] Running
	I0829 19:19:33.896136   29935 system_pods.go:61] "kube-scheduler-ha-505269-m02" [ba4e7eec-baaa-4c92-84f2-ac50629fea20] Running
	I0829 19:19:33.896139   29935 system_pods.go:61] "kube-vip-ha-505269" [d1734801-9573-45b3-a4a0-9ac45c093b95] Running
	I0829 19:19:33.896143   29935 system_pods.go:61] "kube-vip-ha-505269-m02" [f33d8dab-fb6f-46cf-b508-1e0eae03cad2] Running
	I0829 19:19:33.896145   29935 system_pods.go:61] "storage-provisioner" [6b7cd00a-94da-4e42-b7ae-289aab759c4f] Running
	I0829 19:19:33.896151   29935 system_pods.go:74] duration metric: took 181.530307ms to wait for pod list to return data ...
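
system_pods.go gathers the whole kube-system namespace in a single LIST and prints each pod with its phase, as seen above. A sketch of the equivalent check, assuming an existing clientset:

```go
// Sketch of the system_pods.go pass above: list everything in kube-system in
// one call and report any pod that is not in phase Running. Illustrative.
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func systemPodsRunning(ctx context.Context, cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return fmt.Errorf("pod %s is %s, not Running", p.Name, p.Status.Phase)
		}
	}
	return nil
}
```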
	I0829 19:19:33.896160   29935 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:19:34.084558   29935 request.go:632] Waited for 188.329888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/default/serviceaccounts
	I0829 19:19:34.084607   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/default/serviceaccounts
	I0829 19:19:34.084612   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:34.084618   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:34.084622   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:34.088605   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:34.088813   29935 default_sa.go:45] found service account: "default"
	I0829 19:19:34.088827   29935 default_sa.go:55] duration metric: took 192.662195ms for default service account to be created ...
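
default_sa.go just confirms that a "default" ServiceAccount exists in the "default" namespace (the log does it by listing the namespace's service accounts). A one-call sketch of the same probe, illustrative only:

```go
// Sketch of the default service-account check above: the GET fails until the
// controller-manager has created the "default" ServiceAccount.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func defaultServiceAccountExists(ctx context.Context, cs kubernetes.Interface) error {
	_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
	return err
}
```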
	I0829 19:19:34.088835   29935 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:19:34.284281   29935 request.go:632] Waited for 195.35955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:19:34.284349   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:19:34.284356   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:34.284368   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:34.284378   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:34.289986   29935 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 19:19:34.295119   29935 system_pods.go:86] 17 kube-system pods found
	I0829 19:19:34.295144   29935 system_pods.go:89] "coredns-6f6b679f8f-bqqq5" [801d9cfa-e1ad-4b31-9803-0030543fdc9e] Running
	I0829 19:19:34.295149   29935 system_pods.go:89] "coredns-6f6b679f8f-qjgfg" [12168097-2d3c-467a-b4b5-c0ca7f85e4eb] Running
	I0829 19:19:34.295153   29935 system_pods.go:89] "etcd-ha-505269" [a9cd644c-66f8-419a-be0c-615fc97daf18] Running
	I0829 19:19:34.295158   29935 system_pods.go:89] "etcd-ha-505269-m02" [864d2e94-62a9-4171-87bc-7ec5a3fc6224] Running
	I0829 19:19:34.295162   29935 system_pods.go:89] "kindnet-7rp6z" [7c922b32-e666-4b00-ab65-505632346112] Running
	I0829 19:19:34.295166   29935 system_pods.go:89] "kindnet-sthc8" [3c5a7487-a1b8-4acc-9462-84a2b478f46b] Running
	I0829 19:19:34.295170   29935 system_pods.go:89] "kube-apiserver-ha-505269" [616e3cf5-709a-46a8-8d71-0e709d297ca0] Running
	I0829 19:19:34.295174   29935 system_pods.go:89] "kube-apiserver-ha-505269-m02" [8615f4df-4f47-451a-80c8-d50826a75738] Running
	I0829 19:19:34.295177   29935 system_pods.go:89] "kube-controller-manager-ha-505269" [3f81751f-e12f-4a70-a901-db586a66461e] Running
	I0829 19:19:34.295182   29935 system_pods.go:89] "kube-controller-manager-ha-505269-m02" [b0587260-4827-47eb-a3b7-afb5b1fad59b] Running
	I0829 19:19:34.295185   29935 system_pods.go:89] "kube-proxy-hx822" [e88a504e-122b-4609-a0cc-4ad3115b3e4e] Running
	I0829 19:19:34.295188   29935 system_pods.go:89] "kube-proxy-jxbdt" [e51729e9-d662-4ea2-9a4f-85f77b269dea] Running
	I0829 19:19:34.295192   29935 system_pods.go:89] "kube-scheduler-ha-505269" [c573cfd8-20ba-46ce-8c0f-b610240ab78d] Running
	I0829 19:19:34.295197   29935 system_pods.go:89] "kube-scheduler-ha-505269-m02" [ba4e7eec-baaa-4c92-84f2-ac50629fea20] Running
	I0829 19:19:34.295200   29935 system_pods.go:89] "kube-vip-ha-505269" [d1734801-9573-45b3-a4a0-9ac45c093b95] Running
	I0829 19:19:34.295206   29935 system_pods.go:89] "kube-vip-ha-505269-m02" [f33d8dab-fb6f-46cf-b508-1e0eae03cad2] Running
	I0829 19:19:34.295209   29935 system_pods.go:89] "storage-provisioner" [6b7cd00a-94da-4e42-b7ae-289aab759c4f] Running
	I0829 19:19:34.295215   29935 system_pods.go:126] duration metric: took 206.371606ms to wait for k8s-apps to be running ...
	I0829 19:19:34.295225   29935 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:19:34.295268   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:19:34.311065   29935 system_svc.go:56] duration metric: took 15.831595ms WaitForService to wait for kubelet
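
The kubelet gate runs `systemctl is-active --quiet` through ssh_runner inside the VM; the unit counts as running iff the command exits 0. A local stand-in with os/exec (the SSH transport is omitted):

```go
// Local stand-in for the ssh_runner call above: `systemctl is-active --quiet`
// exits 0 only when the unit is active, so Run()'s nil error is the check.
package sketch

import "os/exec"

func kubeletActive() bool {
	// The real test executes this over SSH inside the VM; the shape is the
	// same locally: exit status 0 == active.
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}
```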
	I0829 19:19:34.311100   29935 kubeadm.go:582] duration metric: took 21.675585259s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:19:34.311123   29935 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:19:34.484496   29935 request.go:632] Waited for 173.285409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes
	I0829 19:19:34.484555   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes
	I0829 19:19:34.484560   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:34.484568   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:34.484571   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:34.488457   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:34.489152   29935 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:19:34.489175   29935 node_conditions.go:123] node cpu capacity is 2
	I0829 19:19:34.489184   29935 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:19:34.489189   29935 node_conditions.go:123] node cpu capacity is 2
	I0829 19:19:34.489193   29935 node_conditions.go:105] duration metric: took 178.064585ms to run NodePressure ...
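
The node_conditions lines read capacity straight off each node object: 17734596Ki of ephemeral storage and 2 CPUs per node here. A sketch of the same read, assuming an existing clientset:

```go
// Sketch of the NodePressure/capacity read above: the per-node CPU and
// ephemeral-storage figures come straight from node.Status.Capacity.
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node storage ephemeral capacity is %s, cpu capacity is %s\n",
			storage.String(), cpu.String())
	}
	return nil
}
```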
	I0829 19:19:34.489204   29935 start.go:241] waiting for startup goroutines ...
	I0829 19:19:34.489228   29935 start.go:255] writing updated cluster config ...
	I0829 19:19:34.491344   29935 out.go:201] 
	I0829 19:19:34.492757   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:19:34.492850   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:19:34.494583   29935 out.go:177] * Starting "ha-505269-m03" control-plane node in "ha-505269" cluster
	I0829 19:19:34.495772   29935 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:19:34.495797   29935 cache.go:56] Caching tarball of preloaded images
	I0829 19:19:34.495907   29935 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:19:34.495920   29935 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:19:34.496003   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:19:34.496169   29935 start.go:360] acquireMachinesLock for ha-505269-m03: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:19:34.496212   29935 start.go:364] duration metric: took 25.021µs to acquireMachinesLock for "ha-505269-m03"
	I0829 19:19:34.496231   29935 start.go:93] Provisioning new machine with config: &{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:19:34.496318   29935 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0829 19:19:34.497674   29935 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 19:19:34.497749   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:19:34.497779   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:19:34.513140   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40899
	I0829 19:19:34.513609   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:19:34.514070   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:19:34.514096   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:19:34.514435   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:19:34.514610   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetMachineName
	I0829 19:19:34.514815   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
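
Every "Calling .GetVersion / .GetMachineName / .Create" line above is an RPC into a separate driver process: libmachine launches the docker-machine-driver-kvm2 binary, which serves on a loopback port (127.0.0.1:40899 here). A minimal net/rpc sketch of that plugin pattern; this shows the shape only, not libmachine's actual wire protocol or types:

```go
// Minimal net/rpc sketch of the driver-plugin pattern in the lines above:
// the driver binary serves RPC on a loopback port and the main process
// dials it for every "Calling .X" line.
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

type Driver struct{} // stand-in for the kvm2 driver implementation

func (d *Driver) GetVersion(_ struct{}, reply *int) error {
	*reply = 1 // compare "Using API Version  1" in the log
	return nil
}

func main() {
	srv := rpc.NewServer()
	if err := srv.Register(new(Driver)); err != nil {
		panic(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0") // kernel picks a free port, e.g. :40899
	if err != nil {
		panic(err)
	}
	fmt.Printf("Plugin server listening at address %s\n", ln.Addr())
	for {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		go srv.ServeConn(conn)
	}
}
```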
	I0829 19:19:34.514990   29935 start.go:159] libmachine.API.Create for "ha-505269" (driver="kvm2")
	I0829 19:19:34.515017   29935 client.go:168] LocalClient.Create starting
	I0829 19:19:34.515054   29935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem
	I0829 19:19:34.515091   29935 main.go:141] libmachine: Decoding PEM data...
	I0829 19:19:34.515117   29935 main.go:141] libmachine: Parsing certificate...
	I0829 19:19:34.515171   29935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem
	I0829 19:19:34.515190   29935 main.go:141] libmachine: Decoding PEM data...
	I0829 19:19:34.515200   29935 main.go:141] libmachine: Parsing certificate...
	I0829 19:19:34.515214   29935 main.go:141] libmachine: Running pre-create checks...
	I0829 19:19:34.515221   29935 main.go:141] libmachine: (ha-505269-m03) Calling .PreCreateCheck
	I0829 19:19:34.515379   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetConfigRaw
	I0829 19:19:34.515773   29935 main.go:141] libmachine: Creating machine...
	I0829 19:19:34.515791   29935 main.go:141] libmachine: (ha-505269-m03) Calling .Create
	I0829 19:19:34.515960   29935 main.go:141] libmachine: (ha-505269-m03) Creating KVM machine...
	I0829 19:19:34.517392   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found existing default KVM network
	I0829 19:19:34.517528   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found existing private KVM network mk-ha-505269
	I0829 19:19:34.517662   29935 main.go:141] libmachine: (ha-505269-m03) Setting up store path in /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03 ...
	I0829 19:19:34.517679   29935 main.go:141] libmachine: (ha-505269-m03) Building disk image from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0829 19:19:34.517777   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:34.517674   31162 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:19:34.517832   29935 main.go:141] libmachine: (ha-505269-m03) Downloading /home/jenkins/minikube-integration/19530-11185/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0829 19:19:34.754916   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:34.754791   31162 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa...
	I0829 19:19:35.034973   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:35.034871   31162 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/ha-505269-m03.rawdisk...
	I0829 19:19:35.035002   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Writing magic tar header
	I0829 19:19:35.035012   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Writing SSH key tar header
	I0829 19:19:35.035027   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:35.034975   31162 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03 ...
	I0829 19:19:35.035107   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03
	I0829 19:19:35.035128   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines
	I0829 19:19:35.035137   29935 main.go:141] libmachine: (ha-505269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03 (perms=drwx------)
	I0829 19:19:35.035149   29935 main.go:141] libmachine: (ha-505269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines (perms=drwxr-xr-x)
	I0829 19:19:35.035160   29935 main.go:141] libmachine: (ha-505269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube (perms=drwxr-xr-x)
	I0829 19:19:35.035178   29935 main.go:141] libmachine: (ha-505269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185 (perms=drwxrwxr-x)
	I0829 19:19:35.035190   29935 main.go:141] libmachine: (ha-505269-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 19:19:35.035205   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:19:35.035218   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185
	I0829 19:19:35.035231   29935 main.go:141] libmachine: (ha-505269-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 19:19:35.035246   29935 main.go:141] libmachine: (ha-505269-m03) Creating domain...
	I0829 19:19:35.035278   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 19:19:35.035305   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Checking permissions on dir: /home/jenkins
	I0829 19:19:35.035315   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Checking permissions on dir: /home
	I0829 19:19:35.035326   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Skipping /home - not owner
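	The "Fixing permissions" / "Setting executable bit set" lines above walk from the machine directory upward toward /home, adding the owner search bit so libvirt/qemu can traverse into the disk-image path, and stop at the first directory the current user does not own ("Skipping /home - not owner"). A minimal standalone Go sketch of that walk, assuming Linux and using an illustrative path (this is not minikube's actual common.go):

	    package main

	    import (
	        "fmt"
	        "os"
	        "path/filepath"
	        "syscall"
	    )

	    func main() {
	        uid := os.Getuid()
	        dir := "/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03"
	        for dir != "/" {
	            info, err := os.Stat(dir)
	            if err != nil {
	                break
	            }
	            st, ok := info.Sys().(*syscall.Stat_t) // Linux-specific ownership check
	            if !ok || int(st.Uid) != uid {
	                fmt.Println("Skipping", dir, "- not owner")
	                break
	            }
	            // Add the owner execute (search) bit, keeping the rest of the mode.
	            if err := os.Chmod(dir, info.Mode().Perm()|0o100); err != nil {
	                break
	            }
	            dir = filepath.Dir(dir)
	        }
	    }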
	I0829 19:19:35.036291   29935 main.go:141] libmachine: (ha-505269-m03) define libvirt domain using xml: 
	I0829 19:19:35.036315   29935 main.go:141] libmachine: (ha-505269-m03) <domain type='kvm'>
	I0829 19:19:35.036327   29935 main.go:141] libmachine: (ha-505269-m03)   <name>ha-505269-m03</name>
	I0829 19:19:35.036343   29935 main.go:141] libmachine: (ha-505269-m03)   <memory unit='MiB'>2200</memory>
	I0829 19:19:35.036355   29935 main.go:141] libmachine: (ha-505269-m03)   <vcpu>2</vcpu>
	I0829 19:19:35.036365   29935 main.go:141] libmachine: (ha-505269-m03)   <features>
	I0829 19:19:35.036371   29935 main.go:141] libmachine: (ha-505269-m03)     <acpi/>
	I0829 19:19:35.036376   29935 main.go:141] libmachine: (ha-505269-m03)     <apic/>
	I0829 19:19:35.036382   29935 main.go:141] libmachine: (ha-505269-m03)     <pae/>
	I0829 19:19:35.036389   29935 main.go:141] libmachine: (ha-505269-m03)     
	I0829 19:19:35.036394   29935 main.go:141] libmachine: (ha-505269-m03)   </features>
	I0829 19:19:35.036399   29935 main.go:141] libmachine: (ha-505269-m03)   <cpu mode='host-passthrough'>
	I0829 19:19:35.036426   29935 main.go:141] libmachine: (ha-505269-m03)   
	I0829 19:19:35.036449   29935 main.go:141] libmachine: (ha-505269-m03)   </cpu>
	I0829 19:19:35.036460   29935 main.go:141] libmachine: (ha-505269-m03)   <os>
	I0829 19:19:35.036474   29935 main.go:141] libmachine: (ha-505269-m03)     <type>hvm</type>
	I0829 19:19:35.036486   29935 main.go:141] libmachine: (ha-505269-m03)     <boot dev='cdrom'/>
	I0829 19:19:35.036493   29935 main.go:141] libmachine: (ha-505269-m03)     <boot dev='hd'/>
	I0829 19:19:35.036504   29935 main.go:141] libmachine: (ha-505269-m03)     <bootmenu enable='no'/>
	I0829 19:19:35.036510   29935 main.go:141] libmachine: (ha-505269-m03)   </os>
	I0829 19:19:35.036520   29935 main.go:141] libmachine: (ha-505269-m03)   <devices>
	I0829 19:19:35.036532   29935 main.go:141] libmachine: (ha-505269-m03)     <disk type='file' device='cdrom'>
	I0829 19:19:35.036549   29935 main.go:141] libmachine: (ha-505269-m03)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/boot2docker.iso'/>
	I0829 19:19:35.036564   29935 main.go:141] libmachine: (ha-505269-m03)       <target dev='hdc' bus='scsi'/>
	I0829 19:19:35.036580   29935 main.go:141] libmachine: (ha-505269-m03)       <readonly/>
	I0829 19:19:35.036590   29935 main.go:141] libmachine: (ha-505269-m03)     </disk>
	I0829 19:19:35.036601   29935 main.go:141] libmachine: (ha-505269-m03)     <disk type='file' device='disk'>
	I0829 19:19:35.036614   29935 main.go:141] libmachine: (ha-505269-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 19:19:35.036629   29935 main.go:141] libmachine: (ha-505269-m03)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/ha-505269-m03.rawdisk'/>
	I0829 19:19:35.036644   29935 main.go:141] libmachine: (ha-505269-m03)       <target dev='hda' bus='virtio'/>
	I0829 19:19:35.036655   29935 main.go:141] libmachine: (ha-505269-m03)     </disk>
	I0829 19:19:35.036666   29935 main.go:141] libmachine: (ha-505269-m03)     <interface type='network'>
	I0829 19:19:35.036679   29935 main.go:141] libmachine: (ha-505269-m03)       <source network='mk-ha-505269'/>
	I0829 19:19:35.036690   29935 main.go:141] libmachine: (ha-505269-m03)       <model type='virtio'/>
	I0829 19:19:35.036698   29935 main.go:141] libmachine: (ha-505269-m03)     </interface>
	I0829 19:19:35.036709   29935 main.go:141] libmachine: (ha-505269-m03)     <interface type='network'>
	I0829 19:19:35.036734   29935 main.go:141] libmachine: (ha-505269-m03)       <source network='default'/>
	I0829 19:19:35.036754   29935 main.go:141] libmachine: (ha-505269-m03)       <model type='virtio'/>
	I0829 19:19:35.036761   29935 main.go:141] libmachine: (ha-505269-m03)     </interface>
	I0829 19:19:35.036771   29935 main.go:141] libmachine: (ha-505269-m03)     <serial type='pty'>
	I0829 19:19:35.036784   29935 main.go:141] libmachine: (ha-505269-m03)       <target port='0'/>
	I0829 19:19:35.036794   29935 main.go:141] libmachine: (ha-505269-m03)     </serial>
	I0829 19:19:35.036806   29935 main.go:141] libmachine: (ha-505269-m03)     <console type='pty'>
	I0829 19:19:35.036817   29935 main.go:141] libmachine: (ha-505269-m03)       <target type='serial' port='0'/>
	I0829 19:19:35.036838   29935 main.go:141] libmachine: (ha-505269-m03)     </console>
	I0829 19:19:35.036855   29935 main.go:141] libmachine: (ha-505269-m03)     <rng model='virtio'>
	I0829 19:19:35.036870   29935 main.go:141] libmachine: (ha-505269-m03)       <backend model='random'>/dev/random</backend>
	I0829 19:19:35.036882   29935 main.go:141] libmachine: (ha-505269-m03)     </rng>
	I0829 19:19:35.036894   29935 main.go:141] libmachine: (ha-505269-m03)     
	I0829 19:19:35.036908   29935 main.go:141] libmachine: (ha-505269-m03)     
	I0829 19:19:35.036919   29935 main.go:141] libmachine: (ha-505269-m03)   </devices>
	I0829 19:19:35.036928   29935 main.go:141] libmachine: (ha-505269-m03) </domain>
	I0829 19:19:35.036938   29935 main.go:141] libmachine: (ha-505269-m03) 
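	The block above is the full libvirt domain XML echoed line by line; the driver then defines the domain from that XML and starts it. As a rough standalone illustration of the same define-then-create flow (not the actual driver code), using the Go libvirt bindings with the XML trimmed to a few of the elements shown above:

	    package main

	    import (
	        "log"

	        "libvirt.org/go/libvirt"
	    )

	    func main() {
	        conn, err := libvirt.NewConnect("qemu:///system")
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer conn.Close()

	        // Trimmed version of the domain XML logged above.
	        xml := `<domain type='kvm'>
	      <name>ha-505269-m03</name>
	      <memory unit='MiB'>2200</memory>
	      <vcpu>2</vcpu>
	      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	      <devices>
	        <interface type='network'><source network='mk-ha-505269'/><model type='virtio'/></interface>
	      </devices>
	    </domain>`

	        dom, err := conn.DomainDefineXML(xml) // "define libvirt domain using xml"
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer dom.Free()

	        if err := dom.Create(); err != nil { // starts the VM: "Creating domain..."
	            log.Fatal(err)
	        }
	    }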
	I0829 19:19:35.044327   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:d2:e6:ee in network default
	I0829 19:19:35.044995   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:35.045016   29935 main.go:141] libmachine: (ha-505269-m03) Ensuring networks are active...
	I0829 19:19:35.045932   29935 main.go:141] libmachine: (ha-505269-m03) Ensuring network default is active
	I0829 19:19:35.046213   29935 main.go:141] libmachine: (ha-505269-m03) Ensuring network mk-ha-505269 is active
	I0829 19:19:35.046880   29935 main.go:141] libmachine: (ha-505269-m03) Getting domain xml...
	I0829 19:19:35.047792   29935 main.go:141] libmachine: (ha-505269-m03) Creating domain...
	I0829 19:19:36.274511   29935 main.go:141] libmachine: (ha-505269-m03) Waiting to get IP...
	I0829 19:19:36.275284   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:36.275653   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:36.275706   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:36.275639   31162 retry.go:31] will retry after 188.20001ms: waiting for machine to come up
	I0829 19:19:36.465046   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:36.465597   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:36.465624   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:36.465548   31162 retry.go:31] will retry after 377.645185ms: waiting for machine to come up
	I0829 19:19:36.845154   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:36.845635   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:36.845661   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:36.845594   31162 retry.go:31] will retry after 347.332502ms: waiting for machine to come up
	I0829 19:19:37.194034   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:37.194449   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:37.194477   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:37.194403   31162 retry.go:31] will retry after 437.184773ms: waiting for machine to come up
	I0829 19:19:37.632850   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:37.633345   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:37.633373   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:37.633290   31162 retry.go:31] will retry after 668.581024ms: waiting for machine to come up
	I0829 19:19:38.302978   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:38.303421   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:38.303449   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:38.303345   31162 retry.go:31] will retry after 789.404428ms: waiting for machine to come up
	I0829 19:19:39.094663   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:39.095132   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:39.095159   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:39.095084   31162 retry.go:31] will retry after 835.70112ms: waiting for machine to come up
	I0829 19:19:39.932361   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:39.932837   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:39.932863   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:39.932799   31162 retry.go:31] will retry after 963.297624ms: waiting for machine to come up
	I0829 19:19:40.897752   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:40.898217   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:40.898246   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:40.898130   31162 retry.go:31] will retry after 1.412076203s: waiting for machine to come up
	I0829 19:19:42.311273   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:42.311695   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:42.311736   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:42.311680   31162 retry.go:31] will retry after 2.08425845s: waiting for machine to come up
	I0829 19:19:44.398798   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:44.399233   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:44.399261   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:44.399180   31162 retry.go:31] will retry after 2.054798813s: waiting for machine to come up
	I0829 19:19:46.457039   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:46.457497   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:46.457524   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:46.457442   31162 retry.go:31] will retry after 3.122897743s: waiting for machine to come up
	I0829 19:19:49.582281   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:49.582738   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:49.582760   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:49.582697   31162 retry.go:31] will retry after 4.485998189s: waiting for machine to come up
	I0829 19:19:54.071658   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:54.072099   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:54.072122   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:54.072050   31162 retry.go:31] will retry after 5.029713513s: waiting for machine to come up
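	The growing "will retry after ..." delays above (188ms, 377ms, ... 5.03s) come from a backoff loop that keeps polling the DHCP leases until the new domain reports an IP. A self-contained sketch of that pattern, with illustrative names, constants, and jitter (not minikube's actual retry package):

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // retryExpo calls fn until it succeeds or maxWait elapses, sleeping a
	    // growing, jittered interval between attempts.
	    func retryExpo(fn func() error, initial, maxWait time.Duration) error {
	        deadline := time.Now().Add(maxWait)
	        delay := initial
	        for {
	            if err := fn(); err == nil {
	                return nil
	            }
	            if time.Now().After(deadline) {
	                return errors.New("timed out waiting for machine to come up")
	            }
	            // +/-50% jitter so concurrent waiters do not poll in lockstep.
	            sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
	            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
	            time.Sleep(sleep)
	            delay *= 2
	        }
	    }

	    func main() {
	        attempts := 0
	        _ = retryExpo(func() error {
	            attempts++
	            if attempts < 5 {
	                return errors.New("unable to find current IP address")
	            }
	            return nil
	        }, 200*time.Millisecond, time.Minute)
	    }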
	I0829 19:19:59.107198   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:59.107710   29935 main.go:141] libmachine: (ha-505269-m03) Found IP for machine: 192.168.39.178
	I0829 19:19:59.107734   29935 main.go:141] libmachine: (ha-505269-m03) Reserving static IP address...
	I0829 19:19:59.107750   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has current primary IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:59.108024   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find host DHCP lease matching {name: "ha-505269-m03", mac: "52:54:00:19:9f:90", ip: "192.168.39.178"} in network mk-ha-505269
	I0829 19:19:59.181621   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Getting to WaitForSSH function...
	I0829 19:19:59.181650   29935 main.go:141] libmachine: (ha-505269-m03) Reserved static IP address: 192.168.39.178
	I0829 19:19:59.181664   29935 main.go:141] libmachine: (ha-505269-m03) Waiting for SSH to be available...
	I0829 19:19:59.184315   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:59.184871   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269
	I0829 19:19:59.184905   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find defined IP address of network mk-ha-505269 interface with MAC address 52:54:00:19:9f:90
	I0829 19:19:59.185069   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Using SSH client type: external
	I0829 19:19:59.185089   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa (-rw-------)
	I0829 19:19:59.185116   29935 main.go:141] libmachine: (ha-505269-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:19:59.185133   29935 main.go:141] libmachine: (ha-505269-m03) DBG | About to run SSH command:
	I0829 19:19:59.185147   29935 main.go:141] libmachine: (ha-505269-m03) DBG | exit 0
	I0829 19:19:59.189624   29935 main.go:141] libmachine: (ha-505269-m03) DBG | SSH cmd err, output: exit status 255: 
	I0829 19:19:59.189654   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0829 19:19:59.189680   29935 main.go:141] libmachine: (ha-505269-m03) DBG | command : exit 0
	I0829 19:19:59.189697   29935 main.go:141] libmachine: (ha-505269-m03) DBG | err     : exit status 255
	I0829 19:19:59.189730   29935 main.go:141] libmachine: (ha-505269-m03) DBG | output  : 
	I0829 19:20:02.192226   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Getting to WaitForSSH function...
	I0829 19:20:02.194840   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.195264   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.195313   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.195430   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Using SSH client type: external
	I0829 19:20:02.195451   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa (-rw-------)
	I0829 19:20:02.195479   29935 main.go:141] libmachine: (ha-505269-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:20:02.195497   29935 main.go:141] libmachine: (ha-505269-m03) DBG | About to run SSH command:
	I0829 19:20:02.195511   29935 main.go:141] libmachine: (ha-505269-m03) DBG | exit 0
	I0829 19:20:02.318896   29935 main.go:141] libmachine: (ha-505269-m03) DBG | SSH cmd err, output: <nil>: 
	I0829 19:20:02.319159   29935 main.go:141] libmachine: (ha-505269-m03) KVM machine creation complete!
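	WaitForSSH above shells out to /usr/bin/ssh with the flags shown in the DBG lines and treats a zero exit status from running `exit 0` as proof that the guest's sshd is accepting logins (the first attempt returned 255 because the DHCP lease had not yet appeared). A minimal sketch of that probe; the 3-second poll interval and the key path are assumptions:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // sshReady mirrors the probe in the DBG lines above: a zero exit status
	    // from `ssh ... docker@<ip> "exit 0"` means sshd is up.
	    func sshReady(ip, keyPath string) bool {
	        args := []string{
	            "-F", "/dev/null",
	            "-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
	            "-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
	            "-o", "PasswordAuthentication=no", "-o", "IdentitiesOnly=yes",
	            "-i", keyPath, "-p", "22", "docker@" + ip, "exit 0",
	        }
	        return exec.Command("/usr/bin/ssh", args...).Run() == nil
	    }

	    func main() {
	        ip, key := "192.168.39.178", "/path/to/id_rsa" // illustrative values
	        for !sshReady(ip, key) {
	            fmt.Println("SSH not available yet; retrying in 3s")
	            time.Sleep(3 * time.Second)
	        }
	        fmt.Println("SSH is available")
	    }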
	I0829 19:20:02.319514   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetConfigRaw
	I0829 19:20:02.320111   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:20:02.320288   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:20:02.320462   29935 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 19:20:02.320475   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetState
	I0829 19:20:02.321723   29935 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 19:20:02.321741   29935 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 19:20:02.321750   29935 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 19:20:02.321758   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:02.323988   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.324379   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.324406   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.324527   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:02.324708   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.324871   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.325007   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:02.325169   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:20:02.325450   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0829 19:20:02.325469   29935 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 19:20:02.426012   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:20:02.426032   29935 main.go:141] libmachine: Detecting the provisioner...
	I0829 19:20:02.426042   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:02.428681   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.429120   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.429150   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.429347   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:02.429550   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.429762   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.429961   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:02.430167   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:20:02.430373   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0829 19:20:02.430389   29935 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 19:20:02.531776   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 19:20:02.531840   29935 main.go:141] libmachine: found compatible host: buildroot
	I0829 19:20:02.531850   29935 main.go:141] libmachine: Provisioning with buildroot...
	I0829 19:20:02.531865   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetMachineName
	I0829 19:20:02.532087   29935 buildroot.go:166] provisioning hostname "ha-505269-m03"
	I0829 19:20:02.532119   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetMachineName
	I0829 19:20:02.532285   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:02.534809   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.535097   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.535119   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.535247   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:02.535448   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.535598   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.535740   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:02.535901   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:20:02.536117   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0829 19:20:02.536134   29935 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-505269-m03 && echo "ha-505269-m03" | sudo tee /etc/hostname
	I0829 19:20:02.654141   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-505269-m03
	
	I0829 19:20:02.654173   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:02.657113   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.657449   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.657469   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.657658   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:02.657833   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.657980   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.658091   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:02.658230   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:20:02.658457   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0829 19:20:02.658475   29935 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-505269-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-505269-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-505269-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:20:02.768285   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:20:02.768314   29935 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 19:20:02.768329   29935 buildroot.go:174] setting up certificates
	I0829 19:20:02.768338   29935 provision.go:84] configureAuth start
	I0829 19:20:02.768345   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetMachineName
	I0829 19:20:02.768666   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:20:02.771257   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.771586   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.771626   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.771805   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:02.773889   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.774291   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.774322   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.774561   29935 provision.go:143] copyHostCerts
	I0829 19:20:02.774604   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:20:02.774648   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 19:20:02.774660   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:20:02.774743   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 19:20:02.774837   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:20:02.774860   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 19:20:02.774867   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:20:02.774911   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 19:20:02.774977   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:20:02.775001   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 19:20:02.775009   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:20:02.775042   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 19:20:02.775106   29935 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.ha-505269-m03 san=[127.0.0.1 192.168.39.178 ha-505269-m03 localhost minikube]
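	The server certificate above is minted from the local CA with a SAN set covering the loopback address, the node IP, the hostname, and the "localhost"/"minikube" aliases. A standalone sketch of that generation step with Go's crypto/x509; the file paths, PKCS#1 key encoding, and validity period are assumptions, not minikube's exact provisioner:

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "log"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func must[T any](v T, err error) T {
	        if err != nil {
	            log.Fatal(err)
	        }
	        return v
	    }

	    func main() {
	        // Load the CA pair; paths and PKCS#1 key encoding are assumptions.
	        caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
	        keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
	        caCert := must(x509.ParseCertificate(caBlock.Bytes))
	        caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes))

	        priv := must(rsa.GenerateKey(rand.Reader, 2048))
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(time.Now().UnixNano()),
	            Subject:      pkix.Name{Organization: []string{"jenkins.ha-505269-m03"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().AddDate(3, 0, 0), // illustrative validity
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            // The SAN set from the provision.go line above.
	            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.178")},
	            DNSNames:    []string{"ha-505269-m03", "localhost", "minikube"},
	        }
	        der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey))
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }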
	I0829 19:20:02.955882   29935 provision.go:177] copyRemoteCerts
	I0829 19:20:02.955945   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:20:02.955974   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:02.958280   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.958603   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.958635   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.958788   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:02.958970   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.959130   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:02.959302   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:20:03.042340   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 19:20:03.042415   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 19:20:03.068167   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 19:20:03.068240   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:20:03.092191   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 19:20:03.092268   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 19:20:03.116038   29935 provision.go:87] duration metric: took 347.690012ms to configureAuth
	I0829 19:20:03.116064   29935 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:20:03.116313   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:20:03.116404   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:03.118908   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.119293   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.119314   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.119534   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:03.119724   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:03.119905   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:03.120053   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:03.120217   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:20:03.120393   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0829 19:20:03.120413   29935 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:20:03.336641   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:20:03.336685   29935 main.go:141] libmachine: Checking connection to Docker...
	I0829 19:20:03.336696   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetURL
	I0829 19:20:03.337817   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Using libvirt version 6000000
	I0829 19:20:03.340349   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.340747   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.340780   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.340980   29935 main.go:141] libmachine: Docker is up and running!
	I0829 19:20:03.340998   29935 main.go:141] libmachine: Reticulating splines...
	I0829 19:20:03.341006   29935 client.go:171] duration metric: took 28.825978143s to LocalClient.Create
	I0829 19:20:03.341032   29935 start.go:167] duration metric: took 28.826042143s to libmachine.API.Create "ha-505269"
	I0829 19:20:03.341043   29935 start.go:293] postStartSetup for "ha-505269-m03" (driver="kvm2")
	I0829 19:20:03.341052   29935 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:20:03.341069   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:20:03.341306   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:20:03.341329   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:03.343890   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.344216   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.344238   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.344425   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:03.344631   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:03.344819   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:03.344999   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:20:03.426048   29935 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:20:03.430609   29935 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:20:03.430634   29935 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 19:20:03.430691   29935 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 19:20:03.430759   29935 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 19:20:03.430769   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /etc/ssl/certs/183612.pem
	I0829 19:20:03.430848   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:20:03.440759   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:20:03.466905   29935 start.go:296] duration metric: took 125.851382ms for postStartSetup
	I0829 19:20:03.466952   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetConfigRaw
	I0829 19:20:03.467486   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:20:03.469857   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.470233   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.470258   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.470569   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:20:03.470776   29935 start.go:128] duration metric: took 28.974446004s to createHost
	I0829 19:20:03.470798   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:03.473302   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.473693   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.473717   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.473838   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:03.474033   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:03.474172   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:03.474321   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:03.474469   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:20:03.474659   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0829 19:20:03.474670   29935 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:20:03.579542   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724959203.561300378
	
	I0829 19:20:03.579563   29935 fix.go:216] guest clock: 1724959203.561300378
	I0829 19:20:03.579570   29935 fix.go:229] Guest: 2024-08-29 19:20:03.561300378 +0000 UTC Remote: 2024-08-29 19:20:03.470788126 +0000 UTC m=+155.545240327 (delta=90.512252ms)
	I0829 19:20:03.579584   29935 fix.go:200] guest clock delta is within tolerance: 90.512252ms
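	The clock check above runs `date +%s.%N` on the guest, parses the result, and compares it against the host's wall clock; a delta inside tolerance (here ~90ms) means no resync is needed. A small sketch of the comparison, using the sample captured in the log and an illustrative 2s threshold (the real threshold may differ):

	    package main

	    import (
	        "fmt"
	        "strconv"
	        "time"
	    )

	    func main() {
	        // Output of `date +%s.%N` on the guest, as captured in the log above.
	        guestOut := "1724959203.561300378"
	        secs, err := strconv.ParseFloat(guestOut, 64)
	        if err != nil {
	            panic(err)
	        }
	        guest := time.Unix(0, int64(secs*float64(time.Second)))
	        remote := time.Now() // host wall clock sampled right after the SSH call
	        delta := remote.Sub(guest)
	        if delta < 0 {
	            delta = -delta
	        }
	        const tolerance = 2 * time.Second // illustrative threshold
	        if delta <= tolerance {
	            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	        } else {
	            fmt.Printf("guest clock skewed by %v; a resync would be triggered\n", delta)
	        }
	    }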
	I0829 19:20:03.579590   29935 start.go:83] releasing machines lock for "ha-505269-m03", held for 29.083368859s
	I0829 19:20:03.579612   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:20:03.579904   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:20:03.582421   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.582784   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.582812   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.584850   29935 out.go:177] * Found network options:
	I0829 19:20:03.586360   29935 out.go:177]   - NO_PROXY=192.168.39.56,192.168.39.68
	W0829 19:20:03.587710   29935 proxy.go:119] fail to check proxy env: Error ip not in block
	W0829 19:20:03.587727   29935 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 19:20:03.587738   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:20:03.588190   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:20:03.588357   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:20:03.588449   29935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:20:03.588483   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	W0829 19:20:03.588489   29935 proxy.go:119] fail to check proxy env: Error ip not in block
	W0829 19:20:03.588506   29935 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 19:20:03.588571   29935 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:20:03.588585   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:03.591026   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.591256   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.591415   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.591434   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.591616   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:03.591734   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.591760   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.591787   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:03.591906   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:03.591977   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:03.592055   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:03.592126   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:20:03.592210   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:03.592323   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:20:03.822284   29935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:20:03.829574   29935 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:20:03.829646   29935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:20:03.845095   29935 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:20:03.845117   29935 start.go:495] detecting cgroup driver to use...
	I0829 19:20:03.845169   29935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:20:03.861537   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:20:03.875877   29935 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:20:03.875931   29935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:20:03.889868   29935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:20:03.903770   29935 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:20:04.019620   29935 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:20:04.156255   29935 docker.go:233] disabling docker service ...
	I0829 19:20:04.156321   29935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:20:04.170173   29935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:20:04.183474   29935 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:20:04.331947   29935 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:20:04.448294   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:20:04.463547   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:20:04.481774   29935 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:20:04.481831   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:20:04.492184   29935 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:20:04.492259   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:20:04.502875   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:20:04.513205   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:20:04.524936   29935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:20:04.536110   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:20:04.547762   29935 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:20:04.566081   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:20:04.577901   29935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:20:04.588801   29935 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:20:04.588905   29935 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:20:04.604220   29935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:20:04.615471   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:20:04.733353   29935 ssh_runner.go:195] Run: sudo systemctl restart crio
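	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup) and then restarts CRI-O. A condensed sketch of those edits as a command list; minikube runs each over SSH on the guest, whereas this illustrative version executes them locally:

	    package main

	    import (
	        "log"
	        "os/exec"
	    )

	    func main() {
	        conf := "/etc/crio/crio.conf.d/02-crio.conf"
	        steps := [][]string{
	            // Point CRI-O at the pause image for this Kubernetes version.
	            {"sh", "-c", `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' ` + conf},
	            // Switch the cgroup manager to cgroupfs and pin conmon's cgroup.
	            {"sh", "-c", `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf},
	            {"sh", "-c", `sudo sed -i '/conmon_cgroup = .*/d' ` + conf},
	            {"sh", "-c", `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf},
	            {"sudo", "systemctl", "restart", "crio"},
	        }
	        for _, s := range steps {
	            if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
	                log.Fatalf("%v: %v\n%s", s, err, out)
	            }
	        }
	    }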
	I0829 19:20:04.822230   29935 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:20:04.822306   29935 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:20:04.827551   29935 start.go:563] Will wait 60s for crictl version
	I0829 19:20:04.827605   29935 ssh_runner.go:195] Run: which crictl
	I0829 19:20:04.831455   29935 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:20:04.873126   29935 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:20:04.873208   29935 ssh_runner.go:195] Run: crio --version
	I0829 19:20:04.906487   29935 ssh_runner.go:195] Run: crio --version
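	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup driver, conmon cgroup, unprivileged-port sysctl). A minimal sketch for spot-checking the result on the node, assuming the paths logged above:

	    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup|default_sysctls)' /etc/crio/crio.conf.d/02-crio.conf
	    # crictl reads the endpoint from the /etc/crictl.yaml written above and should answer after the restart
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version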
	I0829 19:20:04.942315   29935 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:20:04.943637   29935 out.go:177]   - env NO_PROXY=192.168.39.56
	I0829 19:20:04.944984   29935 out.go:177]   - env NO_PROXY=192.168.39.56,192.168.39.68
	I0829 19:20:04.946328   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:20:04.948949   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:04.949286   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:04.949316   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:04.949508   29935 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:20:04.953593   29935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
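	The /etc/hosts rewrite above follows a filter-then-append pattern so repeated runs stay idempotent: any stale line for the name is dropped before the current mapping is added back. A generalized sketch (HOSTS_IP and HOSTS_NAME are illustrative placeholders):

	    HOSTS_IP=192.168.39.1; HOSTS_NAME=host.minikube.internal
	    # strip old entries for the name, append the fresh one, then copy back via a temp file
	    { grep -v $'\t'"$HOSTS_NAME"'$' /etc/hosts; printf '%s\t%s\n' "$HOSTS_IP" "$HOSTS_NAME"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts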
	I0829 19:20:04.966965   29935 mustload.go:65] Loading cluster: ha-505269
	I0829 19:20:04.967207   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:20:04.967467   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:20:04.967500   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:20:04.981971   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34151
	I0829 19:20:04.982419   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:20:04.982930   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:20:04.982951   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:20:04.983267   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:20:04.983453   29935 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:20:04.985106   29935 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:20:04.985385   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:20:04.985418   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:20:05.000699   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0829 19:20:05.001114   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:20:05.001584   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:20:05.001608   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:20:05.001896   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:20:05.002065   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:20:05.002305   29935 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269 for IP: 192.168.39.178
	I0829 19:20:05.002316   29935 certs.go:194] generating shared ca certs ...
	I0829 19:20:05.002330   29935 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:20:05.002440   29935 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 19:20:05.002485   29935 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 19:20:05.002494   29935 certs.go:256] generating profile certs ...
	I0829 19:20:05.002584   29935 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key
	I0829 19:20:05.002609   29935 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.7f783c4d
	I0829 19:20:05.002623   29935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.7f783c4d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.56 192.168.39.68 192.168.39.178 192.168.39.254]
	I0829 19:20:05.090673   29935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.7f783c4d ...
	I0829 19:20:05.090706   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.7f783c4d: {Name:mke661f346de8e27968b55f74b54bad926566b3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:20:05.090869   29935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.7f783c4d ...
	I0829 19:20:05.090883   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.7f783c4d: {Name:mk123b199a9849df290cd1ac008da6743489b006 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:20:05.090954   29935 certs.go:381] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.7f783c4d -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt
	I0829 19:20:05.091086   29935 certs.go:385] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.7f783c4d -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key
	I0829 19:20:05.091203   29935 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key
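	The new apiserver certificate is issued with the cluster service IP, the loopback addresses, all three node IPs, and the HA VIP as subject alternative names, so any control plane can serve the same identity. A rough openssl equivalent of that step, assuming the shared minikube CA files and the SAN list logged above (file names are illustrative; minikube generates these in Go rather than shelling out):

	    openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
	      -keyout apiserver.key -out apiserver.csr
	    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	      -days 365 -out apiserver.crt \
	      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.56,IP:192.168.39.68,IP:192.168.39.178,IP:192.168.39.254')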
	I0829 19:20:05.091218   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 19:20:05.091231   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 19:20:05.091241   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 19:20:05.091253   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 19:20:05.091263   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 19:20:05.091275   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 19:20:05.091288   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 19:20:05.091301   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 19:20:05.091346   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 19:20:05.091373   29935 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 19:20:05.091382   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 19:20:05.091404   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 19:20:05.091427   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:20:05.091448   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 19:20:05.091483   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:20:05.091509   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:20:05.091523   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem -> /usr/share/ca-certificates/18361.pem
	I0829 19:20:05.091536   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /usr/share/ca-certificates/183612.pem
	I0829 19:20:05.091572   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:20:05.094798   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:20:05.095226   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:20:05.095253   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:20:05.095500   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:20:05.095748   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:20:05.095950   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:20:05.096072   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:20:05.174923   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0829 19:20:05.180247   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0829 19:20:05.191966   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0829 19:20:05.196198   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0829 19:20:05.206867   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0829 19:20:05.211046   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0829 19:20:05.223089   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0829 19:20:05.227637   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0829 19:20:05.238270   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0829 19:20:05.242359   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0829 19:20:05.252340   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0829 19:20:05.256330   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0829 19:20:05.270874   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:20:05.295272   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 19:20:05.321764   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:20:05.348478   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:20:05.375369   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0829 19:20:05.400128   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 19:20:05.423436   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:20:05.446550   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:20:05.469821   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:20:05.495816   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 19:20:05.521949   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 19:20:05.546193   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0829 19:20:05.562740   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0829 19:20:05.580138   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0829 19:20:05.596539   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0829 19:20:05.614977   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0829 19:20:05.632443   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0829 19:20:05.649051   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0829 19:20:05.665290   29935 ssh_runner.go:195] Run: openssl version
	I0829 19:20:05.671720   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:20:05.682597   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:20:05.687382   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:20:05.687430   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:20:05.693220   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:20:05.704194   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 19:20:05.715666   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 19:20:05.720399   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 19:20:05.720509   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 19:20:05.726123   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 19:20:05.737940   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 19:20:05.749374   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 19:20:05.753711   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 19:20:05.753764   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 19:20:05.759277   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
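	The ln -fs steps above reproduce what c_rehash does: OpenSSL locates trust anchors by a <subject-hash>.0 filename, so each PEM gets a hash-named symlink in /etc/ssl/certs next to its readable name. A sketch of deriving one such link by hand (the hash value depends on the certificate):

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"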
	I0829 19:20:05.775925   29935 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:20:05.781231   29935 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 19:20:05.781290   29935 kubeadm.go:934] updating node {m03 192.168.39.178 8443 v1.31.0 crio true true} ...
	I0829 19:20:05.781368   29935 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-505269-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:20:05.781391   29935 kube-vip.go:115] generating kube-vip config ...
	I0829 19:20:05.781422   29935 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 19:20:05.799179   29935 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 19:20:05.799233   29935 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
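	The static-pod manifest above runs kube-vip with leader election (the plndr-cp-lock lease), ARP announcement of the VIP 192.168.39.254 on eth0, and control-plane load-balancing on port 8443. Two hedged spot checks once the pod is running, to be run against whichever node currently holds leadership:

	    # the elected leader should carry the VIP on its interface
	    ip addr show eth0 | grep 192.168.39.254
	    # the coordination lease backs the vip_leaderelection setting above
	    kubectl -n kube-system get lease plndr-cp-lock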
	I0829 19:20:05.799281   29935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:20:05.809020   29935 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0829 19:20:05.809071   29935 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0829 19:20:05.820291   29935 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0829 19:20:05.820323   29935 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0829 19:20:05.820338   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:20:05.820346   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 19:20:05.820293   29935 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0829 19:20:05.820401   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 19:20:05.820428   29935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 19:20:05.820460   29935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 19:20:05.840900   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 19:20:05.840951   29935 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0829 19:20:05.840972   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0829 19:20:05.841004   29935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 19:20:05.841032   29935 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0829 19:20:05.841065   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0829 19:20:05.851928   29935 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0829 19:20:05.851958   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
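	The kubelet, kubeadm, and kubectl binaries are fetched from dl.k8s.io with the published .sha256 files used as checksums (the checksum= fragment in the URLs above). The same verification done by hand for kubelet:

	    curl -fLO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet
	    curl -fL https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -o kubelet.sha256
	    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check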
	I0829 19:20:06.705142   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0829 19:20:06.714772   29935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0829 19:20:06.732182   29935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:20:06.748618   29935 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0829 19:20:06.765652   29935 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 19:20:06.769891   29935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:20:06.782268   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:20:06.910290   29935 ssh_runner.go:195] Run: sudo systemctl start kubelet
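	With the drop-in and unit file in place and the daemon reloaded, a quick health check on the node could look like this (the journal window size is arbitrary):

	    sudo systemctl is-active kubelet
	    sudo journalctl -u kubelet --no-pager -n 20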
	I0829 19:20:06.930947   29935 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:20:06.931306   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:20:06.931352   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:20:06.947375   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44309
	I0829 19:20:06.947743   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:20:06.948189   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:20:06.948211   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:20:06.948488   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:20:06.948648   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:20:06.948800   29935 start.go:317] joinCluster: &{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:20:06.948937   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0829 19:20:06.948953   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:20:06.951940   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:20:06.952441   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:20:06.952468   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:20:06.952639   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:20:06.952802   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:20:06.952945   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:20:06.953064   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:20:07.125275   29935 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:20:07.125335   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6c593t.iltwpo34orwpj622 --discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-505269-m03 --control-plane --apiserver-advertise-address=192.168.39.178 --apiserver-bind-port=8443"
	I0829 19:20:28.856230   29935 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6c593t.iltwpo34orwpj622 --discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-505269-m03 --control-plane --apiserver-advertise-address=192.168.39.178 --apiserver-bind-port=8443": (21.730873438s)
	I0829 19:20:28.856260   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0829 19:20:29.389873   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-505269-m03 minikube.k8s.io/updated_at=2024_08_29T19_20_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=ha-505269 minikube.k8s.io/primary=false
	I0829 19:20:29.528574   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-505269-m03 node-role.kubernetes.io/control-plane:NoSchedule-
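	The trailing '-' in node-role.kubernetes.io/control-plane:NoSchedule- is kubectl's remove-taint syntax, so the joined control-plane node also schedules regular workloads, while the label step stamps minikube metadata onto the node object. Ad-hoc equivalents from outside the VM, assuming the kubeconfig context matches the profile name:

	    kubectl --context ha-505269 label --overwrite node ha-505269-m03 minikube.k8s.io/primary=false
	    kubectl --context ha-505269 taint node ha-505269-m03 node-role.kubernetes.io/control-plane:NoSchedule-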
	I0829 19:20:29.657985   29935 start.go:319] duration metric: took 22.709181661s to joinCluster
	I0829 19:20:29.658072   29935 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:20:29.658455   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:20:29.659354   29935 out.go:177] * Verifying Kubernetes components...
	I0829 19:20:29.660584   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:20:29.929819   29935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:20:29.980574   29935 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:20:29.980908   29935 kapi.go:59] client config for ha-505269: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.crt", KeyFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key", CAFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0829 19:20:29.981000   29935 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.56:8443
	I0829 19:20:29.981249   29935 node_ready.go:35] waiting up to 6m0s for node "ha-505269-m03" to be "Ready" ...
	I0829 19:20:29.981331   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:29.981343   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:29.981354   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:29.981364   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:29.984947   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:30.481420   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:30.481443   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:30.481453   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:30.481520   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:30.485150   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:30.982262   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:30.982295   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:30.982317   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:30.982321   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:30.986168   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:31.482355   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:31.482375   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:31.482385   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:31.482390   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:31.485877   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:31.982288   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:31.982311   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:31.982319   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:31.982324   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:31.985761   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:31.986377   29935 node_ready.go:53] node "ha-505269-m03" has status "Ready":"False"
	I0829 19:20:32.482114   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:32.482140   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:32.482151   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:32.482159   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:32.486263   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:20:32.982272   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:32.982290   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:32.982298   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:32.982302   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:32.991256   29935 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0829 19:20:33.482440   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:33.482463   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:33.482470   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:33.482473   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:33.485571   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:33.981512   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:33.981537   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:33.981546   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:33.981551   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:33.984753   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:34.481902   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:34.481927   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:34.481939   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:34.481944   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:34.487732   29935 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 19:20:34.488532   29935 node_ready.go:53] node "ha-505269-m03" has status "Ready":"False"
	I0829 19:20:34.982434   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:34.982458   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:34.982468   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:34.982475   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:34.986117   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:35.481581   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:35.481606   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:35.481614   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:35.481619   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:35.485197   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:35.982264   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:35.982285   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:35.982293   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:35.982297   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:35.986014   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:36.482100   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:36.482126   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:36.482137   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:36.482144   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:36.485971   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:36.982458   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:36.982481   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:36.982491   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:36.982497   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:36.986178   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:36.986785   29935 node_ready.go:53] node "ha-505269-m03" has status "Ready":"False"
	I0829 19:20:37.481435   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:37.481458   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:37.481466   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:37.481471   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:37.484702   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:37.981802   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:37.981825   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:37.981835   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:37.981842   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:37.984991   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:38.481513   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:38.481535   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:38.481542   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:38.481547   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:38.484964   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:38.982198   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:38.982218   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:38.982226   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:38.982230   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:38.985989   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:39.481877   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:39.481908   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:39.481918   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:39.481924   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:39.486466   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:20:39.487121   29935 node_ready.go:53] node "ha-505269-m03" has status "Ready":"False"
	I0829 19:20:39.981578   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:39.981600   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:39.981610   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:39.981616   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:39.985393   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:40.481690   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:40.481716   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:40.481728   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:40.481734   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:40.485374   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:40.981506   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:40.981527   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:40.981534   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:40.981540   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:40.984925   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:41.482093   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:41.482114   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:41.482122   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:41.482126   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:41.485704   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:41.981443   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:41.981467   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:41.981477   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:41.981482   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:41.985369   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:41.986006   29935 node_ready.go:53] node "ha-505269-m03" has status "Ready":"False"
	I0829 19:20:42.481835   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:42.481857   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:42.481866   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:42.481871   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:42.485076   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:42.982232   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:42.982254   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:42.982261   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:42.982265   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:42.985977   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:43.482410   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:43.482432   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:43.482441   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:43.482445   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:43.485535   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:43.981511   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:43.981532   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:43.981540   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:43.981544   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:43.984568   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:44.481554   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:44.481583   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:44.481594   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:44.481602   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:44.485552   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:44.486160   29935 node_ready.go:53] node "ha-505269-m03" has status "Ready":"False"
	I0829 19:20:44.982013   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:44.982040   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:44.982051   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:44.982057   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:44.985727   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:45.481987   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:45.482010   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:45.482021   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:45.482030   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:45.486809   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:20:45.981824   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:45.981848   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:45.981858   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:45.981865   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:45.985791   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:46.481866   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:46.481886   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:46.481894   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:46.481897   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:46.484931   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:46.982223   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:46.982247   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:46.982255   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:46.982258   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:46.988590   29935 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0829 19:20:46.989158   29935 node_ready.go:53] node "ha-505269-m03" has status "Ready":"False"
	I0829 19:20:47.481941   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:47.481967   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:47.481978   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:47.481990   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:47.485439   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:47.981786   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:47.981809   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:47.981821   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:47.981827   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:47.985543   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:47.986213   29935 node_ready.go:49] node "ha-505269-m03" has status "Ready":"True"
	I0829 19:20:47.986234   29935 node_ready.go:38] duration metric: took 18.004967564s for node "ha-505269-m03" to be "Ready" ...
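	The loop above polls GET /api/v1/nodes/ha-505269-m03 roughly twice a second until the Ready condition flips to True, which took about 18s here. The same wait expressed with kubectl, assuming the context matches the profile name:

	    kubectl --context ha-505269 wait --for=condition=Ready node/ha-505269-m03 --timeout=6m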
	I0829 19:20:47.986244   29935 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:20:47.986317   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:20:47.986327   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:47.986334   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:47.986344   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:47.995150   29935 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0829 19:20:48.001222   29935 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-bqqq5" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.001293   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-bqqq5
	I0829 19:20:48.001301   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.001308   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.001315   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.004148   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:48.005069   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:48.005084   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.005093   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.005097   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.009467   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:20:48.009934   29935 pod_ready.go:93] pod "coredns-6f6b679f8f-bqqq5" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:48.009951   29935 pod_ready.go:82] duration metric: took 8.70618ms for pod "coredns-6f6b679f8f-bqqq5" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.009962   29935 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qjgfg" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.010013   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qjgfg
	I0829 19:20:48.010023   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.010033   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.010042   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.012554   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:48.013090   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:48.013103   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.013112   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.013117   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.016105   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:48.016709   29935 pod_ready.go:93] pod "coredns-6f6b679f8f-qjgfg" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:48.016730   29935 pod_ready.go:82] duration metric: took 6.760466ms for pod "coredns-6f6b679f8f-qjgfg" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.016742   29935 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.016810   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/etcd-ha-505269
	I0829 19:20:48.016820   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.016827   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.016830   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.019390   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:48.019901   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:48.019917   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.019927   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.019932   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.022379   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:48.022873   29935 pod_ready.go:93] pod "etcd-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:48.022891   29935 pod_ready.go:82] duration metric: took 6.141778ms for pod "etcd-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.022902   29935 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.022959   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/etcd-ha-505269-m02
	I0829 19:20:48.022970   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.022980   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.022988   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.025320   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:48.025822   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:48.025835   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.025844   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.025848   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.028278   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:48.028886   29935 pod_ready.go:93] pod "etcd-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:48.028903   29935 pod_ready.go:82] duration metric: took 5.990484ms for pod "etcd-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.028915   29935 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.182329   29935 request.go:632] Waited for 153.325482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/etcd-ha-505269-m03
	I0829 19:20:48.182397   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/etcd-ha-505269-m03
	I0829 19:20:48.182404   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.182412   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.182416   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.185893   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:48.381959   29935 request.go:632] Waited for 195.278024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:48.382035   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:48.382043   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.382054   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.382063   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.385227   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:48.385761   29935 pod_ready.go:93] pod "etcd-ha-505269-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:48.385777   29935 pod_ready.go:82] duration metric: took 356.852127ms for pod "etcd-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
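
The recurring "Waited for ... due to client-side throttling, not priority and fairness" messages below come from client-go's own token-bucket limiter (default QPS 5, burst 10), not from API Priority and Fairness on the server; the back-to-back pod/node GET pairs are enough to trip it. A short sketch of where those knobs live on rest.Config; the raised values are illustrative:

// A minimal sketch of the client-side throttling configuration. client-go's
// defaults (QPS 5, Burst 10) produce the "Waited for ..." log lines above.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Raising QPS and Burst relaxes the token-bucket limiter that emits
	// "Waited for Xms due to client-side throttling, not priority and fairness".
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}
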
	I0829 19:20:48.385793   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.581962   29935 request.go:632] Waited for 196.112994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269
	I0829 19:20:48.582027   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269
	I0829 19:20:48.582035   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.582045   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.582050   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.585745   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:48.782792   29935 request.go:632] Waited for 196.38951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:48.782865   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:48.782874   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.782883   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.782888   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.786830   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:48.787392   29935 pod_ready.go:93] pod "kube-apiserver-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:48.787418   29935 pod_ready.go:82] duration metric: took 401.617326ms for pod "kube-apiserver-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.787431   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.982370   29935 request.go:632] Waited for 194.872484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269-m02
	I0829 19:20:48.982439   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269-m02
	I0829 19:20:48.982445   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.982452   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.982456   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.985952   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:49.181994   29935 request.go:632] Waited for 195.292396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:49.182057   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:49.182063   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:49.182073   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:49.182079   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:49.185734   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:49.186279   29935 pod_ready.go:93] pod "kube-apiserver-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:49.186301   29935 pod_ready.go:82] duration metric: took 398.861133ms for pod "kube-apiserver-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:49.186316   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:49.382384   29935 request.go:632] Waited for 196.001794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269-m03
	I0829 19:20:49.382463   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269-m03
	I0829 19:20:49.382470   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:49.382480   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:49.382490   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:49.385360   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:49.582503   29935 request.go:632] Waited for 196.233778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:49.582587   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:49.582596   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:49.582602   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:49.582608   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:49.586046   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:49.586618   29935 pod_ready.go:93] pod "kube-apiserver-ha-505269-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:49.586640   29935 pod_ready.go:82] duration metric: took 400.317248ms for pod "kube-apiserver-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:49.586652   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:49.782739   29935 request.go:632] Waited for 196.011295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269
	I0829 19:20:49.782792   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269
	I0829 19:20:49.782798   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:49.782806   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:49.782811   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:49.786083   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:49.982229   29935 request.go:632] Waited for 195.413692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:49.982288   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:49.982295   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:49.982309   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:49.982324   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:49.985812   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:49.986374   29935 pod_ready.go:93] pod "kube-controller-manager-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:49.986391   29935 pod_ready.go:82] duration metric: took 399.731157ms for pod "kube-controller-manager-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:49.986401   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:50.182741   29935 request.go:632] Waited for 196.282501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269-m02
	I0829 19:20:50.182807   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269-m02
	I0829 19:20:50.182815   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:50.182826   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:50.182834   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:50.186455   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:50.382576   29935 request.go:632] Waited for 195.348282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:50.382664   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:50.382677   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:50.382686   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:50.382699   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:50.387644   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:20:50.388308   29935 pod_ready.go:93] pod "kube-controller-manager-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:50.388330   29935 pod_ready.go:82] duration metric: took 401.922653ms for pod "kube-controller-manager-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:50.388339   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:50.582413   29935 request.go:632] Waited for 194.012728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269-m03
	I0829 19:20:50.582507   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269-m03
	I0829 19:20:50.582516   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:50.582524   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:50.582530   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:50.586210   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:50.782734   29935 request.go:632] Waited for 195.349333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:50.782793   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:50.782801   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:50.782810   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:50.782820   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:50.786170   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:50.787164   29935 pod_ready.go:93] pod "kube-controller-manager-ha-505269-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:50.787188   29935 pod_ready.go:82] duration metric: took 398.842979ms for pod "kube-controller-manager-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:50.787199   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hx822" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:50.982280   29935 request.go:632] Waited for 195.006936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hx822
	I0829 19:20:50.982333   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hx822
	I0829 19:20:50.982339   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:50.982347   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:50.982351   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:50.985894   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:51.182836   29935 request.go:632] Waited for 196.376293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:51.182899   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:51.182904   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:51.182911   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:51.182915   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:51.186003   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:51.186628   29935 pod_ready.go:93] pod "kube-proxy-hx822" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:51.186650   29935 pod_ready.go:82] duration metric: took 399.442284ms for pod "kube-proxy-hx822" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:51.186663   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jxbdt" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:51.382672   29935 request.go:632] Waited for 195.919961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxbdt
	I0829 19:20:51.382733   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxbdt
	I0829 19:20:51.382738   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:51.382747   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:51.382751   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:51.385879   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:51.581977   29935 request.go:632] Waited for 195.27634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:51.582034   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:51.582041   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:51.582049   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:51.582055   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:51.585400   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:51.586012   29935 pod_ready.go:93] pod "kube-proxy-jxbdt" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:51.586033   29935 pod_ready.go:82] duration metric: took 399.362235ms for pod "kube-proxy-jxbdt" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:51.586046   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s6zxk" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:51.782211   29935 request.go:632] Waited for 196.09594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6zxk
	I0829 19:20:51.782268   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6zxk
	I0829 19:20:51.782274   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:51.782282   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:51.782288   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:51.786430   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:20:51.982473   29935 request.go:632] Waited for 195.29556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:51.982549   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:51.982560   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:51.982574   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:51.982584   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:51.985805   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:51.986410   29935 pod_ready.go:93] pod "kube-proxy-s6zxk" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:51.986428   29935 pod_ready.go:82] duration metric: took 400.375683ms for pod "kube-proxy-s6zxk" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:51.986437   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:52.182484   29935 request.go:632] Waited for 195.979328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269
	I0829 19:20:52.182549   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269
	I0829 19:20:52.182556   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:52.182566   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:52.182574   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:52.185900   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:52.381961   29935 request.go:632] Waited for 195.503299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:52.382039   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:52.382050   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:52.382061   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:52.382070   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:52.385244   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:52.385651   29935 pod_ready.go:93] pod "kube-scheduler-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:52.385669   29935 pod_ready.go:82] duration metric: took 399.226177ms for pod "kube-scheduler-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:52.385678   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:52.582815   29935 request.go:632] Waited for 197.051311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269-m02
	I0829 19:20:52.582890   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269-m02
	I0829 19:20:52.582898   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:52.582908   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:52.582927   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:52.586478   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:52.782093   29935 request.go:632] Waited for 194.956257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:52.782155   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:52.782175   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:52.782188   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:52.782192   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:52.785743   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:52.786454   29935 pod_ready.go:93] pod "kube-scheduler-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:52.786475   29935 pod_ready.go:82] duration metric: took 400.790166ms for pod "kube-scheduler-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:52.786488   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:52.982701   29935 request.go:632] Waited for 196.131453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269-m03
	I0829 19:20:52.982775   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269-m03
	I0829 19:20:52.982786   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:52.982797   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:52.982807   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:52.986720   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:53.182841   29935 request.go:632] Waited for 195.463735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:53.182893   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:53.182906   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:53.182918   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:53.182931   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:53.186220   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:53.186854   29935 pod_ready.go:93] pod "kube-scheduler-ha-505269-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:53.186873   29935 pod_ready.go:82] duration metric: took 400.378159ms for pod "kube-scheduler-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:53.186887   29935 pod_ready.go:39] duration metric: took 5.200628119s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:20:53.186903   29935 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:20:53.186949   29935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:20:53.202916   29935 api_server.go:72] duration metric: took 23.544810705s to wait for apiserver process to appear ...
	I0829 19:20:53.202938   29935 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:20:53.202954   29935 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I0829 19:20:53.207128   29935 api_server.go:279] https://192.168.39.56:8443/healthz returned 200:
	ok
	I0829 19:20:53.207200   29935 round_trippers.go:463] GET https://192.168.39.56:8443/version
	I0829 19:20:53.207211   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:53.207219   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:53.207222   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:53.208011   29935 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0829 19:20:53.208081   29935 api_server.go:141] control plane version: v1.31.0
	I0829 19:20:53.208098   29935 api_server.go:131] duration metric: took 5.15509ms to wait for apiserver health ...
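
The healthz and /version probes above are plain HTTPS GETs against the apiserver (both endpoints are readable without credentials under default RBAC). A minimal sketch; it skips certificate verification for brevity, whereas a real client would load the cluster CA:

// A minimal sketch of the /healthz probe logged above. InsecureSkipVerify
// is a shortcut for illustration only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.56:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}
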
	I0829 19:20:53.208107   29935 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:20:53.382562   29935 request.go:632] Waited for 174.349701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:20:53.382626   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:20:53.382642   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:53.382653   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:53.382660   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:53.390094   29935 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0829 19:20:53.397222   29935 system_pods.go:59] 24 kube-system pods found
	I0829 19:20:53.397255   29935 system_pods.go:61] "coredns-6f6b679f8f-bqqq5" [801d9cfa-e1ad-4b31-9803-0030543fdc9e] Running
	I0829 19:20:53.397262   29935 system_pods.go:61] "coredns-6f6b679f8f-qjgfg" [12168097-2d3c-467a-b4b5-c0ca7f85e4eb] Running
	I0829 19:20:53.397268   29935 system_pods.go:61] "etcd-ha-505269" [a9cd644c-66f8-419a-be0c-615fc97daf18] Running
	I0829 19:20:53.397274   29935 system_pods.go:61] "etcd-ha-505269-m02" [864d2e94-62a9-4171-87bc-7ec5a3fc6224] Running
	I0829 19:20:53.397279   29935 system_pods.go:61] "etcd-ha-505269-m03" [33af63b3-671f-4404-8499-aea05889ba77] Running
	I0829 19:20:53.397285   29935 system_pods.go:61] "kindnet-7rp6z" [7c922b32-e666-4b00-ab65-505632346112] Running
	I0829 19:20:53.397290   29935 system_pods.go:61] "kindnet-lr2lx" [f12a48e6-faf1-43ea-93bb-21d6526ccd5a] Running
	I0829 19:20:53.397295   29935 system_pods.go:61] "kindnet-sthc8" [3c5a7487-a1b8-4acc-9462-84a2b478f46b] Running
	I0829 19:20:53.397301   29935 system_pods.go:61] "kube-apiserver-ha-505269" [616e3cf5-709a-46a8-8d71-0e709d297ca0] Running
	I0829 19:20:53.397309   29935 system_pods.go:61] "kube-apiserver-ha-505269-m02" [8615f4df-4f47-451a-80c8-d50826a75738] Running
	I0829 19:20:53.397313   29935 system_pods.go:61] "kube-apiserver-ha-505269-m03" [96e976ac-3560-4c87-a5f4-9841ada7162a] Running
	I0829 19:20:53.397320   29935 system_pods.go:61] "kube-controller-manager-ha-505269" [3f81751f-e12f-4a70-a901-db586a66461e] Running
	I0829 19:20:53.397324   29935 system_pods.go:61] "kube-controller-manager-ha-505269-m02" [b0587260-4827-47eb-a3b7-afb5b1fad59b] Running
	I0829 19:20:53.397331   29935 system_pods.go:61] "kube-controller-manager-ha-505269-m03" [ab1975ca-707e-4ac8-9a7e-81f1564b947c] Running
	I0829 19:20:53.397335   29935 system_pods.go:61] "kube-proxy-hx822" [e88a504e-122b-4609-a0cc-4ad3115b3e4e] Running
	I0829 19:20:53.397345   29935 system_pods.go:61] "kube-proxy-jxbdt" [e51729e9-d662-4ea2-9a4f-85f77b269dea] Running
	I0829 19:20:53.397348   29935 system_pods.go:61] "kube-proxy-s6zxk" [77cd7837-5ad2-4775-b909-ea68c0315299] Running
	I0829 19:20:53.397351   29935 system_pods.go:61] "kube-scheduler-ha-505269" [c573cfd8-20ba-46ce-8c0f-b610240ab78d] Running
	I0829 19:20:53.397355   29935 system_pods.go:61] "kube-scheduler-ha-505269-m02" [ba4e7eec-baaa-4c92-84f2-ac50629fea20] Running
	I0829 19:20:53.397358   29935 system_pods.go:61] "kube-scheduler-ha-505269-m03" [1e2254c2-3a7d-42bc-a9ad-669bf55ede4e] Running
	I0829 19:20:53.397361   29935 system_pods.go:61] "kube-vip-ha-505269" [d1734801-9573-45b3-a4a0-9ac45c093b95] Running
	I0829 19:20:53.397364   29935 system_pods.go:61] "kube-vip-ha-505269-m02" [f33d8dab-fb6f-46cf-b508-1e0eae03cad2] Running
	I0829 19:20:53.397367   29935 system_pods.go:61] "kube-vip-ha-505269-m03" [dfc5cf61-552b-42c7-87a1-b311d4dd57b1] Running
	I0829 19:20:53.397369   29935 system_pods.go:61] "storage-provisioner" [6b7cd00a-94da-4e42-b7ae-289aab759c4f] Running
	I0829 19:20:53.397375   29935 system_pods.go:74] duration metric: took 189.259337ms to wait for pod list to return data ...
	I0829 19:20:53.397384   29935 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:20:53.582809   29935 request.go:632] Waited for 185.349391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/default/serviceaccounts
	I0829 19:20:53.582877   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/default/serviceaccounts
	I0829 19:20:53.582885   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:53.582897   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:53.582908   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:53.586282   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:53.586394   29935 default_sa.go:45] found service account: "default"
	I0829 19:20:53.586411   29935 default_sa.go:55] duration metric: took 189.019647ms for default service account to be created ...
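
The default_sa step is a single List of ServiceAccounts in the "default" namespace, looking for one named "default". A minimal sketch of the same check:

// A minimal sketch of the default_sa check above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	sas, err := cs.CoreV1().ServiceAccounts("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sa := range sas.Items {
		if sa.Name == "default" {
			fmt.Println(`found service account: "default"`)
			return
		}
	}
	fmt.Println("default service account not created yet")
}
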
	I0829 19:20:53.586423   29935 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:20:53.782598   29935 request.go:632] Waited for 196.100839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:20:53.782654   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:20:53.782659   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:53.782666   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:53.782670   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:53.787194   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:20:53.793542   29935 system_pods.go:86] 24 kube-system pods found
	I0829 19:20:53.793567   29935 system_pods.go:89] "coredns-6f6b679f8f-bqqq5" [801d9cfa-e1ad-4b31-9803-0030543fdc9e] Running
	I0829 19:20:53.793572   29935 system_pods.go:89] "coredns-6f6b679f8f-qjgfg" [12168097-2d3c-467a-b4b5-c0ca7f85e4eb] Running
	I0829 19:20:53.793576   29935 system_pods.go:89] "etcd-ha-505269" [a9cd644c-66f8-419a-be0c-615fc97daf18] Running
	I0829 19:20:53.793580   29935 system_pods.go:89] "etcd-ha-505269-m02" [864d2e94-62a9-4171-87bc-7ec5a3fc6224] Running
	I0829 19:20:53.793584   29935 system_pods.go:89] "etcd-ha-505269-m03" [33af63b3-671f-4404-8499-aea05889ba77] Running
	I0829 19:20:53.793587   29935 system_pods.go:89] "kindnet-7rp6z" [7c922b32-e666-4b00-ab65-505632346112] Running
	I0829 19:20:53.793590   29935 system_pods.go:89] "kindnet-lr2lx" [f12a48e6-faf1-43ea-93bb-21d6526ccd5a] Running
	I0829 19:20:53.793594   29935 system_pods.go:89] "kindnet-sthc8" [3c5a7487-a1b8-4acc-9462-84a2b478f46b] Running
	I0829 19:20:53.793597   29935 system_pods.go:89] "kube-apiserver-ha-505269" [616e3cf5-709a-46a8-8d71-0e709d297ca0] Running
	I0829 19:20:53.793601   29935 system_pods.go:89] "kube-apiserver-ha-505269-m02" [8615f4df-4f47-451a-80c8-d50826a75738] Running
	I0829 19:20:53.793604   29935 system_pods.go:89] "kube-apiserver-ha-505269-m03" [96e976ac-3560-4c87-a5f4-9841ada7162a] Running
	I0829 19:20:53.793607   29935 system_pods.go:89] "kube-controller-manager-ha-505269" [3f81751f-e12f-4a70-a901-db586a66461e] Running
	I0829 19:20:53.793610   29935 system_pods.go:89] "kube-controller-manager-ha-505269-m02" [b0587260-4827-47eb-a3b7-afb5b1fad59b] Running
	I0829 19:20:53.793615   29935 system_pods.go:89] "kube-controller-manager-ha-505269-m03" [ab1975ca-707e-4ac8-9a7e-81f1564b947c] Running
	I0829 19:20:53.793619   29935 system_pods.go:89] "kube-proxy-hx822" [e88a504e-122b-4609-a0cc-4ad3115b3e4e] Running
	I0829 19:20:53.793623   29935 system_pods.go:89] "kube-proxy-jxbdt" [e51729e9-d662-4ea2-9a4f-85f77b269dea] Running
	I0829 19:20:53.793629   29935 system_pods.go:89] "kube-proxy-s6zxk" [77cd7837-5ad2-4775-b909-ea68c0315299] Running
	I0829 19:20:53.793632   29935 system_pods.go:89] "kube-scheduler-ha-505269" [c573cfd8-20ba-46ce-8c0f-b610240ab78d] Running
	I0829 19:20:53.793636   29935 system_pods.go:89] "kube-scheduler-ha-505269-m02" [ba4e7eec-baaa-4c92-84f2-ac50629fea20] Running
	I0829 19:20:53.793639   29935 system_pods.go:89] "kube-scheduler-ha-505269-m03" [1e2254c2-3a7d-42bc-a9ad-669bf55ede4e] Running
	I0829 19:20:53.793645   29935 system_pods.go:89] "kube-vip-ha-505269" [d1734801-9573-45b3-a4a0-9ac45c093b95] Running
	I0829 19:20:53.793649   29935 system_pods.go:89] "kube-vip-ha-505269-m02" [f33d8dab-fb6f-46cf-b508-1e0eae03cad2] Running
	I0829 19:20:53.793657   29935 system_pods.go:89] "kube-vip-ha-505269-m03" [dfc5cf61-552b-42c7-87a1-b311d4dd57b1] Running
	I0829 19:20:53.793662   29935 system_pods.go:89] "storage-provisioner" [6b7cd00a-94da-4e42-b7ae-289aab759c4f] Running
	I0829 19:20:53.793671   29935 system_pods.go:126] duration metric: took 207.240387ms to wait for k8s-apps to be running ...
	I0829 19:20:53.793680   29935 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:20:53.793721   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:20:53.808978   29935 system_svc.go:56] duration metric: took 15.288575ms WaitForService to wait for kubelet
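
The two ssh_runner commands above (pgrep for the apiserver process at 19:20:53.186949, systemctl is-active for kubelet) run inside the VM over SSH and are judged purely by exit code. Run locally, the equivalent checks look like this sketch, with os/exec standing in for minikube's SSH runner:

// A minimal local sketch of the two ssh_runner exit-code checks above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `pgrep -xnf <pattern>` exits 0 when a matching process exists.
	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
		fmt.Println("kube-apiserver process not found:", err)
	}
	// Mirrors the logged command verbatim; exits 0 when kubelet is active.
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service not active:", err)
	}
}
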
	I0829 19:20:53.809005   29935 kubeadm.go:582] duration metric: took 24.150901223s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:20:53.809022   29935 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:20:53.982419   29935 request.go:632] Waited for 173.320157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes
	I0829 19:20:53.982498   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes
	I0829 19:20:53.982504   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:53.982512   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:53.982517   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:53.986093   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:53.988888   29935 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:20:53.988913   29935 node_conditions.go:123] node cpu capacity is 2
	I0829 19:20:53.988924   29935 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:20:53.988928   29935 node_conditions.go:123] node cpu capacity is 2
	I0829 19:20:53.988931   29935 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:20:53.988934   29935 node_conditions.go:123] node cpu capacity is 2
	I0829 19:20:53.988938   29935 node_conditions.go:105] duration metric: took 179.911843ms to run NodePressure ...
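
The NodePressure step lists all three nodes and reads each node's status.capacity, which is where the ephemeral-storage and cpu figures above come from. A minimal sketch of the same readout:

// A minimal sketch of the node capacity readout above: list nodes and print
// ephemeral-storage and CPU capacity from status.capacity.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
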
	I0829 19:20:53.988948   29935 start.go:241] waiting for startup goroutines ...
	I0829 19:20:53.988965   29935 start.go:255] writing updated cluster config ...
	I0829 19:20:53.989220   29935 ssh_runner.go:195] Run: rm -f paused
	I0829 19:20:54.039742   29935 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:20:54.041943   29935 out.go:177] * Done! kubectl is now configured to use "ha-505269" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.281726825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7b4e4d3-a85b-4582-984f-22d37ed00f96 name=/runtime.v1.RuntimeService/Version
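
These debug entries are CRI-O's side of the kubelet's CRI gRPC calls (Version, ImageFsInfo, ListContainers) on its unix socket. A sketch of the client side of the Version call using the k8s.io/cri-api stubs; the socket path is CRI-O's default and an assumption here:

// A minimal sketch of the CRI Version call logged above, from the client side.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	// Mirrors the VersionResponse in the log: cri-o 1.29.1, CRI API v1.
	fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
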
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.282672798Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09e91cd4-e964-4af8-97bf-070718f57cac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.283196940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959470283170864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09e91cd4-e964-4af8-97bf-070718f57cac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.283764037Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f0f5402-ccfb-40c0-a5fa-b6a396b6bd4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.283831578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f0f5402-ccfb-40c0-a5fa-b6a396b6bd4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.284159345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ed600112468d9762d0ad5c0e554ca486c5cfc592114271d25cd84f5c09187db,PodSandboxId:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724959256520731552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75,PodSandboxId:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112076509057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd,PodSandboxId:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112071160839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbac4cefd2d1e052f3d856dea541761ee436725097d33698eed254a59c810fe,PodSandboxId:2c5d2aad5519947556e7e1a184c260499dd950c4ebd176a2189e8fc06fa32cfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724959111958933793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604,PodSandboxId:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724959100077619706,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c,PodSandboxId:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495909
7303115391,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066b1cbdd3861a8cf28333b764086e47ace1987acec778ff2b2d0aa1973af37a,PodSandboxId:b5a438045598a2d267d39e01f1e41df597c13d1ad48368887838f09efcda52ac,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495908828
3659292,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa47351bb1c351808ace9dd407df7743,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc,PodSandboxId:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959085808486991,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960e616b3c0582c8ccb0ec4fea8d02c0e2bfd2d0f81a273ac203cbf5eb6e4d6c,PodSandboxId:ac333ce918ddeec2a3499825104982d2e3230fd22b85b6e99bace076fdf6e1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959085742080078,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0,PodSandboxId:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959085672946345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b2531e5990db96ea59b71af6b9036995e91f182477ba5d381b96877d38e296,PodSandboxId:0e21f73ac8e2264b4a929fcd33f71b1ffda374ae03b0bd600fb8c7a44c8bef74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959085658194261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f0f5402-ccfb-40c0-a5fa-b6a396b6bd4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.322634598Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e5f3e1a-eef3-4e75-9e43-9768bcdbcf18 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.322709217Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e5f3e1a-eef3-4e75-9e43-9768bcdbcf18 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.323875980Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b88b0eff-5523-452f-9445-febf6a9e09e6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.324526397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959470324501598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b88b0eff-5523-452f-9445-febf6a9e09e6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.325212787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7dc3f541-27e5-4630-9d65-c07894fd3197 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.325265567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7dc3f541-27e5-4630-9d65-c07894fd3197 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.325492838Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ed600112468d9762d0ad5c0e554ca486c5cfc592114271d25cd84f5c09187db,PodSandboxId:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724959256520731552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75,PodSandboxId:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112076509057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd,PodSandboxId:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112071160839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbac4cefd2d1e052f3d856dea541761ee436725097d33698eed254a59c810fe,PodSandboxId:2c5d2aad5519947556e7e1a184c260499dd950c4ebd176a2189e8fc06fa32cfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724959111958933793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604,PodSandboxId:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724959100077619706,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c,PodSandboxId:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495909
7303115391,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066b1cbdd3861a8cf28333b764086e47ace1987acec778ff2b2d0aa1973af37a,PodSandboxId:b5a438045598a2d267d39e01f1e41df597c13d1ad48368887838f09efcda52ac,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495908828
3659292,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa47351bb1c351808ace9dd407df7743,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc,PodSandboxId:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959085808486991,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960e616b3c0582c8ccb0ec4fea8d02c0e2bfd2d0f81a273ac203cbf5eb6e4d6c,PodSandboxId:ac333ce918ddeec2a3499825104982d2e3230fd22b85b6e99bace076fdf6e1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959085742080078,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0,PodSandboxId:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959085672946345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b2531e5990db96ea59b71af6b9036995e91f182477ba5d381b96877d38e296,PodSandboxId:0e21f73ac8e2264b4a929fcd33f71b1ffda374ae03b0bd600fb8c7a44c8bef74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959085658194261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7dc3f541-27e5-4630-9d65-c07894fd3197 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.350143086Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05c145e3-b969-4ce1-b081-880602af7447 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.350411652Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-psss7,Uid:69c11597-6cac-437a-9860-fc1a66cdc304,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959255269093474,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:20:54.949336771Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2c5d2aad5519947556e7e1a184c260499dd950c4ebd176a2189e8fc06fa32cfa,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6b7cd00a-94da-4e42-b7ae-289aab759c4f,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1724959111789428855,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-29T19:18:31.473634295Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-qjgfg,Uid:12168097-2d3c-467a-b4b5-c0ca7f85e4eb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959111782335464,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:18:31.471036509Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-bqqq5,Uid:801d9cfa-e1ad-4b31-9803-0030543fdc9e,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1724959111772398657,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801d9cfa-e1ad-4b31-9803-0030543fdc9e,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:18:31.464611802Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&PodSandboxMetadata{Name:kube-proxy-hx822,Uid:e88a504e-122b-4609-a0cc-4ad3115b3e4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959097198560581,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-08-29T19:18:16.877349396Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&PodSandboxMetadata{Name:kindnet-7rp6z,Uid:7c922b32-e666-4b00-ab65-505632346112,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959097191816001,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:18:16.875685703Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0e21f73ac8e2264b4a929fcd33f71b1ffda374ae03b0bd600fb8c7a44c8bef74,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-505269,Uid:d82b78fe84e206c02eb995f9d886b23c,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1724959085483628539,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.56:8443,kubernetes.io/config.hash: d82b78fe84e206c02eb995f9d886b23c,kubernetes.io/config.seen: 2024-08-29T19:18:04.402568147Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&PodSandboxMetadata{Name:etcd-ha-505269,Uid:2658d81c7919220d900309ffd29970c4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959085474793507,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658
d81c7919220d900309ffd29970c4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.56:2379,kubernetes.io/config.hash: 2658d81c7919220d900309ffd29970c4,kubernetes.io/config.seen: 2024-08-29T19:18:04.402563696Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-505269,Uid:ceba94f2170a08ee5a3d92beb3c9ffca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959085464800201,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ceba94f2170a08ee5a3d92beb3c9ffca,kubernetes.io/config.seen: 2024-08-29T19:18:04.402570367Z,kubernetes.io/config.source: file
,},RuntimeHandler:,},&PodSandbox{Id:b5a438045598a2d267d39e01f1e41df597c13d1ad48368887838f09efcda52ac,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-505269,Uid:aa47351bb1c351808ace9dd407df7743,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959085460961550,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa47351bb1c351808ace9dd407df7743,},Annotations:map[string]string{kubernetes.io/config.hash: aa47351bb1c351808ace9dd407df7743,kubernetes.io/config.seen: 2024-08-29T19:18:04.402571045Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ac333ce918ddeec2a3499825104982d2e3230fd22b85b6e99bace076fdf6e1dd,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-505269,Uid:840e9d9d59afee1514ac6551d154c955,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959085454295081,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.con
tainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 840e9d9d59afee1514ac6551d154c955,kubernetes.io/config.seen: 2024-08-29T19:18:04.402569371Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=05c145e3-b969-4ce1-b081-880602af7447 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.351146949Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=edcca953-ef43-4ee1-9319-7363c56b5905 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.351209075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=edcca953-ef43-4ee1-9319-7363c56b5905 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.352241016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ed600112468d9762d0ad5c0e554ca486c5cfc592114271d25cd84f5c09187db,PodSandboxId:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724959256520731552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75,PodSandboxId:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112076509057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd,PodSandboxId:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112071160839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbac4cefd2d1e052f3d856dea541761ee436725097d33698eed254a59c810fe,PodSandboxId:2c5d2aad5519947556e7e1a184c260499dd950c4ebd176a2189e8fc06fa32cfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724959111958933793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604,PodSandboxId:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724959100077619706,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c,PodSandboxId:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495909
7303115391,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066b1cbdd3861a8cf28333b764086e47ace1987acec778ff2b2d0aa1973af37a,PodSandboxId:b5a438045598a2d267d39e01f1e41df597c13d1ad48368887838f09efcda52ac,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495908828
3659292,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa47351bb1c351808ace9dd407df7743,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc,PodSandboxId:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959085808486991,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960e616b3c0582c8ccb0ec4fea8d02c0e2bfd2d0f81a273ac203cbf5eb6e4d6c,PodSandboxId:ac333ce918ddeec2a3499825104982d2e3230fd22b85b6e99bace076fdf6e1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959085742080078,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0,PodSandboxId:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959085672946345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b2531e5990db96ea59b71af6b9036995e91f182477ba5d381b96877d38e296,PodSandboxId:0e21f73ac8e2264b4a929fcd33f71b1ffda374ae03b0bd600fb8c7a44c8bef74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959085658194261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=edcca953-ef43-4ee1-9319-7363c56b5905 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.364554400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c86a670-807c-4fba-9468-458c9d0ee066 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.364615995Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c86a670-807c-4fba-9468-458c9d0ee066 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.365714457Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1549cd79-54fe-4b74-936a-58a748abbf9c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.366288182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959470366266173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1549cd79-54fe-4b74-936a-58a748abbf9c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.366754607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3570a14-4b65-4270-91c6-fe1e473a254a name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.366801754Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3570a14-4b65-4270-91c6-fe1e473a254a name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:24:30 ha-505269 crio[665]: time="2024-08-29 19:24:30.367069583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ed600112468d9762d0ad5c0e554ca486c5cfc592114271d25cd84f5c09187db,PodSandboxId:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724959256520731552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75,PodSandboxId:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112076509057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd,PodSandboxId:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112071160839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbac4cefd2d1e052f3d856dea541761ee436725097d33698eed254a59c810fe,PodSandboxId:2c5d2aad5519947556e7e1a184c260499dd950c4ebd176a2189e8fc06fa32cfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724959111958933793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604,PodSandboxId:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724959100077619706,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c,PodSandboxId:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495909
7303115391,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066b1cbdd3861a8cf28333b764086e47ace1987acec778ff2b2d0aa1973af37a,PodSandboxId:b5a438045598a2d267d39e01f1e41df597c13d1ad48368887838f09efcda52ac,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495908828
3659292,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa47351bb1c351808ace9dd407df7743,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc,PodSandboxId:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959085808486991,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960e616b3c0582c8ccb0ec4fea8d02c0e2bfd2d0f81a273ac203cbf5eb6e4d6c,PodSandboxId:ac333ce918ddeec2a3499825104982d2e3230fd22b85b6e99bace076fdf6e1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959085742080078,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0,PodSandboxId:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959085672946345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b2531e5990db96ea59b71af6b9036995e91f182477ba5d381b96877d38e296,PodSandboxId:0e21f73ac8e2264b4a929fcd33f71b1ffda374ae03b0bd600fb8c7a44c8bef74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959085658194261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3570a14-4b65-4270-91c6-fe1e473a254a name=/runtime.v1.RuntimeService/ListContainers
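The trace above is the CRI round-trip behind this report: a Version handshake, an ImageFsInfo query, and unfiltered ListContainers/ListPodSandbox calls, each tagged with a per-request id. Assuming shell access to the node (e.g. via minikube ssh -p ha-505269), the same data can be pulled by hand with crictl against the socket reported under "describe nodes" below; this is a sketch for reference, not part of the captured run:

	# list every container in any state (mirrors ListContainers with no filter)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	# list ready pod sandboxes (mirrors ListPodSandbox with State:SANDBOX_READY)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods --state ready
	# image filesystem usage (mirrors ImageFsInfo)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo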
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7ed600112468d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   02692297ba9f1       busybox-7dff88458-psss7
	29d7e6c72fdaa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   f43b8211e2c73       coredns-6f6b679f8f-qjgfg
	1bc1a33f68ce7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   6b5276c7cbe29       coredns-6f6b679f8f-bqqq5
	ccbac4cefd2d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   2c5d2aad55199       storage-provisioner
	f5e9dd792be09       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   1f6e4f500f959       kindnet-7rp6z
	9b0cc96d9477c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   303aabbca3328       kube-proxy-hx822
	066b1cbdd3861       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   b5a438045598a       kube-vip-ha-505269
	52fd2d668a925       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   9d45b7206f46e       etcd-ha-505269
	960e616b3c058       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      6 minutes ago       Running             kube-controller-manager   0                   ac333ce918dde       kube-controller-manager-ha-505269
	d1f91ce133bed       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      6 minutes ago       Running             kube-scheduler            0                   8cfa0a246feb4       kube-scheduler-ha-505269
	65b2531e5990d       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      6 minutes ago       Running             kube-apiserver            0                   0e21f73ac8e22       kube-apiserver-ha-505269
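Each row above pairs a truncated container id with its pod sandbox id. To drill into a single entry, the ids can be fed back to crictl, which accepts unique id prefixes; the commands below are an illustrative sketch using the kube-apiserver ids from the table:

	# full JSON status of the kube-apiserver container
	sudo crictl inspect 65b2531e5990d
	# status of its pod sandbox
	sudo crictl inspectp 0e21f73ac8e22
	# last 50 log lines from that container
	sudo crictl logs --tail 50 65b2531e5990d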
	
	
	==> coredns [1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd] <==
	[INFO] 10.244.2.2:51225 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283224s
	[INFO] 10.244.2.2:42081 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.040666885s
	[INFO] 10.244.2.2:56495 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000200893s
	[INFO] 10.244.1.2:37640 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114272s
	[INFO] 10.244.1.2:53661 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151146s
	[INFO] 10.244.1.2:33472 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001240454s
	[INFO] 10.244.1.2:57944 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155411s
	[INFO] 10.244.0.4:39369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096421s
	[INFO] 10.244.0.4:46246 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077614s
	[INFO] 10.244.0.4:49913 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073523s
	[INFO] 10.244.0.4:48970 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00128236s
	[INFO] 10.244.0.4:55431 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000054415s
	[INFO] 10.244.0.4:54011 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100096s
	[INFO] 10.244.0.4:57804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008517s
	[INFO] 10.244.2.2:41131 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117965s
	[INFO] 10.244.1.2:45186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106338s
	[INFO] 10.244.1.2:55754 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090657s
	[INFO] 10.244.0.4:56674 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071901s
	[INFO] 10.244.2.2:38366 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108693s
	[INFO] 10.244.2.2:46323 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210179s
	[INFO] 10.244.1.2:45861 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134603s
	[INFO] 10.244.1.2:56113 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085692s
	[INFO] 10.244.1.2:56364 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124593s
	[INFO] 10.244.1.2:47826 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121887s
	[INFO] 10.244.0.4:45102 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000150401s
	
	
	==> coredns [29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75] <==
	[INFO] 10.244.0.4:54551 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000109037s
	[INFO] 10.244.0.4:40210 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001965246s
	[INFO] 10.244.2.2:39736 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000182353s
	[INFO] 10.244.2.2:34550 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003442197s
	[INFO] 10.244.2.2:57439 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145221s
	[INFO] 10.244.2.2:51088 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167632s
	[INFO] 10.244.2.2:52731 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113512s
	[INFO] 10.244.1.2:53021 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120423s
	[INFO] 10.244.1.2:34110 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001929986s
	[INFO] 10.244.1.2:43142 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092944s
	[INFO] 10.244.1.2:53648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107032s
	[INFO] 10.244.0.4:57451 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001845348s
	[INFO] 10.244.2.2:52124 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171457s
	[INFO] 10.244.2.2:35561 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076513s
	[INFO] 10.244.2.2:43265 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081638s
	[INFO] 10.244.1.2:37225 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147344s
	[INFO] 10.244.1.2:48252 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148007s
	[INFO] 10.244.0.4:60295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013086s
	[INFO] 10.244.0.4:48577 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072897s
	[INFO] 10.244.0.4:48965 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087209s
	[INFO] 10.244.2.2:54597 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016109s
	[INFO] 10.244.2.2:38187 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150915s
	[INFO] 10.244.0.4:36462 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093452s
	[INFO] 10.244.0.4:43748 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071292s
	[INFO] 10.244.0.4:55783 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059972s
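Each CoreDNS line above follows the log plugin's format: client ip:port, a query counter, then "TYPE CLASS name. proto size do bufsize" in quotes, followed by the response code, response flags (qr, aa, rd, ra), the reply size in bytes, and the lookup duration. The NXDOMAIN answers for names like "kubernetes.default." are ordinary search-path expansion by the client resolver, not resolution failures. A lookup can be reproduced in-cluster with a throwaway pod (dns-probe is an illustrative name, following the busybox pattern used elsewhere in this run):

	kubectl --context ha-505269 run --rm -it dns-probe \
	  --image=gcr.io/k8s-minikube/busybox --restart=Never \
	  -- nslookup kubernetes.default.svc.cluster.local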
	
	
	==> describe nodes <==
	Name:               ha-505269
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-505269
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=ha-505269
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T19_18_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:18:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-505269
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:24:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:21:18 +0000   Thu, 29 Aug 2024 19:18:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:21:18 +0000   Thu, 29 Aug 2024 19:18:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:21:18 +0000   Thu, 29 Aug 2024 19:18:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:21:18 +0000   Thu, 29 Aug 2024 19:18:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.56
	  Hostname:    ha-505269
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fddeecce7ac74aa7bff3cef388a156b1
	  System UUID:                fddeecce-7ac7-4aa7-bff3-cef388a156b1
	  Boot ID:                    1446f3e5-6319-4e2f-82e2-8ba9409f038f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-psss7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 coredns-6f6b679f8f-bqqq5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m14s
	  kube-system                 coredns-6f6b679f8f-qjgfg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m14s
	  kube-system                 etcd-ha-505269                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m16s
	  kube-system                 kindnet-7rp6z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m14s
	  kube-system                 kube-apiserver-ha-505269             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-controller-manager-ha-505269    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-proxy-hx822                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-scheduler-ha-505269             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-vip-ha-505269                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m13s                  kube-proxy       
	  Normal  Starting                 6m26s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     6m25s (x7 over 6m26s)  kubelet          Node ha-505269 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m25s (x8 over 6m26s)  kubelet          Node ha-505269 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s (x8 over 6m26s)  kubelet          Node ha-505269 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m16s                  kubelet          Node ha-505269 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m16s                  kubelet          Node ha-505269 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m16s                  kubelet          Node ha-505269 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	  Normal  NodeReady                5m59s                  kubelet          Node ha-505269 status is now: NodeReady
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	
	
	Name:               ha-505269-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-505269-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=ha-505269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_19_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:19:09 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-505269-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:22:02 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 29 Aug 2024 19:21:12 +0000   Thu, 29 Aug 2024 19:22:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 29 Aug 2024 19:21:12 +0000   Thu, 29 Aug 2024 19:22:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 29 Aug 2024 19:21:12 +0000   Thu, 29 Aug 2024 19:22:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 29 Aug 2024 19:21:12 +0000   Thu, 29 Aug 2024 19:22:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-505269-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc422cc060b34981a3c71775f3af90fa
	  System UUID:                dc422cc0-60b3-4981-a3c7-1775f3af90fa
	  Boot ID:                    b7d47e7c-23e6-4f3e-94a2-225e21964c8c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hcgzg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-ha-505269-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-sthc8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-505269-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-ha-505269-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-jxbdt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-505269-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-vip-ha-505269-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m21s)  kubelet          Node ha-505269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m21s)  kubelet          Node ha-505269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m21s)  kubelet          Node ha-505269-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-505269-m02 status is now: NodeNotReady
	
	
	Name:               ha-505269-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-505269-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=ha-505269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_20_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:20:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-505269-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:24:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:21:27 +0000   Thu, 29 Aug 2024 19:20:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:21:27 +0000   Thu, 29 Aug 2024 19:20:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:21:27 +0000   Thu, 29 Aug 2024 19:20:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:21:27 +0000   Thu, 29 Aug 2024 19:20:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    ha-505269-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fc042d3e84d419187ce4fd6ad6a07e3
	  System UUID:                7fc042d3-e84d-4191-87ce-4fd6ad6a07e3
	  Boot ID:                    cd9654bc-a3a1-4a00-b205-32dcc7ba1371
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2fh45                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-ha-505269-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m3s
	  kube-system                 kindnet-lr2lx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m5s
	  kube-system                 kube-apiserver-ha-505269-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-ha-505269-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-proxy-s6zxk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-ha-505269-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-vip-ha-505269-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-505269-m03 event: Registered Node ha-505269-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m5s (x8 over 4m5s)  kubelet          Node ha-505269-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x8 over 4m5s)  kubelet          Node ha-505269-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x7 over 4m5s)  kubelet          Node ha-505269-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-505269-m03 event: Registered Node ha-505269-m03 in Controller
	  Normal  RegisteredNode           3m56s                node-controller  Node ha-505269-m03 event: Registered Node ha-505269-m03 in Controller
	
	
	Name:               ha-505269-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-505269-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=ha-505269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_21_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:21:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-505269-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:24:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:22:00 +0000   Thu, 29 Aug 2024 19:21:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:22:00 +0000   Thu, 29 Aug 2024 19:21:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:22:00 +0000   Thu, 29 Aug 2024 19:21:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:22:00 +0000   Thu, 29 Aug 2024 19:21:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-505269-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a48c94cc9aca47538967ceed34ba2fed
	  System UUID:                a48c94cc-9aca-4753-8967-ceed34ba2fed
	  Boot ID:                    94ee343a-8690-48fe-a78a-9f0eed928227
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5lkbf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-b5p66    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m55s            kube-proxy       
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet          Node ha-505269-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet          Node ha-505269-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m)  kubelet          Node ha-505269-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s            node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal  RegisteredNode           2m57s            node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal  RegisteredNode           2m56s            node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal  NodeReady                2m41s            kubelet          Node ha-505269-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug29 19:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050955] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040050] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.788550] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.497610] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.581059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.251780] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.063740] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055892] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.200106] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.121292] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.278954] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.975871] systemd-fstab-generator[753]: Ignoring "noauto" option for root device
	[Aug29 19:18] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.064250] kauditd_printk_skb: 158 callbacks suppressed
	[  +8.795220] kauditd_printk_skb: 79 callbacks suppressed
	[  +1.256542] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +6.194607] kauditd_printk_skb: 54 callbacks suppressed
	[Aug29 19:19] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc] <==
	{"level":"warn","ts":"2024-08-29T19:24:30.626228Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.635220Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.639307Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.648385Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.659556Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.675162Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.680063Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.684346Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.692101Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.702409Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.709175Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.709772Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.714840Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.718915Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.724791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.733463Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.739801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.743494Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.746856Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.750664Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.760112Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.766150Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.809061Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.835542Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:24:30.845678Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:24:30 up 6 min,  0 users,  load average: 0.48, 0.53, 0.28
	Linux ha-505269 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604] <==
	I0829 19:23:51.259497       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:24:01.264870       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:24:01.265052       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:24:01.265237       1 main.go:295] Handling node with IPs: map[192.168.39.178:{}]
	I0829 19:24:01.265266       1 main.go:322] Node ha-505269-m03 has CIDR [10.244.2.0/24] 
	I0829 19:24:01.265336       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:24:01.265358       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:24:01.265439       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:24:01.265459       1 main.go:299] handling current node
	I0829 19:24:11.264756       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:24:11.264798       1 main.go:299] handling current node
	I0829 19:24:11.264812       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:24:11.264817       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:24:11.265045       1 main.go:295] Handling node with IPs: map[192.168.39.178:{}]
	I0829 19:24:11.265052       1 main.go:322] Node ha-505269-m03 has CIDR [10.244.2.0/24] 
	I0829 19:24:11.265129       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:24:11.265151       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:24:21.259102       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:24:21.259255       1 main.go:299] handling current node
	I0829 19:24:21.259284       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:24:21.259303       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:24:21.259466       1 main.go:295] Handling node with IPs: map[192.168.39.178:{}]
	I0829 19:24:21.259493       1 main.go:322] Node ha-505269-m03 has CIDR [10.244.2.0/24] 
	I0829 19:24:21.259570       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:24:21.259589       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
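	
	The kindnet entries above are single passes of its periodic reconciliation loop: it walks every node, records each node's IP and PodCIDR, and treats the local node specially ("handling current node"). A minimal Go sketch of such a loop, with node data copied from the log and route programming replaced by a print; this is an illustration of the pattern, not kindnet's code:
	
	package main
	
	import "fmt"
	
	type node struct {
		name string
		ip   string
		cidr string
	}
	
	func main() {
		nodes := []node{
			{"ha-505269", "192.168.39.56", "10.244.0.0/24"},
			{"ha-505269-m02", "192.168.39.68", "10.244.1.0/24"},
			{"ha-505269-m03", "192.168.39.178", "10.244.2.0/24"},
			{"ha-505269-m04", "192.168.39.101", "10.244.3.0/24"},
		}
		const self = "ha-505269"
		for _, n := range nodes {
			if n.name == self {
				fmt.Println("handling current node")
				continue
			}
			// A real agent would program: ip route replace <cidr> via <ip>
			fmt.Printf("route %s via %s (%s)\n", n.cidr, n.ip, n.name)
		}
	}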
	
	
	==> kube-apiserver [65b2531e5990db96ea59b71af6b9036995e91f182477ba5d381b96877d38e296] <==
	I0829 19:18:14.422957       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 19:18:14.436643       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0829 19:18:14.444895       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 19:18:16.522373       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0829 19:18:16.728762       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0829 19:20:26.726355       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0829 19:20:26.726657       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 10.582µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0829 19:20:26.727797       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0829 19:20:26.729075       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0829 19:20:26.730327       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.000383ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0829 19:20:57.644511       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55132: use of closed network connection
	E0829 19:20:57.838367       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55144: use of closed network connection
	E0829 19:20:58.019896       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55166: use of closed network connection
	E0829 19:20:58.240285       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55194: use of closed network connection
	E0829 19:20:58.423371       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55222: use of closed network connection
	E0829 19:20:58.606211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55246: use of closed network connection
	E0829 19:20:58.781714       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55270: use of closed network connection
	E0829 19:20:58.954357       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55280: use of closed network connection
	E0829 19:20:59.125211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55290: use of closed network connection
	E0829 19:20:59.418090       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55312: use of closed network connection
	E0829 19:20:59.592142       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55330: use of closed network connection
	E0829 19:20:59.762687       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55350: use of closed network connection
	E0829 19:20:59.935901       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55374: use of closed network connection
	E0829 19:21:00.111552       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55396: use of closed network connection
	E0829 19:21:00.285217       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55412: use of closed network connection
	
	
	==> kube-controller-manager [960e616b3c0582c8ccb0ec4fea8d02c0e2bfd2d0f81a273ac203cbf5eb6e4d6c] <==
	I0829 19:21:30.427794       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-505269-m04" podCIDRs=["10.244.3.0/24"]
	I0829 19:21:30.427933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:30.428057       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:30.443076       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:30.519078       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:30.945173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:31.001063       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-505269-m04"
	I0829 19:21:31.066448       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:33.127656       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:33.238447       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:34.634773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:34.717155       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:40.501644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:49.774119       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-505269-m04"
	I0829 19:21:49.774408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:49.792045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:51.020797       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:22:00.645170       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:22:44.664748       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m02"
	I0829 19:22:44.664838       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-505269-m04"
	I0829 19:22:44.688247       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m02"
	I0829 19:22:44.765526       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.581395ms"
	I0829 19:22:44.765899       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="98.463µs"
	I0829 19:22:46.068957       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m02"
	I0829 19:22:49.919964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m02"
	
	
	==> kube-proxy [9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:18:17.512883       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:18:17.526500       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.56"]
	E0829 19:18:17.526650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:18:17.569558       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:18:17.569593       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:18:17.569654       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:18:17.573082       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:18:17.573395       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:18:17.573558       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:18:17.576139       1 config.go:197] "Starting service config controller"
	I0829 19:18:17.576284       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:18:17.576358       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:18:17.576385       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:18:17.579686       1 config.go:326] "Starting node config controller"
	I0829 19:18:17.579742       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:18:17.677501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 19:18:17.677687       1 shared_informer.go:320] Caches are synced for service config
	I0829 19:18:17.680371       1 shared_informer.go:320] Caches are synced for node config
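	
	The kube-proxy startup above first tries to clean up nftables state, gets "Operation not supported" from this kernel for both the ip and ip6 tables, and then proceeds with the iptables Proxier in single-stack IPv4 mode. A minimal Go sketch of that probe-and-fall-back pattern, assuming the nft binary is present and the process has the privileges for the probe to succeed; it is not kube-proxy's actual startup code:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func nftablesUsable() bool {
		// Feed a tiny ruleset to `nft -f -`: create and delete a throwaway
		// table. Any failure (missing binary, no kernel support, no
		// privileges) is treated as "nftables unusable".
		cmd := exec.Command("nft", "-f", "-")
		cmd.Stdin = strings.NewReader("add table ip probe\ndelete table ip probe\n")
		return cmd.Run() == nil
	}
	
	func main() {
		if nftablesUsable() {
			fmt.Println("using nftables backend")
		} else {
			fmt.Println("nftables unsupported here; falling back to iptables")
		}
	}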
	
	
	==> kube-scheduler [d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0] <==
	E0829 19:18:10.456254       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 19:18:12.513942       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0829 19:20:54.912335       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hcgzg\": pod busybox-7dff88458-hcgzg is already assigned to node \"ha-505269-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hcgzg" node="ha-505269-m02"
	E0829 19:20:54.912578       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c2ae2636-418f-474d-8cc0-8b35b6a63726(default/busybox-7dff88458-hcgzg) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-hcgzg"
	E0829 19:20:54.912642       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hcgzg\": pod busybox-7dff88458-hcgzg is already assigned to node \"ha-505269-m02\"" pod="default/busybox-7dff88458-hcgzg"
	I0829 19:20:54.912696       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-hcgzg" node="ha-505269-m02"
	E0829 19:20:54.968272       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2fh45\": pod busybox-7dff88458-2fh45 is already assigned to node \"ha-505269-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-2fh45" node="ha-505269-m03"
	E0829 19:20:54.968416       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 0dad74f1-a221-4897-96b7-109169b8c6d0(default/busybox-7dff88458-2fh45) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-2fh45"
	E0829 19:20:54.968500       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2fh45\": pod busybox-7dff88458-2fh45 is already assigned to node \"ha-505269-m03\"" pod="default/busybox-7dff88458-2fh45"
	I0829 19:20:54.968611       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-2fh45" node="ha-505269-m03"
	E0829 19:21:30.486227       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-czplj\": pod kindnet-czplj is already assigned to node \"ha-505269-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-czplj" node="ha-505269-m04"
	E0829 19:21:30.486338       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-czplj\": pod kindnet-czplj is already assigned to node \"ha-505269-m04\"" pod="kube-system/kindnet-czplj"
	I0829 19:21:30.486392       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-czplj" node="ha-505269-m04"
	E0829 19:21:30.486895       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-b5p66\": pod kube-proxy-b5p66 is already assigned to node \"ha-505269-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-b5p66" node="ha-505269-m04"
	E0829 19:21:30.487063       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f908ff83-8bf9-44a0-bda1-98f00b910faa(kube-system/kube-proxy-b5p66) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-b5p66"
	E0829 19:21:30.487084       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-b5p66\": pod kube-proxy-b5p66 is already assigned to node \"ha-505269-m04\"" pod="kube-system/kube-proxy-b5p66"
	I0829 19:21:30.487103       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-b5p66" node="ha-505269-m04"
	E0829 19:21:30.554445       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-shg8j\": pod kube-proxy-shg8j is already assigned to node \"ha-505269-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-shg8j" node="ha-505269-m04"
	E0829 19:21:30.554616       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 05405fa6-d40f-446d-ad32-18b243d7b162(kube-system/kube-proxy-shg8j) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-shg8j"
	E0829 19:21:30.554728       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-shg8j\": pod kube-proxy-shg8j is already assigned to node \"ha-505269-m04\"" pod="kube-system/kube-proxy-shg8j"
	I0829 19:21:30.554863       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-shg8j" node="ha-505269-m04"
	E0829 19:21:30.555526       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5lkbf\": pod kindnet-5lkbf is already assigned to node \"ha-505269-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5lkbf" node="ha-505269-m04"
	E0829 19:21:30.558296       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 112e2462-a26a-4f91-a405-dab3468f9071(kube-system/kindnet-5lkbf) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5lkbf"
	E0829 19:21:30.559049       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5lkbf\": pod kindnet-5lkbf is already assigned to node \"ha-505269-m04\"" pod="kube-system/kindnet-5lkbf"
	I0829 19:21:30.559106       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5lkbf" node="ha-505269-m04"
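	
	The scheduler errors above are the benign bind races expected with multiple control planes: a second scheduler's Bind for an already-assigned pod fails with a conflict, and the pod is deliberately not added back to the queue. A minimal Go sketch replaying that sequence against a toy assignment cache; the pod and node names are taken from the log, and the logic is illustrative only:
	
	package main
	
	import "fmt"
	
	// bind emulates the optimistic Bind step against a shared assignment
	// cache: the first writer wins, and a later attempt for the same pod
	// gets an "already assigned" conflict.
	func bind(bound map[string]string, pod, node string) error {
		if n, ok := bound[pod]; ok {
			return fmt.Errorf("pod %s is already assigned to node %q", pod, n)
		}
		bound[pod] = node
		return nil
	}
	
	func main() {
		bound := map[string]string{}
		// Two control planes race to bind the same pod; the race is
		// replayed sequentially here for clarity.
		for _, node := range []string{"ha-505269-m02", "ha-505269-m02"} {
			if err := bind(bound, "busybox-7dff88458-hcgzg", node); err != nil {
				fmt.Println(err, "- aborting, not adding it back to the queue")
				continue
			}
			fmt.Println("bound to", node)
		}
	}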
	
	
	==> kubelet <==
	Aug 29 19:23:14 ha-505269 kubelet[1314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:23:14 ha-505269 kubelet[1314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:23:14 ha-505269 kubelet[1314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:23:14 ha-505269 kubelet[1314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:23:14 ha-505269 kubelet[1314]: E0829 19:23:14.472117    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959394471788489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:23:14 ha-505269 kubelet[1314]: E0829 19:23:14.472164    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959394471788489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:23:24 ha-505269 kubelet[1314]: E0829 19:23:24.473645    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959404473278340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:23:24 ha-505269 kubelet[1314]: E0829 19:23:24.473953    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959404473278340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:23:34 ha-505269 kubelet[1314]: E0829 19:23:34.476635    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959414476133397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:23:34 ha-505269 kubelet[1314]: E0829 19:23:34.477194    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959414476133397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:23:44 ha-505269 kubelet[1314]: E0829 19:23:44.479121    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959424478695745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:23:44 ha-505269 kubelet[1314]: E0829 19:23:44.479493    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959424478695745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:23:54 ha-505269 kubelet[1314]: E0829 19:23:54.481906    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959434481545665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:23:54 ha-505269 kubelet[1314]: E0829 19:23:54.481933    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959434481545665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:24:04 ha-505269 kubelet[1314]: E0829 19:24:04.484356    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959444483891124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:24:04 ha-505269 kubelet[1314]: E0829 19:24:04.484798    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959444483891124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:24:14 ha-505269 kubelet[1314]: E0829 19:24:14.380483    1314 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:24:14 ha-505269 kubelet[1314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:24:14 ha-505269 kubelet[1314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:24:14 ha-505269 kubelet[1314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:24:14 ha-505269 kubelet[1314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:24:14 ha-505269 kubelet[1314]: E0829 19:24:14.487395    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959454486775043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:24:14 ha-505269 kubelet[1314]: E0829 19:24:14.487456    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959454486775043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:24:24 ha-505269 kubelet[1314]: E0829 19:24:24.489243    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959464488866465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:24:24 ha-505269 kubelet[1314]: E0829 19:24:24.489290    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959464488866465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-505269 -n ha-505269
helpers_test.go:261: (dbg) Run:  kubectl --context ha-505269 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.93s)
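Two errors recur in the kubelet log above. The iptables canary fails because the guest kernel exposes no ip6tables `nat' table, i.e. the ip6table_nat module is not loaded (the "Ignoring deprecated --wait-interval option" line is only a warning; exit status 3 comes from the missing table). The eviction-manager "missing image stats" error appears to be a kubelet/CRI-O mismatch: the ImageFsInfoResponse carries an empty ContainerFilesystems list, which the v1.31 eviction manager treats as missing stats. A minimal check for the first issue, assuming SSH access to the node (profile name taken from the log; this is a diagnostic sketch, not part of the test):

    # load the ip6tables nat module if absent, then confirm the table exists
    out/minikube-linux-amd64 -p ha-505269 ssh "lsmod | grep ip6table_nat || sudo modprobe ip6table_nat"
    out/minikube-linux-amd64 -p ha-505269 ssh "sudo ip6tables -t nat -L -n | head -n 3"

If the module loads, the KUBE-KUBELET-CANARY chain should be creatable and the canary error should stop repeating.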

TestMultiControlPlane/serial/RestartSecondaryNode (56.58s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr: exit status 3 (3.192516888s)

-- stdout --
	ha-505269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-505269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0829 19:24:35.336761   35139 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:24:35.336859   35139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:24:35.336867   35139 out.go:358] Setting ErrFile to fd 2...
	I0829 19:24:35.336871   35139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:24:35.337035   35139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:24:35.337191   35139 out.go:352] Setting JSON to false
	I0829 19:24:35.337215   35139 mustload.go:65] Loading cluster: ha-505269
	I0829 19:24:35.337274   35139 notify.go:220] Checking for updates...
	I0829 19:24:35.337560   35139 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:24:35.337573   35139 status.go:255] checking status of ha-505269 ...
	I0829 19:24:35.337911   35139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:35.337971   35139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:35.353092   35139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43685
	I0829 19:24:35.353501   35139 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:35.354035   35139 main.go:141] libmachine: Using API Version  1
	I0829 19:24:35.354059   35139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:35.354419   35139 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:35.354662   35139 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:24:35.356291   35139 status.go:330] ha-505269 host status = "Running" (err=<nil>)
	I0829 19:24:35.356318   35139 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:24:35.356688   35139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:35.356744   35139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:35.371744   35139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46465
	I0829 19:24:35.372252   35139 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:35.372725   35139 main.go:141] libmachine: Using API Version  1
	I0829 19:24:35.372748   35139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:35.373045   35139 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:35.373231   35139 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:24:35.376121   35139 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:35.376582   35139 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:24:35.376612   35139 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:35.376713   35139 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:24:35.377047   35139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:35.377080   35139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:35.391837   35139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
	I0829 19:24:35.392234   35139 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:35.392682   35139 main.go:141] libmachine: Using API Version  1
	I0829 19:24:35.392699   35139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:35.393024   35139 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:35.393207   35139 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:24:35.393384   35139 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:35.393414   35139 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:24:35.396278   35139 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:35.396684   35139 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:24:35.396712   35139 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:35.396868   35139 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:24:35.397058   35139 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:24:35.397187   35139 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:24:35.397310   35139 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:24:35.482330   35139 ssh_runner.go:195] Run: systemctl --version
	I0829 19:24:35.489265   35139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:24:35.503634   35139 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:24:35.503667   35139 api_server.go:166] Checking apiserver status ...
	I0829 19:24:35.503696   35139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:24:35.517278   35139 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup
	W0829 19:24:35.527821   35139 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:24:35.527876   35139 ssh_runner.go:195] Run: ls
	I0829 19:24:35.532801   35139 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:24:35.538118   35139 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:24:35.538144   35139 status.go:422] ha-505269 apiserver status = Running (err=<nil>)
	I0829 19:24:35.538155   35139 status.go:257] ha-505269 status: &{Name:ha-505269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:24:35.538217   35139 status.go:255] checking status of ha-505269-m02 ...
	I0829 19:24:35.538527   35139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:35.538582   35139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:35.554070   35139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45903
	I0829 19:24:35.554426   35139 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:35.554920   35139 main.go:141] libmachine: Using API Version  1
	I0829 19:24:35.554941   35139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:35.555219   35139 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:35.555400   35139 main.go:141] libmachine: (ha-505269-m02) Calling .GetState
	I0829 19:24:35.556935   35139 status.go:330] ha-505269-m02 host status = "Running" (err=<nil>)
	I0829 19:24:35.556950   35139 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:24:35.557294   35139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:35.557332   35139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:35.571817   35139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0829 19:24:35.572255   35139 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:35.572803   35139 main.go:141] libmachine: Using API Version  1
	I0829 19:24:35.572838   35139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:35.573236   35139 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:35.573404   35139 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:24:35.576212   35139 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:35.576666   35139 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:24:35.576690   35139 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:35.576817   35139 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:24:35.577232   35139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:35.577275   35139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:35.591538   35139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44927
	I0829 19:24:35.591955   35139 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:35.592391   35139 main.go:141] libmachine: Using API Version  1
	I0829 19:24:35.592420   35139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:35.592752   35139 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:35.592924   35139 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:24:35.593102   35139 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:35.593121   35139 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:24:35.595886   35139 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:35.596272   35139 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:24:35.596293   35139 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:35.596411   35139 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:24:35.596582   35139 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:24:35.596697   35139 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:24:35.596827   35139 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	W0829 19:24:38.146786   35139 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0829 19:24:38.146892   35139 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0829 19:24:38.146913   35139 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:24:38.146920   35139 status.go:257] ha-505269-m02 status: &{Name:ha-505269-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 19:24:38.146936   35139 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:24:38.146944   35139 status.go:255] checking status of ha-505269-m03 ...
	I0829 19:24:38.147251   35139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:38.147293   35139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:38.163928   35139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41959
	I0829 19:24:38.164359   35139 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:38.164884   35139 main.go:141] libmachine: Using API Version  1
	I0829 19:24:38.164912   35139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:38.165285   35139 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:38.165497   35139 main.go:141] libmachine: (ha-505269-m03) Calling .GetState
	I0829 19:24:38.167086   35139 status.go:330] ha-505269-m03 host status = "Running" (err=<nil>)
	I0829 19:24:38.167101   35139 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:24:38.167445   35139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:38.167483   35139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:38.181817   35139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0829 19:24:38.182225   35139 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:38.182707   35139 main.go:141] libmachine: Using API Version  1
	I0829 19:24:38.182726   35139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:38.183011   35139 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:38.183192   35139 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:24:38.185891   35139 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:38.186269   35139 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:24:38.186284   35139 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:38.186410   35139 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:24:38.186729   35139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:38.186760   35139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:38.200874   35139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I0829 19:24:38.201311   35139 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:38.201785   35139 main.go:141] libmachine: Using API Version  1
	I0829 19:24:38.201807   35139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:38.202107   35139 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:38.202278   35139 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:24:38.202457   35139 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:38.202480   35139 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:24:38.205105   35139 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:38.205484   35139 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:24:38.205507   35139 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:38.205639   35139 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:24:38.205788   35139 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:24:38.205934   35139 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:24:38.206063   35139 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:24:38.286724   35139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:24:38.302612   35139 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:24:38.302642   35139 api_server.go:166] Checking apiserver status ...
	I0829 19:24:38.302680   35139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:24:38.317574   35139 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup
	W0829 19:24:38.326662   35139 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:24:38.326712   35139 ssh_runner.go:195] Run: ls
	I0829 19:24:38.331895   35139 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:24:38.336106   35139 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:24:38.336132   35139 status.go:422] ha-505269-m03 apiserver status = Running (err=<nil>)
	I0829 19:24:38.336140   35139 status.go:257] ha-505269-m03 status: &{Name:ha-505269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:24:38.336154   35139 status.go:255] checking status of ha-505269-m04 ...
	I0829 19:24:38.336421   35139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:38.336451   35139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:38.351116   35139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40647
	I0829 19:24:38.351490   35139 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:38.351876   35139 main.go:141] libmachine: Using API Version  1
	I0829 19:24:38.351892   35139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:38.352223   35139 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:38.352385   35139 main.go:141] libmachine: (ha-505269-m04) Calling .GetState
	I0829 19:24:38.353833   35139 status.go:330] ha-505269-m04 host status = "Running" (err=<nil>)
	I0829 19:24:38.353853   35139 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:24:38.354136   35139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:38.354170   35139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:38.368683   35139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43785
	I0829 19:24:38.369134   35139 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:38.369579   35139 main.go:141] libmachine: Using API Version  1
	I0829 19:24:38.369598   35139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:38.369958   35139 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:38.370158   35139 main.go:141] libmachine: (ha-505269-m04) Calling .GetIP
	I0829 19:24:38.372796   35139 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:38.373247   35139 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:24:38.373273   35139 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:38.373339   35139 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:24:38.373680   35139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:38.373719   35139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:38.388234   35139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41553
	I0829 19:24:38.388633   35139 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:38.389096   35139 main.go:141] libmachine: Using API Version  1
	I0829 19:24:38.389117   35139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:38.389450   35139 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:38.389593   35139 main.go:141] libmachine: (ha-505269-m04) Calling .DriverName
	I0829 19:24:38.389769   35139 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:38.389790   35139 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHHostname
	I0829 19:24:38.392321   35139 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:38.392815   35139 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:24:38.392838   35139 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:38.392976   35139 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHPort
	I0829 19:24:38.393125   35139 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHKeyPath
	I0829 19:24:38.393271   35139 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHUsername
	I0829 19:24:38.393397   35139 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m04/id_rsa Username:docker}
	I0829 19:24:38.473670   35139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:24:38.487724   35139 status.go:257] ha-505269-m04 status: &{Name:ha-505269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
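The exit status 3 above is driven entirely by m02: every SSH dial to 192.168.39.68:22 fails with `no route to host`, so `status` reports host: Error and kubelet/apiserver: Nonexistent even though `node start m02` had already been issued; the guest's network was evidently not back up yet. The `unable to find freezer cgroup` warnings on the healthy nodes are benign: on a cgroup v2 guest there is no freezer hierarchy, and the check falls through to the healthz probe, which returns 200. A manual reachability probe, assuming the default libvirt URI and the names/addresses shown in the log:

    virsh -c qemu:///system domstate ha-505269-m02   # is the VM actually running?
    ping -c1 -W2 192.168.39.68                       # layer-3 reachability
    nc -zvw2 192.168.39.68 22                        # is sshd reachable?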
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr: exit status 3 (4.847931398s)

-- stdout --
	ha-505269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-505269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0829 19:24:39.830400   35239 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:24:39.830663   35239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:24:39.830673   35239 out.go:358] Setting ErrFile to fd 2...
	I0829 19:24:39.830677   35239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:24:39.830854   35239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:24:39.831020   35239 out.go:352] Setting JSON to false
	I0829 19:24:39.831043   35239 mustload.go:65] Loading cluster: ha-505269
	I0829 19:24:39.831156   35239 notify.go:220] Checking for updates...
	I0829 19:24:39.831434   35239 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:24:39.831447   35239 status.go:255] checking status of ha-505269 ...
	I0829 19:24:39.831826   35239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:39.831881   35239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:39.849604   35239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35153
	I0829 19:24:39.849986   35239 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:39.850531   35239 main.go:141] libmachine: Using API Version  1
	I0829 19:24:39.850584   35239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:39.850916   35239 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:39.851112   35239 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:24:39.852648   35239 status.go:330] ha-505269 host status = "Running" (err=<nil>)
	I0829 19:24:39.852664   35239 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:24:39.852952   35239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:39.852985   35239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:39.867127   35239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46875
	I0829 19:24:39.867483   35239 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:39.867960   35239 main.go:141] libmachine: Using API Version  1
	I0829 19:24:39.867978   35239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:39.868264   35239 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:39.868447   35239 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:24:39.871251   35239 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:39.871731   35239 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:24:39.871763   35239 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:39.871904   35239 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:24:39.872282   35239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:39.872334   35239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:39.887231   35239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41243
	I0829 19:24:39.887617   35239 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:39.888066   35239 main.go:141] libmachine: Using API Version  1
	I0829 19:24:39.888090   35239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:39.888403   35239 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:39.888613   35239 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:24:39.888828   35239 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:39.888854   35239 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:24:39.891629   35239 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:39.892063   35239 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:24:39.892096   35239 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:39.892240   35239 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:24:39.892400   35239 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:24:39.892553   35239 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:24:39.892686   35239 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:24:39.978814   35239 ssh_runner.go:195] Run: systemctl --version
	I0829 19:24:39.987175   35239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:24:40.003919   35239 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:24:40.003962   35239 api_server.go:166] Checking apiserver status ...
	I0829 19:24:40.004002   35239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:24:40.019117   35239 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup
	W0829 19:24:40.029098   35239 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:24:40.029147   35239 ssh_runner.go:195] Run: ls
	I0829 19:24:40.033591   35239 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:24:40.037460   35239 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:24:40.037479   35239 status.go:422] ha-505269 apiserver status = Running (err=<nil>)
	I0829 19:24:40.037488   35239 status.go:257] ha-505269 status: &{Name:ha-505269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:24:40.037502   35239 status.go:255] checking status of ha-505269-m02 ...
	I0829 19:24:40.037786   35239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:40.037824   35239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:40.052619   35239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36645
	I0829 19:24:40.053040   35239 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:40.053514   35239 main.go:141] libmachine: Using API Version  1
	I0829 19:24:40.053536   35239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:40.053831   35239 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:40.053983   35239 main.go:141] libmachine: (ha-505269-m02) Calling .GetState
	I0829 19:24:40.055385   35239 status.go:330] ha-505269-m02 host status = "Running" (err=<nil>)
	I0829 19:24:40.055402   35239 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:24:40.055743   35239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:40.055781   35239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:40.069825   35239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36441
	I0829 19:24:40.070166   35239 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:40.070664   35239 main.go:141] libmachine: Using API Version  1
	I0829 19:24:40.070688   35239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:40.070994   35239 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:40.071187   35239 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:24:40.073862   35239 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:40.074291   35239 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:24:40.074311   35239 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:40.074502   35239 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:24:40.074823   35239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:40.074854   35239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:40.089265   35239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45375
	I0829 19:24:40.089629   35239 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:40.090094   35239 main.go:141] libmachine: Using API Version  1
	I0829 19:24:40.090113   35239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:40.090423   35239 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:40.090622   35239 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:24:40.090788   35239 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:40.090808   35239 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:24:40.093383   35239 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:40.093739   35239 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:24:40.093764   35239 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:40.093928   35239 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:24:40.094065   35239 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:24:40.094191   35239 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:24:40.094299   35239 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	W0829 19:24:41.218855   35239 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:24:41.218912   35239 retry.go:31] will retry after 227.430994ms: dial tcp 192.168.39.68:22: connect: no route to host
	W0829 19:24:44.294804   35239 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0829 19:24:44.294907   35239 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0829 19:24:44.294931   35239 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:24:44.294944   35239 status.go:257] ha-505269-m02 status: &{Name:ha-505269-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 19:24:44.294962   35239 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:24:44.294984   35239 status.go:255] checking status of ha-505269-m03 ...
	I0829 19:24:44.295309   35239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:44.295360   35239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:44.310629   35239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37635
	I0829 19:24:44.311042   35239 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:44.311534   35239 main.go:141] libmachine: Using API Version  1
	I0829 19:24:44.311559   35239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:44.311880   35239 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:44.312044   35239 main.go:141] libmachine: (ha-505269-m03) Calling .GetState
	I0829 19:24:44.313500   35239 status.go:330] ha-505269-m03 host status = "Running" (err=<nil>)
	I0829 19:24:44.313515   35239 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:24:44.313803   35239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:44.313835   35239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:44.328457   35239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44539
	I0829 19:24:44.328890   35239 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:44.329373   35239 main.go:141] libmachine: Using API Version  1
	I0829 19:24:44.329395   35239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:44.329735   35239 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:44.329923   35239 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:24:44.332855   35239 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:44.333363   35239 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:24:44.333403   35239 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:44.333562   35239 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:24:44.333968   35239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:44.334010   35239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:44.349859   35239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46537
	I0829 19:24:44.350231   35239 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:44.350723   35239 main.go:141] libmachine: Using API Version  1
	I0829 19:24:44.350753   35239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:44.351112   35239 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:44.351291   35239 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:24:44.351453   35239 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:44.351471   35239 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:24:44.354275   35239 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:44.354784   35239 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:24:44.354822   35239 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:44.354980   35239 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:24:44.355155   35239 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:24:44.355312   35239 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:24:44.355455   35239 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:24:44.434642   35239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:24:44.448790   35239 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:24:44.448813   35239 api_server.go:166] Checking apiserver status ...
	I0829 19:24:44.448845   35239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:24:44.461700   35239 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup
	W0829 19:24:44.470411   35239 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:24:44.470461   35239 ssh_runner.go:195] Run: ls
	I0829 19:24:44.474968   35239 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:24:44.479184   35239 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:24:44.479208   35239 status.go:422] ha-505269-m03 apiserver status = Running (err=<nil>)
	I0829 19:24:44.479216   35239 status.go:257] ha-505269-m03 status: &{Name:ha-505269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:24:44.479229   35239 status.go:255] checking status of ha-505269-m04 ...
	I0829 19:24:44.479513   35239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:44.479545   35239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:44.498113   35239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39343
	I0829 19:24:44.498519   35239 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:44.499062   35239 main.go:141] libmachine: Using API Version  1
	I0829 19:24:44.499084   35239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:44.499377   35239 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:44.499579   35239 main.go:141] libmachine: (ha-505269-m04) Calling .GetState
	I0829 19:24:44.501408   35239 status.go:330] ha-505269-m04 host status = "Running" (err=<nil>)
	I0829 19:24:44.501421   35239 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:24:44.501735   35239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:44.501773   35239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:44.515999   35239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41785
	I0829 19:24:44.516423   35239 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:44.516883   35239 main.go:141] libmachine: Using API Version  1
	I0829 19:24:44.516910   35239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:44.517188   35239 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:44.517354   35239 main.go:141] libmachine: (ha-505269-m04) Calling .GetIP
	I0829 19:24:44.520041   35239 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:44.520462   35239 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:24:44.520497   35239 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:44.520617   35239 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:24:44.520918   35239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:44.520959   35239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:44.535324   35239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34845
	I0829 19:24:44.535661   35239 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:44.536099   35239 main.go:141] libmachine: Using API Version  1
	I0829 19:24:44.536117   35239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:44.536375   35239 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:44.536496   35239 main.go:141] libmachine: (ha-505269-m04) Calling .DriverName
	I0829 19:24:44.536618   35239 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:44.536632   35239 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHHostname
	I0829 19:24:44.539235   35239 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:44.539583   35239 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:24:44.539600   35239 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:44.539778   35239 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHPort
	I0829 19:24:44.539966   35239 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHKeyPath
	I0829 19:24:44.540111   35239 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHUsername
	I0829 19:24:44.540214   35239 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m04/id_rsa Username:docker}
	I0829 19:24:44.621542   35239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:24:44.636059   35239 status.go:257] ha-505269-m04 status: &{Name:ha-505269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
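ha_test.go:428 keeps re-running `status`, and each run exits 3 while m02 remains unreachable; the `retry.go:31] will retry after 227.430994ms` line is the SSH dialer's internal backoff, not the test's polling. A rough manual equivalent of waiting for the node to come back (the interval is an assumption, not the harness's):

    # poll status until all hosts report healthy (exit code 0)
    until out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr; do
        sleep 5
    done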
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr: exit status 3 (4.961584554s)

-- stdout --
	ha-505269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-505269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0829 19:24:45.851894   35356 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:24:45.852263   35356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:24:45.852275   35356 out.go:358] Setting ErrFile to fd 2...
	I0829 19:24:45.852282   35356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:24:45.852753   35356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:24:45.853145   35356 out.go:352] Setting JSON to false
	I0829 19:24:45.853176   35356 mustload.go:65] Loading cluster: ha-505269
	I0829 19:24:45.853274   35356 notify.go:220] Checking for updates...
	I0829 19:24:45.853584   35356 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:24:45.853607   35356 status.go:255] checking status of ha-505269 ...
	I0829 19:24:45.853984   35356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:45.854058   35356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:45.873501   35356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34949
	I0829 19:24:45.873882   35356 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:45.874399   35356 main.go:141] libmachine: Using API Version  1
	I0829 19:24:45.874424   35356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:45.874790   35356 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:45.874986   35356 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:24:45.876562   35356 status.go:330] ha-505269 host status = "Running" (err=<nil>)
	I0829 19:24:45.876577   35356 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:24:45.876975   35356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:45.877018   35356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:45.891983   35356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
	I0829 19:24:45.892386   35356 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:45.892888   35356 main.go:141] libmachine: Using API Version  1
	I0829 19:24:45.892914   35356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:45.893270   35356 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:45.893466   35356 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:24:45.896320   35356 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:45.896827   35356 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:24:45.896861   35356 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:45.897005   35356 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:24:45.897295   35356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:45.897339   35356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:45.911947   35356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
	I0829 19:24:45.912424   35356 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:45.912874   35356 main.go:141] libmachine: Using API Version  1
	I0829 19:24:45.912898   35356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:45.913201   35356 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:45.913377   35356 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:24:45.913543   35356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:45.913564   35356 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:24:45.916279   35356 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:45.916685   35356 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:24:45.916698   35356 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:45.916849   35356 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:24:45.917049   35356 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:24:45.917162   35356 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:24:45.917311   35356 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:24:46.002142   35356 ssh_runner.go:195] Run: systemctl --version
	I0829 19:24:46.008991   35356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:24:46.034185   35356 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:24:46.034225   35356 api_server.go:166] Checking apiserver status ...
	I0829 19:24:46.034261   35356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:24:46.048026   35356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup
	W0829 19:24:46.059378   35356 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:24:46.059435   35356 ssh_runner.go:195] Run: ls
	I0829 19:24:46.064438   35356 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:24:46.070189   35356 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:24:46.070211   35356 status.go:422] ha-505269 apiserver status = Running (err=<nil>)
	I0829 19:24:46.070223   35356 status.go:257] ha-505269 status: &{Name:ha-505269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:24:46.070242   35356 status.go:255] checking status of ha-505269-m02 ...
	I0829 19:24:46.070525   35356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:46.070572   35356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:46.085013   35356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34589
	I0829 19:24:46.085468   35356 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:46.085953   35356 main.go:141] libmachine: Using API Version  1
	I0829 19:24:46.085976   35356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:46.086238   35356 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:46.086412   35356 main.go:141] libmachine: (ha-505269-m02) Calling .GetState
	I0829 19:24:46.087852   35356 status.go:330] ha-505269-m02 host status = "Running" (err=<nil>)
	I0829 19:24:46.087870   35356 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:24:46.088147   35356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:46.088183   35356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:46.102306   35356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45467
	I0829 19:24:46.102762   35356 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:46.103205   35356 main.go:141] libmachine: Using API Version  1
	I0829 19:24:46.103227   35356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:46.103490   35356 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:46.103650   35356 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:24:46.106199   35356 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:46.106630   35356 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:24:46.106652   35356 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:46.106809   35356 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:24:46.107214   35356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:46.107264   35356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:46.125068   35356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42075
	I0829 19:24:46.125435   35356 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:46.125976   35356 main.go:141] libmachine: Using API Version  1
	I0829 19:24:46.125996   35356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:46.126302   35356 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:46.126493   35356 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:24:46.126658   35356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:46.126678   35356 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:24:46.129676   35356 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:46.130012   35356 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:24:46.130041   35356 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:46.130171   35356 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:24:46.130320   35356 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:24:46.130420   35356 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:24:46.130564   35356 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	W0829 19:24:47.362836   35356 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:24:47.362882   35356 retry.go:31] will retry after 129.051517ms: dial tcp 192.168.39.68:22: connect: no route to host
	W0829 19:24:50.434851   35356 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0829 19:24:50.434964   35356 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0829 19:24:50.434983   35356 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:24:50.434997   35356 status.go:257] ha-505269-m02 status: &{Name:ha-505269-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 19:24:50.435020   35356 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:24:50.435033   35356 status.go:255] checking status of ha-505269-m03 ...
	I0829 19:24:50.435437   35356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:50.435488   35356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:50.451573   35356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46375
	I0829 19:24:50.452013   35356 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:50.452467   35356 main.go:141] libmachine: Using API Version  1
	I0829 19:24:50.452486   35356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:50.452778   35356 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:50.452931   35356 main.go:141] libmachine: (ha-505269-m03) Calling .GetState
	I0829 19:24:50.454239   35356 status.go:330] ha-505269-m03 host status = "Running" (err=<nil>)
	I0829 19:24:50.454255   35356 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:24:50.454642   35356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:50.454696   35356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:50.469550   35356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I0829 19:24:50.469915   35356 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:50.470310   35356 main.go:141] libmachine: Using API Version  1
	I0829 19:24:50.470328   35356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:50.470702   35356 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:50.470891   35356 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:24:50.474010   35356 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:50.474452   35356 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:24:50.474484   35356 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:50.474629   35356 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:24:50.474926   35356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:50.474961   35356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:50.488717   35356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35661
	I0829 19:24:50.489091   35356 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:50.489447   35356 main.go:141] libmachine: Using API Version  1
	I0829 19:24:50.489469   35356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:50.489802   35356 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:50.489956   35356 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:24:50.490125   35356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:50.490146   35356 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:24:50.492726   35356 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:50.493120   35356 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:24:50.493138   35356 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:50.493276   35356 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:24:50.493451   35356 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:24:50.493566   35356 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:24:50.493704   35356 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:24:50.569903   35356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:24:50.584455   35356 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:24:50.584479   35356 api_server.go:166] Checking apiserver status ...
	I0829 19:24:50.584511   35356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:24:50.599247   35356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup
	W0829 19:24:50.609377   35356 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:24:50.609425   35356 ssh_runner.go:195] Run: ls
	I0829 19:24:50.613859   35356 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:24:50.618180   35356 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:24:50.618200   35356 status.go:422] ha-505269-m03 apiserver status = Running (err=<nil>)
	I0829 19:24:50.618208   35356 status.go:257] ha-505269-m03 status: &{Name:ha-505269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:24:50.618225   35356 status.go:255] checking status of ha-505269-m04 ...
	I0829 19:24:50.618522   35356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:50.618576   35356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:50.633485   35356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42439
	I0829 19:24:50.633954   35356 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:50.634417   35356 main.go:141] libmachine: Using API Version  1
	I0829 19:24:50.634443   35356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:50.634749   35356 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:50.634956   35356 main.go:141] libmachine: (ha-505269-m04) Calling .GetState
	I0829 19:24:50.636424   35356 status.go:330] ha-505269-m04 host status = "Running" (err=<nil>)
	I0829 19:24:50.636445   35356 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:24:50.636700   35356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:50.636741   35356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:50.651887   35356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38603
	I0829 19:24:50.652301   35356 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:50.652747   35356 main.go:141] libmachine: Using API Version  1
	I0829 19:24:50.652765   35356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:50.653059   35356 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:50.653193   35356 main.go:141] libmachine: (ha-505269-m04) Calling .GetIP
	I0829 19:24:50.656094   35356 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:50.656484   35356 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:24:50.656510   35356 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:50.656649   35356 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:24:50.656935   35356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:50.656966   35356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:50.671850   35356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44189
	I0829 19:24:50.672245   35356 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:50.672780   35356 main.go:141] libmachine: Using API Version  1
	I0829 19:24:50.672798   35356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:50.673071   35356 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:50.673237   35356 main.go:141] libmachine: (ha-505269-m04) Calling .DriverName
	I0829 19:24:50.673418   35356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:50.673436   35356 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHHostname
	I0829 19:24:50.676186   35356 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:50.676611   35356 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:24:50.676632   35356 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:50.676767   35356 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHPort
	I0829 19:24:50.676903   35356 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHKeyPath
	I0829 19:24:50.677029   35356 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHUsername
	I0829 19:24:50.677140   35356 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m04/id_rsa Username:docker}
	I0829 19:24:50.757934   35356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:24:50.771987   35356 status.go:257] ha-505269-m04 status: &{Name:ha-505269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
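
Note: the run above marks ha-505269-m02 as Host:Error because the TCP dial to 192.168.39.68:22 fails with "no route to host"; the retry.go:31 line shows the dial being retried after a short delay before kubelet and apiserver are reported as Nonexistent. The sketch below re-creates that dial-and-retry pattern; the exponential backoff schedule is illustrative, since the log does not show minikube's actual retry policy.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry mirrors the pattern in the log: try the SSH port, log the
// failure, sleep, and try again. The backoff schedule here is an assumption.
func dialWithRetry(addr string, attempts int, base time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		wait := base * time.Duration(1<<i) // 130ms, 260ms, 520ms, ...
		fmt.Printf("dial failure (will retry after %v): %v\n", wait, err)
		time.Sleep(wait)
	}
	return nil, fmt.Errorf("giving up on %s: %w", addr, lastErr)
}

func main() {
	// 192.168.39.68:22 is the unreachable ha-505269-m02 address from the log.
	if _, err := dialWithRetry("192.168.39.68:22", 3, 130*time.Millisecond); err != nil {
		fmt.Println(err) // expected while the m02 guest is down
	}
}
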
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr: exit status 3 (3.733505618s)

-- stdout --
	ha-505269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-505269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0829 19:24:53.711837   35456 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:24:53.712091   35456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:24:53.712101   35456 out.go:358] Setting ErrFile to fd 2...
	I0829 19:24:53.712106   35456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:24:53.712290   35456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:24:53.712455   35456 out.go:352] Setting JSON to false
	I0829 19:24:53.712478   35456 mustload.go:65] Loading cluster: ha-505269
	I0829 19:24:53.712527   35456 notify.go:220] Checking for updates...
	I0829 19:24:53.713009   35456 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:24:53.713030   35456 status.go:255] checking status of ha-505269 ...
	I0829 19:24:53.713477   35456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:53.713528   35456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:53.732441   35456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39189
	I0829 19:24:53.732967   35456 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:53.733484   35456 main.go:141] libmachine: Using API Version  1
	I0829 19:24:53.733502   35456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:53.733919   35456 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:53.734104   35456 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:24:53.735723   35456 status.go:330] ha-505269 host status = "Running" (err=<nil>)
	I0829 19:24:53.735743   35456 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:24:53.736019   35456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:53.736050   35456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:53.750445   35456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0829 19:24:53.750782   35456 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:53.751240   35456 main.go:141] libmachine: Using API Version  1
	I0829 19:24:53.751261   35456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:53.751568   35456 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:53.751768   35456 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:24:53.754431   35456 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:53.754853   35456 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:24:53.754892   35456 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:53.754987   35456 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:24:53.755250   35456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:53.755279   35456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:53.770079   35456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39315
	I0829 19:24:53.770455   35456 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:53.770909   35456 main.go:141] libmachine: Using API Version  1
	I0829 19:24:53.770930   35456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:53.771279   35456 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:53.771454   35456 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:24:53.771692   35456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:53.771721   35456 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:24:53.774260   35456 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:53.774623   35456 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:24:53.774655   35456 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:24:53.774769   35456 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:24:53.774945   35456 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:24:53.775073   35456 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:24:53.775209   35456 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:24:53.862287   35456 ssh_runner.go:195] Run: systemctl --version
	I0829 19:24:53.869377   35456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:24:53.884533   35456 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:24:53.884564   35456 api_server.go:166] Checking apiserver status ...
	I0829 19:24:53.884600   35456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:24:53.899446   35456 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup
	W0829 19:24:53.908857   35456 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:24:53.908899   35456 ssh_runner.go:195] Run: ls
	I0829 19:24:53.913149   35456 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:24:53.921935   35456 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:24:53.921957   35456 status.go:422] ha-505269 apiserver status = Running (err=<nil>)
	I0829 19:24:53.921967   35456 status.go:257] ha-505269 status: &{Name:ha-505269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:24:53.921981   35456 status.go:255] checking status of ha-505269-m02 ...
	I0829 19:24:53.922309   35456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:53.922353   35456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:53.938148   35456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45051
	I0829 19:24:53.938497   35456 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:53.938988   35456 main.go:141] libmachine: Using API Version  1
	I0829 19:24:53.939008   35456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:53.939351   35456 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:53.939558   35456 main.go:141] libmachine: (ha-505269-m02) Calling .GetState
	I0829 19:24:53.941125   35456 status.go:330] ha-505269-m02 host status = "Running" (err=<nil>)
	I0829 19:24:53.941142   35456 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:24:53.941485   35456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:53.941523   35456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:53.956141   35456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35091
	I0829 19:24:53.956466   35456 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:53.956897   35456 main.go:141] libmachine: Using API Version  1
	I0829 19:24:53.956916   35456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:53.957229   35456 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:53.957391   35456 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:24:53.960114   35456 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:53.960463   35456 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:24:53.960488   35456 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:53.960569   35456 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:24:53.960854   35456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:53.960890   35456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:53.975139   35456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36513
	I0829 19:24:53.975581   35456 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:53.976073   35456 main.go:141] libmachine: Using API Version  1
	I0829 19:24:53.976096   35456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:53.976406   35456 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:53.976586   35456 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:24:53.976756   35456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:53.976771   35456 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:24:53.979425   35456 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:53.979823   35456 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:24:53.979851   35456 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:24:53.980015   35456 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:24:53.980176   35456 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:24:53.980318   35456 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:24:53.980441   35456 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	W0829 19:24:57.058869   35456 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0829 19:24:57.058952   35456 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0829 19:24:57.058979   35456 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:24:57.058988   35456 status.go:257] ha-505269-m02 status: &{Name:ha-505269-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 19:24:57.059014   35456 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:24:57.059024   35456 status.go:255] checking status of ha-505269-m03 ...
	I0829 19:24:57.059427   35456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:57.059475   35456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:57.073945   35456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46043
	I0829 19:24:57.074359   35456 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:57.074814   35456 main.go:141] libmachine: Using API Version  1
	I0829 19:24:57.074838   35456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:57.075153   35456 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:57.075361   35456 main.go:141] libmachine: (ha-505269-m03) Calling .GetState
	I0829 19:24:57.076909   35456 status.go:330] ha-505269-m03 host status = "Running" (err=<nil>)
	I0829 19:24:57.076927   35456 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:24:57.077201   35456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:57.077259   35456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:57.091771   35456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I0829 19:24:57.092241   35456 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:57.092723   35456 main.go:141] libmachine: Using API Version  1
	I0829 19:24:57.092746   35456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:57.093051   35456 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:57.093219   35456 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:24:57.095522   35456 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:57.095908   35456 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:24:57.095941   35456 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:57.096107   35456 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:24:57.096411   35456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:57.096458   35456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:57.111529   35456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0829 19:24:57.111971   35456 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:57.112421   35456 main.go:141] libmachine: Using API Version  1
	I0829 19:24:57.112441   35456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:57.112788   35456 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:57.112977   35456 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:24:57.113143   35456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:57.113162   35456 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:24:57.115912   35456 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:57.116325   35456 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:24:57.116351   35456 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:24:57.116467   35456 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:24:57.116633   35456 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:24:57.116807   35456 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:24:57.116932   35456 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:24:57.198112   35456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:24:57.213380   35456 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:24:57.213407   35456 api_server.go:166] Checking apiserver status ...
	I0829 19:24:57.213445   35456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:24:57.226827   35456 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup
	W0829 19:24:57.239569   35456 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:24:57.239617   35456 ssh_runner.go:195] Run: ls
	I0829 19:24:57.245152   35456 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:24:57.249497   35456 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:24:57.249522   35456 status.go:422] ha-505269-m03 apiserver status = Running (err=<nil>)
	I0829 19:24:57.249532   35456 status.go:257] ha-505269-m03 status: &{Name:ha-505269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:24:57.249551   35456 status.go:255] checking status of ha-505269-m04 ...
	I0829 19:24:57.249863   35456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:57.249900   35456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:57.264398   35456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41219
	I0829 19:24:57.264785   35456 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:57.265252   35456 main.go:141] libmachine: Using API Version  1
	I0829 19:24:57.265279   35456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:57.265569   35456 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:57.265769   35456 main.go:141] libmachine: (ha-505269-m04) Calling .GetState
	I0829 19:24:57.267350   35456 status.go:330] ha-505269-m04 host status = "Running" (err=<nil>)
	I0829 19:24:57.267364   35456 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:24:57.267641   35456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:57.267679   35456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:57.282367   35456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39025
	I0829 19:24:57.282783   35456 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:57.283222   35456 main.go:141] libmachine: Using API Version  1
	I0829 19:24:57.283244   35456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:57.283534   35456 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:57.283726   35456 main.go:141] libmachine: (ha-505269-m04) Calling .GetIP
	I0829 19:24:57.286828   35456 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:57.287318   35456 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:24:57.287357   35456 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:57.287487   35456 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:24:57.287796   35456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:24:57.287829   35456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:24:57.301943   35456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39597
	I0829 19:24:57.302268   35456 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:24:57.302709   35456 main.go:141] libmachine: Using API Version  1
	I0829 19:24:57.302730   35456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:24:57.302989   35456 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:24:57.303151   35456 main.go:141] libmachine: (ha-505269-m04) Calling .DriverName
	I0829 19:24:57.303323   35456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:24:57.303338   35456 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHHostname
	I0829 19:24:57.305977   35456 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:57.306369   35456 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:24:57.306404   35456 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:24:57.306572   35456 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHPort
	I0829 19:24:57.306741   35456 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHKeyPath
	I0829 19:24:57.306882   35456 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHUsername
	I0829 19:24:57.307007   35456 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m04/id_rsa Username:docker}
	I0829 19:24:57.389752   35456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:24:57.404319   35456 status.go:257] ha-505269-m04 status: &{Name:ha-505269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
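
Note: for nodes whose SSH probe succeeds, the status command then verifies the control plane: it locates the kube-apiserver PID with pgrep, greps for a freezer hierarchy in /proc/<pid>/cgroup — the recurring "unable to find freezer cgroup" warning is harmless and likely just means the guest uses cgroup v2, which has no named freezer hierarchy — and finally requests https://192.168.39.254:8443/healthz, treating "200: ok" as Running. Below is a minimal sketch of that healthz probe; skipping TLS verification is an assumption made for brevity, whereas a real client would trust the cluster CA from the kubeconfig.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// 192.168.39.254:8443 is the HA virtual IP reported by kubeconfig.go:125.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for brevity; a real client would trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}
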
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr: exit status 3 (3.729810432s)

-- stdout --
	ha-505269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-505269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0829 19:25:00.850363   35572 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:25:00.850663   35572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:25:00.850673   35572 out.go:358] Setting ErrFile to fd 2...
	I0829 19:25:00.850677   35572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:25:00.850851   35572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:25:00.851010   35572 out.go:352] Setting JSON to false
	I0829 19:25:00.851032   35572 mustload.go:65] Loading cluster: ha-505269
	I0829 19:25:00.851160   35572 notify.go:220] Checking for updates...
	I0829 19:25:00.851526   35572 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:25:00.851551   35572 status.go:255] checking status of ha-505269 ...
	I0829 19:25:00.852052   35572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:00.852129   35572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:00.870784   35572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34275
	I0829 19:25:00.871259   35572 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:00.871894   35572 main.go:141] libmachine: Using API Version  1
	I0829 19:25:00.871923   35572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:00.872288   35572 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:00.872499   35572 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:25:00.874063   35572 status.go:330] ha-505269 host status = "Running" (err=<nil>)
	I0829 19:25:00.874077   35572 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:25:00.874377   35572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:00.874431   35572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:00.889094   35572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42155
	I0829 19:25:00.889470   35572 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:00.889875   35572 main.go:141] libmachine: Using API Version  1
	I0829 19:25:00.889893   35572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:00.890184   35572 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:00.890394   35572 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:25:00.893101   35572 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:00.893527   35572 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:25:00.893567   35572 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:00.893736   35572 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:25:00.894068   35572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:00.894101   35572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:00.908425   35572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43343
	I0829 19:25:00.908786   35572 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:00.909167   35572 main.go:141] libmachine: Using API Version  1
	I0829 19:25:00.909187   35572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:00.909464   35572 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:00.909630   35572 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:25:00.909801   35572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:25:00.909837   35572 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:25:00.912325   35572 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:00.912709   35572 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:25:00.912740   35572 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:00.912866   35572 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:25:00.913021   35572 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:25:00.913149   35572 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:25:00.913261   35572 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:25:01.002419   35572 ssh_runner.go:195] Run: systemctl --version
	I0829 19:25:01.009623   35572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:25:01.027275   35572 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:25:01.027311   35572 api_server.go:166] Checking apiserver status ...
	I0829 19:25:01.027353   35572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:25:01.043715   35572 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup
	W0829 19:25:01.054601   35572 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:25:01.054655   35572 ssh_runner.go:195] Run: ls
	I0829 19:25:01.059289   35572 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:25:01.064999   35572 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:25:01.065025   35572 status.go:422] ha-505269 apiserver status = Running (err=<nil>)
	I0829 19:25:01.065034   35572 status.go:257] ha-505269 status: &{Name:ha-505269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:25:01.065050   35572 status.go:255] checking status of ha-505269-m02 ...
	I0829 19:25:01.065341   35572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:01.065376   35572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:01.079794   35572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34141
	I0829 19:25:01.080244   35572 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:01.080691   35572 main.go:141] libmachine: Using API Version  1
	I0829 19:25:01.080713   35572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:01.081017   35572 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:01.081186   35572 main.go:141] libmachine: (ha-505269-m02) Calling .GetState
	I0829 19:25:01.082695   35572 status.go:330] ha-505269-m02 host status = "Running" (err=<nil>)
	I0829 19:25:01.082712   35572 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:25:01.083101   35572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:01.083142   35572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:01.098332   35572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0829 19:25:01.098790   35572 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:01.099261   35572 main.go:141] libmachine: Using API Version  1
	I0829 19:25:01.099282   35572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:01.099538   35572 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:01.099716   35572 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:25:01.102209   35572 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:25:01.102585   35572 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:25:01.102646   35572 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:25:01.102763   35572 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:25:01.103033   35572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:01.103076   35572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:01.117998   35572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I0829 19:25:01.118396   35572 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:01.118879   35572 main.go:141] libmachine: Using API Version  1
	I0829 19:25:01.118902   35572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:01.119230   35572 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:01.119420   35572 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:25:01.119577   35572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:25:01.119598   35572 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:25:01.122577   35572 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:25:01.123152   35572 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:25:01.123181   35572 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:25:01.123482   35572 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:25:01.123686   35572 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:25:01.123875   35572 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:25:01.124005   35572 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	W0829 19:25:04.194761   35572 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0829 19:25:04.194878   35572 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0829 19:25:04.194899   35572 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:25:04.194910   35572 status.go:257] ha-505269-m02 status: &{Name:ha-505269-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 19:25:04.194927   35572 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:25:04.194934   35572 status.go:255] checking status of ha-505269-m03 ...
	I0829 19:25:04.195238   35572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:04.195305   35572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:04.209872   35572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42461
	I0829 19:25:04.210273   35572 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:04.210747   35572 main.go:141] libmachine: Using API Version  1
	I0829 19:25:04.210766   35572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:04.211077   35572 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:04.211274   35572 main.go:141] libmachine: (ha-505269-m03) Calling .GetState
	I0829 19:25:04.212742   35572 status.go:330] ha-505269-m03 host status = "Running" (err=<nil>)
	I0829 19:25:04.212756   35572 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:25:04.213031   35572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:04.213069   35572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:04.226968   35572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I0829 19:25:04.227403   35572 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:04.227887   35572 main.go:141] libmachine: Using API Version  1
	I0829 19:25:04.227906   35572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:04.228206   35572 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:04.228377   35572 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:25:04.231067   35572 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:04.231424   35572 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:25:04.231447   35572 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:04.231612   35572 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:25:04.231943   35572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:04.231981   35572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:04.246189   35572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34143
	I0829 19:25:04.246575   35572 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:04.247031   35572 main.go:141] libmachine: Using API Version  1
	I0829 19:25:04.247052   35572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:04.247373   35572 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:04.247546   35572 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:25:04.247727   35572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:25:04.247747   35572 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:25:04.250099   35572 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:04.250480   35572 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:25:04.250510   35572 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:04.250639   35572 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:25:04.250802   35572 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:25:04.250946   35572 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:25:04.251108   35572 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:25:04.331122   35572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:25:04.347847   35572 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:25:04.347878   35572 api_server.go:166] Checking apiserver status ...
	I0829 19:25:04.347924   35572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:25:04.361326   35572 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup
	W0829 19:25:04.371475   35572 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:25:04.371529   35572 ssh_runner.go:195] Run: ls
	I0829 19:25:04.378476   35572 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:25:04.384541   35572 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:25:04.384569   35572 status.go:422] ha-505269-m03 apiserver status = Running (err=<nil>)
	I0829 19:25:04.384582   35572 status.go:257] ha-505269-m03 status: &{Name:ha-505269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:25:04.384603   35572 status.go:255] checking status of ha-505269-m04 ...
	I0829 19:25:04.384963   35572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:04.385013   35572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:04.399924   35572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0829 19:25:04.400318   35572 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:04.400764   35572 main.go:141] libmachine: Using API Version  1
	I0829 19:25:04.400782   35572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:04.401053   35572 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:04.401216   35572 main.go:141] libmachine: (ha-505269-m04) Calling .GetState
	I0829 19:25:04.402673   35572 status.go:330] ha-505269-m04 host status = "Running" (err=<nil>)
	I0829 19:25:04.402687   35572 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:25:04.403016   35572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:04.403053   35572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:04.417790   35572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39275
	I0829 19:25:04.418210   35572 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:04.418689   35572 main.go:141] libmachine: Using API Version  1
	I0829 19:25:04.418705   35572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:04.418991   35572 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:04.419142   35572 main.go:141] libmachine: (ha-505269-m04) Calling .GetIP
	I0829 19:25:04.421288   35572 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:04.421664   35572 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:25:04.421688   35572 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:04.421826   35572 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:25:04.422113   35572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:04.422149   35572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:04.436875   35572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45677
	I0829 19:25:04.437263   35572 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:04.437721   35572 main.go:141] libmachine: Using API Version  1
	I0829 19:25:04.437749   35572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:04.438052   35572 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:04.438240   35572 main.go:141] libmachine: (ha-505269-m04) Calling .DriverName
	I0829 19:25:04.438424   35572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:25:04.438441   35572 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHHostname
	I0829 19:25:04.441096   35572 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:04.441495   35572 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:25:04.441522   35572 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:04.441634   35572 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHPort
	I0829 19:25:04.441769   35572 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHKeyPath
	I0829 19:25:04.441876   35572 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHUsername
	I0829 19:25:04.442021   35572 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m04/id_rsa Username:docker}
	I0829 19:25:04.523748   35572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:25:04.538972   35572 status.go:257] ha-505269-m04 status: &{Name:ha-505269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
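[editor's note] The run above probes each node in turn: launch the kvm2 plugin server, GetState, then SSH in to run `df -h /var`, `pgrep kube-apiserver`, and the /healthz check. For ha-505269-m02 the very first SSH dial fails ("dial tcp 192.168.39.68:22: connect: no route to host", sshutil.go:64), and the node is reported Host:Error with Kubelet and APIServer Nonexistent (status.go:376/260). A minimal sketch of that classification, assuming only the TCP-reachability step that failed here; this is illustrative, not minikube's actual status code:

package main

import (
	"fmt"
	"net"
	"time"
)

// NodeStatus mirrors the fields printed in the status struct above.
type NodeStatus struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

// checkNode is a hypothetical helper: it only tests TCP reachability of
// the node's SSH port, the first probe that failed for ha-505269-m02.
func checkNode(name, addr string) NodeStatus {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// "connect: no route to host" lands here: the host is in error
		// and no kubelet/apiserver checks can run.
		return NodeStatus{Name: name, Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}
	}
	conn.Close()
	// A reachable host would proceed to the kubelet and healthz probes
	// shown in the log above.
	return NodeStatus{Name: name, Host: "Running", Kubelet: "Running", APIServer: "Running"}
}

func main() {
	fmt.Printf("%+v\n", checkNode("ha-505269-m02", "192.168.39.68:22"))
}
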
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr: exit status 3 (3.710601192s)

                                                
                                                
-- stdout --
	ha-505269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-505269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 19:25:08.099634   35674 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:25:08.099754   35674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:25:08.099764   35674 out.go:358] Setting ErrFile to fd 2...
	I0829 19:25:08.099771   35674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:25:08.099944   35674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:25:08.100089   35674 out.go:352] Setting JSON to false
	I0829 19:25:08.100111   35674 mustload.go:65] Loading cluster: ha-505269
	I0829 19:25:08.100161   35674 notify.go:220] Checking for updates...
	I0829 19:25:08.100615   35674 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:25:08.100641   35674 status.go:255] checking status of ha-505269 ...
	I0829 19:25:08.101059   35674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:08.101147   35674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:08.119859   35674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35111
	I0829 19:25:08.120287   35674 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:08.120906   35674 main.go:141] libmachine: Using API Version  1
	I0829 19:25:08.120928   35674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:08.121325   35674 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:08.121540   35674 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:25:08.123267   35674 status.go:330] ha-505269 host status = "Running" (err=<nil>)
	I0829 19:25:08.123285   35674 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:25:08.123706   35674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:08.123751   35674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:08.139231   35674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I0829 19:25:08.139700   35674 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:08.140241   35674 main.go:141] libmachine: Using API Version  1
	I0829 19:25:08.140260   35674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:08.140575   35674 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:08.140778   35674 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:25:08.143524   35674 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:08.143927   35674 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:25:08.143960   35674 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:08.144054   35674 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:25:08.144339   35674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:08.144376   35674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:08.158508   35674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
	I0829 19:25:08.158882   35674 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:08.159357   35674 main.go:141] libmachine: Using API Version  1
	I0829 19:25:08.159374   35674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:08.159663   35674 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:08.159811   35674 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:25:08.159982   35674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:25:08.160008   35674 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:25:08.162685   35674 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:08.163093   35674 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:25:08.163120   35674 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:08.163283   35674 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:25:08.163464   35674 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:25:08.163602   35674 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:25:08.163745   35674 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:25:08.246811   35674 ssh_runner.go:195] Run: systemctl --version
	I0829 19:25:08.252947   35674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:25:08.267077   35674 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:25:08.267111   35674 api_server.go:166] Checking apiserver status ...
	I0829 19:25:08.267163   35674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:25:08.283906   35674 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup
	W0829 19:25:08.293216   35674 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:25:08.293280   35674 ssh_runner.go:195] Run: ls
	I0829 19:25:08.297827   35674 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:25:08.302182   35674 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:25:08.302204   35674 status.go:422] ha-505269 apiserver status = Running (err=<nil>)
	I0829 19:25:08.302213   35674 status.go:257] ha-505269 status: &{Name:ha-505269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:25:08.302227   35674 status.go:255] checking status of ha-505269-m02 ...
	I0829 19:25:08.302522   35674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:08.302593   35674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:08.318027   35674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0829 19:25:08.318462   35674 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:08.318937   35674 main.go:141] libmachine: Using API Version  1
	I0829 19:25:08.318958   35674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:08.319223   35674 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:08.319396   35674 main.go:141] libmachine: (ha-505269-m02) Calling .GetState
	I0829 19:25:08.320672   35674 status.go:330] ha-505269-m02 host status = "Running" (err=<nil>)
	I0829 19:25:08.320686   35674 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:25:08.320980   35674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:08.321010   35674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:08.335127   35674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42825
	I0829 19:25:08.335539   35674 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:08.336001   35674 main.go:141] libmachine: Using API Version  1
	I0829 19:25:08.336030   35674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:08.336346   35674 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:08.336541   35674 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:25:08.339563   35674 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:25:08.340024   35674 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:25:08.340049   35674 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:25:08.340202   35674 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:25:08.340507   35674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:08.340541   35674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:08.356039   35674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I0829 19:25:08.356415   35674 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:08.356981   35674 main.go:141] libmachine: Using API Version  1
	I0829 19:25:08.357002   35674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:08.357302   35674 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:08.357501   35674 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:25:08.357686   35674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:25:08.357707   35674 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:25:08.360528   35674 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:25:08.360910   35674 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:25:08.360935   35674 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:25:08.361096   35674 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:25:08.361262   35674 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:25:08.361428   35674 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:25:08.361565   35674 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	W0829 19:25:11.426754   35674 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0829 19:25:11.426862   35674 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0829 19:25:11.426885   35674 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:25:11.426895   35674 status.go:257] ha-505269-m02 status: &{Name:ha-505269-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 19:25:11.426916   35674 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0829 19:25:11.426926   35674 status.go:255] checking status of ha-505269-m03 ...
	I0829 19:25:11.427335   35674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:11.427397   35674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:11.442908   35674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37735
	I0829 19:25:11.443283   35674 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:11.443780   35674 main.go:141] libmachine: Using API Version  1
	I0829 19:25:11.443802   35674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:11.444115   35674 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:11.444335   35674 main.go:141] libmachine: (ha-505269-m03) Calling .GetState
	I0829 19:25:11.445862   35674 status.go:330] ha-505269-m03 host status = "Running" (err=<nil>)
	I0829 19:25:11.445878   35674 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:25:11.446233   35674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:11.446265   35674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:11.460440   35674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42883
	I0829 19:25:11.460786   35674 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:11.461270   35674 main.go:141] libmachine: Using API Version  1
	I0829 19:25:11.461293   35674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:11.461643   35674 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:11.461849   35674 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:25:11.465171   35674 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:11.465603   35674 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:25:11.465624   35674 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:11.465748   35674 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:25:11.466039   35674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:11.466075   35674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:11.480837   35674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46507
	I0829 19:25:11.481156   35674 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:11.481600   35674 main.go:141] libmachine: Using API Version  1
	I0829 19:25:11.481631   35674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:11.481877   35674 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:11.482058   35674 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:25:11.482235   35674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:25:11.482261   35674 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:25:11.484960   35674 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:11.485394   35674 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:25:11.485414   35674 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:11.485539   35674 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:25:11.485694   35674 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:25:11.485836   35674 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:25:11.485970   35674 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:25:11.562479   35674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:25:11.579834   35674 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:25:11.579860   35674 api_server.go:166] Checking apiserver status ...
	I0829 19:25:11.579889   35674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:25:11.593607   35674 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup
	W0829 19:25:11.603473   35674 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:25:11.603520   35674 ssh_runner.go:195] Run: ls
	I0829 19:25:11.607765   35674 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:25:11.611883   35674 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:25:11.611905   35674 status.go:422] ha-505269-m03 apiserver status = Running (err=<nil>)
	I0829 19:25:11.611913   35674 status.go:257] ha-505269-m03 status: &{Name:ha-505269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:25:11.611926   35674 status.go:255] checking status of ha-505269-m04 ...
	I0829 19:25:11.612266   35674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:11.612298   35674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:11.627348   35674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I0829 19:25:11.627845   35674 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:11.628319   35674 main.go:141] libmachine: Using API Version  1
	I0829 19:25:11.628339   35674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:11.628656   35674 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:11.628851   35674 main.go:141] libmachine: (ha-505269-m04) Calling .GetState
	I0829 19:25:11.630924   35674 status.go:330] ha-505269-m04 host status = "Running" (err=<nil>)
	I0829 19:25:11.630939   35674 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:25:11.631330   35674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:11.631374   35674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:11.645562   35674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42541
	I0829 19:25:11.646019   35674 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:11.646554   35674 main.go:141] libmachine: Using API Version  1
	I0829 19:25:11.646577   35674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:11.646865   35674 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:11.647038   35674 main.go:141] libmachine: (ha-505269-m04) Calling .GetIP
	I0829 19:25:11.649892   35674 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:11.650393   35674 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:25:11.650424   35674 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:11.650591   35674 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:25:11.650916   35674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:11.650947   35674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:11.665497   35674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33083
	I0829 19:25:11.665925   35674 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:11.666445   35674 main.go:141] libmachine: Using API Version  1
	I0829 19:25:11.666465   35674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:11.666844   35674 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:11.667025   35674 main.go:141] libmachine: (ha-505269-m04) Calling .DriverName
	I0829 19:25:11.667190   35674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:25:11.667207   35674 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHHostname
	I0829 19:25:11.669820   35674 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:11.670259   35674 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:25:11.670286   35674 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:11.670468   35674 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHPort
	I0829 19:25:11.670642   35674 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHKeyPath
	I0829 19:25:11.670802   35674 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHUsername
	I0829 19:25:11.670948   35674 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m04/id_rsa Username:docker}
	I0829 19:25:11.753950   35674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:25:11.768637   35674 status.go:257] ha-505269-m04 status: &{Name:ha-505269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
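[editor's note] The retry at 19:25:08 repeats the identical sequence and hits the same "no route to host" on m02 after roughly three seconds, which accounts for most of the 3.7 s runtime; exit status 3 again signals a node in error. The reachable control planes pass the healthz probe logged at api_server.go:253/279. A minimal sketch of that probe, assuming a plain HTTPS GET with certificate verification skipped (the endpoint and the "returned 200: ok" handling come from the log; the client setup is an assumption, not minikube's actual code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the control plane serves a self-signed cert
			// in this setup, so the illustrative probe skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	// The log shows "returned 200:" followed by "ok" for healthy nodes.
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.254:8443")
	fmt.Println(ok, err)
}
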
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr: exit status 7 (636.210798ms)

                                                
                                                
-- stdout --
	ha-505269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-505269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 19:25:21.716119   35825 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:25:21.716350   35825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:25:21.716357   35825 out.go:358] Setting ErrFile to fd 2...
	I0829 19:25:21.716361   35825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:25:21.716527   35825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:25:21.716686   35825 out.go:352] Setting JSON to false
	I0829 19:25:21.716711   35825 mustload.go:65] Loading cluster: ha-505269
	I0829 19:25:21.716759   35825 notify.go:220] Checking for updates...
	I0829 19:25:21.717167   35825 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:25:21.717183   35825 status.go:255] checking status of ha-505269 ...
	I0829 19:25:21.717611   35825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:21.717681   35825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:21.737358   35825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0829 19:25:21.737761   35825 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:21.738352   35825 main.go:141] libmachine: Using API Version  1
	I0829 19:25:21.738383   35825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:21.738748   35825 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:21.738960   35825 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:25:21.740669   35825 status.go:330] ha-505269 host status = "Running" (err=<nil>)
	I0829 19:25:21.740685   35825 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:25:21.740994   35825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:21.741024   35825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:21.755072   35825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40025
	I0829 19:25:21.755470   35825 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:21.755970   35825 main.go:141] libmachine: Using API Version  1
	I0829 19:25:21.755990   35825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:21.756260   35825 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:21.756450   35825 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:25:21.759273   35825 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:21.759735   35825 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:25:21.759767   35825 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:21.759865   35825 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:25:21.760222   35825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:21.760283   35825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:21.774212   35825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I0829 19:25:21.774582   35825 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:21.774998   35825 main.go:141] libmachine: Using API Version  1
	I0829 19:25:21.775017   35825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:21.775314   35825 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:21.775509   35825 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:25:21.775688   35825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:25:21.775716   35825 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:25:21.778204   35825 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:21.778615   35825 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:25:21.778644   35825 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:21.778790   35825 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:25:21.778967   35825 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:25:21.779132   35825 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:25:21.779277   35825 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:25:21.876437   35825 ssh_runner.go:195] Run: systemctl --version
	I0829 19:25:21.882885   35825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:25:21.902138   35825 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:25:21.902174   35825 api_server.go:166] Checking apiserver status ...
	I0829 19:25:21.902208   35825 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:25:21.916146   35825 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup
	W0829 19:25:21.926023   35825 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:25:21.926081   35825 ssh_runner.go:195] Run: ls
	I0829 19:25:21.931127   35825 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:25:21.935623   35825 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:25:21.935642   35825 status.go:422] ha-505269 apiserver status = Running (err=<nil>)
	I0829 19:25:21.935652   35825 status.go:257] ha-505269 status: &{Name:ha-505269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:25:21.935666   35825 status.go:255] checking status of ha-505269-m02 ...
	I0829 19:25:21.935954   35825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:21.935983   35825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:21.950693   35825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I0829 19:25:21.951146   35825 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:21.951582   35825 main.go:141] libmachine: Using API Version  1
	I0829 19:25:21.951600   35825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:21.951947   35825 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:21.952124   35825 main.go:141] libmachine: (ha-505269-m02) Calling .GetState
	I0829 19:25:21.953502   35825 status.go:330] ha-505269-m02 host status = "Stopped" (err=<nil>)
	I0829 19:25:21.953513   35825 status.go:343] host is not running, skipping remaining checks
	I0829 19:25:21.953518   35825 status.go:257] ha-505269-m02 status: &{Name:ha-505269-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:25:21.953539   35825 status.go:255] checking status of ha-505269-m03 ...
	I0829 19:25:21.953804   35825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:21.953833   35825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:21.968852   35825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0829 19:25:21.969250   35825 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:21.969720   35825 main.go:141] libmachine: Using API Version  1
	I0829 19:25:21.969747   35825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:21.970107   35825 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:21.970334   35825 main.go:141] libmachine: (ha-505269-m03) Calling .GetState
	I0829 19:25:21.971793   35825 status.go:330] ha-505269-m03 host status = "Running" (err=<nil>)
	I0829 19:25:21.971807   35825 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:25:21.972128   35825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:21.972168   35825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:21.987506   35825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
	I0829 19:25:21.987922   35825 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:21.988393   35825 main.go:141] libmachine: Using API Version  1
	I0829 19:25:21.988411   35825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:21.988690   35825 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:21.988880   35825 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:25:21.991798   35825 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:21.992230   35825 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:25:21.992265   35825 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:21.992381   35825 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:25:21.992800   35825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:21.992844   35825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:22.007676   35825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36187
	I0829 19:25:22.008161   35825 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:22.008668   35825 main.go:141] libmachine: Using API Version  1
	I0829 19:25:22.008688   35825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:22.008966   35825 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:22.009145   35825 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:25:22.009317   35825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:25:22.009335   35825 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:25:22.011804   35825 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:22.012343   35825 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:25:22.012362   35825 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:22.012520   35825 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:25:22.012684   35825 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:25:22.012833   35825 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:25:22.012974   35825 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:25:22.098022   35825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:25:22.113795   35825 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:25:22.113818   35825 api_server.go:166] Checking apiserver status ...
	I0829 19:25:22.113846   35825 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:25:22.133953   35825 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup
	W0829 19:25:22.146685   35825 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:25:22.146752   35825 ssh_runner.go:195] Run: ls
	I0829 19:25:22.151601   35825 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:25:22.155719   35825 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:25:22.155738   35825 status.go:422] ha-505269-m03 apiserver status = Running (err=<nil>)
	I0829 19:25:22.155745   35825 status.go:257] ha-505269-m03 status: &{Name:ha-505269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:25:22.155758   35825 status.go:255] checking status of ha-505269-m04 ...
	I0829 19:25:22.156034   35825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:22.156063   35825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:22.172808   35825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39281
	I0829 19:25:22.173222   35825 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:22.173669   35825 main.go:141] libmachine: Using API Version  1
	I0829 19:25:22.173687   35825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:22.173997   35825 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:22.174198   35825 main.go:141] libmachine: (ha-505269-m04) Calling .GetState
	I0829 19:25:22.175695   35825 status.go:330] ha-505269-m04 host status = "Running" (err=<nil>)
	I0829 19:25:22.175712   35825 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:25:22.176127   35825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:22.176165   35825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:22.191281   35825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36525
	I0829 19:25:22.191716   35825 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:22.192244   35825 main.go:141] libmachine: Using API Version  1
	I0829 19:25:22.192268   35825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:22.192628   35825 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:22.192850   35825 main.go:141] libmachine: (ha-505269-m04) Calling .GetIP
	I0829 19:25:22.196104   35825 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:22.196532   35825 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:25:22.196561   35825 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:22.196729   35825 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:25:22.197061   35825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:22.197096   35825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:22.212193   35825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38133
	I0829 19:25:22.212682   35825 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:22.213141   35825 main.go:141] libmachine: Using API Version  1
	I0829 19:25:22.213164   35825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:22.213461   35825 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:22.213685   35825 main.go:141] libmachine: (ha-505269-m04) Calling .DriverName
	I0829 19:25:22.213872   35825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:25:22.213898   35825 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHHostname
	I0829 19:25:22.216567   35825 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:22.216969   35825 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:25:22.216993   35825 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:22.217126   35825 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHPort
	I0829 19:25:22.217274   35825 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHKeyPath
	I0829 19:25:22.217403   35825 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHUsername
	I0829 19:25:22.217528   35825 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m04/id_rsa Username:docker}
	I0829 19:25:22.298145   35825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:25:22.311977   35825 status.go:257] ha-505269-m04 status: &{Name:ha-505269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
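The stderr trace above shows the shape of minikube's per-node status probe: open an SSH session with the machine's key, check the kubelet unit via systemctl, locate the kube-apiserver PID with pgrep, attempt the freezer-cgroup lookup (which exits 1 here, as it would on any cgroup-v2 guest), and fall back to polling /healthz on the HA VIP. The following is a minimal standalone sketch of that sequence, not minikube's actual code; the IP, port, user, and key path are copied from the log, while anonymous /healthz access and skipped TLS verification are simplifying assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"strings"
	"time"

	"golang.org/x/crypto/ssh"
)

// runSSH runs one command over a fresh session, like minikube's ssh_runner.Run.
func runSSH(client *ssh.Client, cmd string) (string, error) {
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// IP, port, user, and key path are taken from the sshutil line in the log.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.178:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		Timeout:         10 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// kubelet check: is-active --quiet exits non-zero when the unit is down.
	if _, err := runSSH(client, "sudo systemctl is-active --quiet service kubelet"); err != nil {
		fmt.Println("kubelet: Stopped")
	} else {
		fmt.Println("kubelet: Running")
	}

	// freezer-cgroup lookup; in the run above it exits 1 (see the W... line),
	// so the probe simply moves on to the HTTP health check.
	pid, _ := runSSH(client, "sudo pgrep -xnf kube-apiserver.*minikube.*")
	if _, err := runSSH(client, fmt.Sprintf("sudo egrep ^[0-9]+:freezer: /proc/%s/cgroup", strings.TrimSpace(pid))); err != nil {
		fmt.Println("no freezer cgroup; falling back to healthz")
	}

	// healthz poll against the HA VIP; cert verification is skipped in this sketch.
	hc := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := hc.Get("https://192.168.39.254:8443/healthz")
	if err == nil {
		defer resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver: Running")
		}
	}
}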
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr: exit status 7 (600.893012ms)

-- stdout --
	ha-505269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-505269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0829 19:25:29.051537   35913 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:25:29.051789   35913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:25:29.051798   35913 out.go:358] Setting ErrFile to fd 2...
	I0829 19:25:29.051802   35913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:25:29.051955   35913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:25:29.052116   35913 out.go:352] Setting JSON to false
	I0829 19:25:29.052174   35913 mustload.go:65] Loading cluster: ha-505269
	I0829 19:25:29.052254   35913 notify.go:220] Checking for updates...
	I0829 19:25:29.052639   35913 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:25:29.052659   35913 status.go:255] checking status of ha-505269 ...
	I0829 19:25:29.053093   35913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:29.053142   35913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:29.072420   35913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0829 19:25:29.072826   35913 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:29.073372   35913 main.go:141] libmachine: Using API Version  1
	I0829 19:25:29.073392   35913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:29.073757   35913 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:29.073943   35913 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:25:29.075532   35913 status.go:330] ha-505269 host status = "Running" (err=<nil>)
	I0829 19:25:29.075549   35913 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:25:29.075865   35913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:29.075899   35913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:29.091209   35913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0829 19:25:29.091569   35913 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:29.092055   35913 main.go:141] libmachine: Using API Version  1
	I0829 19:25:29.092079   35913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:29.092375   35913 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:29.092564   35913 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:25:29.095090   35913 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:29.095466   35913 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:25:29.095488   35913 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:29.095630   35913 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:25:29.095908   35913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:29.095947   35913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:29.109996   35913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33527
	I0829 19:25:29.110346   35913 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:29.110748   35913 main.go:141] libmachine: Using API Version  1
	I0829 19:25:29.110771   35913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:29.111165   35913 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:29.111363   35913 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:25:29.111566   35913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:25:29.111586   35913 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:25:29.114262   35913 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:29.114651   35913 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:25:29.114685   35913 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:25:29.114810   35913 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:25:29.114974   35913 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:25:29.115122   35913 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:25:29.115249   35913 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:25:29.198297   35913 ssh_runner.go:195] Run: systemctl --version
	I0829 19:25:29.204436   35913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:25:29.218923   35913 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:25:29.218963   35913 api_server.go:166] Checking apiserver status ...
	I0829 19:25:29.219001   35913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:25:29.232192   35913 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup
	W0829 19:25:29.241439   35913 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:25:29.241481   35913 ssh_runner.go:195] Run: ls
	I0829 19:25:29.245879   35913 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:25:29.251762   35913 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:25:29.251782   35913 status.go:422] ha-505269 apiserver status = Running (err=<nil>)
	I0829 19:25:29.251790   35913 status.go:257] ha-505269 status: &{Name:ha-505269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:25:29.251804   35913 status.go:255] checking status of ha-505269-m02 ...
	I0829 19:25:29.252130   35913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:29.252163   35913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:29.266748   35913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44533
	I0829 19:25:29.267067   35913 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:29.267464   35913 main.go:141] libmachine: Using API Version  1
	I0829 19:25:29.267481   35913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:29.267747   35913 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:29.267934   35913 main.go:141] libmachine: (ha-505269-m02) Calling .GetState
	I0829 19:25:29.269387   35913 status.go:330] ha-505269-m02 host status = "Stopped" (err=<nil>)
	I0829 19:25:29.269402   35913 status.go:343] host is not running, skipping remaining checks
	I0829 19:25:29.269409   35913 status.go:257] ha-505269-m02 status: &{Name:ha-505269-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:25:29.269429   35913 status.go:255] checking status of ha-505269-m03 ...
	I0829 19:25:29.269706   35913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:29.269737   35913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:29.285497   35913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45197
	I0829 19:25:29.285833   35913 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:29.286240   35913 main.go:141] libmachine: Using API Version  1
	I0829 19:25:29.286258   35913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:29.286619   35913 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:29.286797   35913 main.go:141] libmachine: (ha-505269-m03) Calling .GetState
	I0829 19:25:29.288151   35913 status.go:330] ha-505269-m03 host status = "Running" (err=<nil>)
	I0829 19:25:29.288167   35913 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:25:29.288437   35913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:29.288470   35913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:29.302510   35913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34487
	I0829 19:25:29.302902   35913 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:29.303356   35913 main.go:141] libmachine: Using API Version  1
	I0829 19:25:29.303373   35913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:29.303671   35913 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:29.303832   35913 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:25:29.306558   35913 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:29.306931   35913 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:25:29.306956   35913 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:29.307087   35913 host.go:66] Checking if "ha-505269-m03" exists ...
	I0829 19:25:29.307393   35913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:29.307423   35913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:29.321205   35913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40529
	I0829 19:25:29.321595   35913 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:29.322033   35913 main.go:141] libmachine: Using API Version  1
	I0829 19:25:29.322051   35913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:29.322377   35913 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:29.322548   35913 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:25:29.322723   35913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:25:29.322742   35913 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:25:29.325192   35913 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:29.325680   35913 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:25:29.325710   35913 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:29.325827   35913 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:25:29.326006   35913 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:25:29.326153   35913 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:25:29.326275   35913 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:25:29.402076   35913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:25:29.417401   35913 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:25:29.417423   35913 api_server.go:166] Checking apiserver status ...
	I0829 19:25:29.417455   35913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:25:29.433753   35913 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup
	W0829 19:25:29.443795   35913 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:25:29.443857   35913 ssh_runner.go:195] Run: ls
	I0829 19:25:29.448315   35913 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:25:29.452610   35913 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:25:29.452628   35913 status.go:422] ha-505269-m03 apiserver status = Running (err=<nil>)
	I0829 19:25:29.452635   35913 status.go:257] ha-505269-m03 status: &{Name:ha-505269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:25:29.452648   35913 status.go:255] checking status of ha-505269-m04 ...
	I0829 19:25:29.452941   35913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:29.452968   35913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:29.467585   35913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42659
	I0829 19:25:29.467995   35913 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:29.468371   35913 main.go:141] libmachine: Using API Version  1
	I0829 19:25:29.468389   35913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:29.468698   35913 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:29.468870   35913 main.go:141] libmachine: (ha-505269-m04) Calling .GetState
	I0829 19:25:29.470230   35913 status.go:330] ha-505269-m04 host status = "Running" (err=<nil>)
	I0829 19:25:29.470245   35913 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:25:29.470594   35913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:29.470650   35913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:29.485419   35913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45577
	I0829 19:25:29.485799   35913 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:29.486212   35913 main.go:141] libmachine: Using API Version  1
	I0829 19:25:29.486232   35913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:29.486493   35913 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:29.486641   35913 main.go:141] libmachine: (ha-505269-m04) Calling .GetIP
	I0829 19:25:29.489102   35913 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:29.489492   35913 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:25:29.489536   35913 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:29.489647   35913 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:25:29.489976   35913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:29.490015   35913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:29.507119   35913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I0829 19:25:29.507505   35913 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:29.507923   35913 main.go:141] libmachine: Using API Version  1
	I0829 19:25:29.507944   35913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:29.508193   35913 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:29.508318   35913 main.go:141] libmachine: (ha-505269-m04) Calling .DriverName
	I0829 19:25:29.508478   35913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:25:29.508497   35913 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHHostname
	I0829 19:25:29.510895   35913 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:29.511315   35913 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:25:29.511340   35913 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:29.511425   35913 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHPort
	I0829 19:25:29.511591   35913 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHKeyPath
	I0829 19:25:29.511709   35913 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHUsername
	I0829 19:25:29.511812   35913 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m04/id_rsa Username:docker}
	I0829 19:25:29.594202   35913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:25:29.610885   35913 status.go:257] ha-505269-m04 status: &{Name:ha-505269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr" : exit status 7
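The stdout above already explains the exit code: ha-505269-m02 reports Stopped on every component while the other nodes are healthy, and minikube status signals a degraded cluster through a non-zero exit (7 in this run). A small reproduction sketch of what the harness effectively does, assuming only that the binary path and profile name from the log are valid; nothing beyond "non-zero means unhealthy" is asserted about the specific code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the failing test step above.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-505269", "status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero exit (7 in this run) signals a degraded cluster; the stdout
		// above attributes it to ha-505269-m02 being Stopped.
		fmt.Println("status exit code:", ee.ExitCode())
	}
}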
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-505269 -n ha-505269
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-505269 logs -n 25: (1.348729357s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m03:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269:/home/docker/cp-test_ha-505269-m03_ha-505269.txt                       |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269 sudo cat                                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m03_ha-505269.txt                                 |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m03:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m02:/home/docker/cp-test_ha-505269-m03_ha-505269-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269-m02 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m03_ha-505269-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m03:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04:/home/docker/cp-test_ha-505269-m03_ha-505269-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269-m04 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m03_ha-505269-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-505269 cp testdata/cp-test.txt                                                | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3454359662/001/cp-test_ha-505269-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269:/home/docker/cp-test_ha-505269-m04_ha-505269.txt                       |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269 sudo cat                                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m04_ha-505269.txt                                 |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m02:/home/docker/cp-test_ha-505269-m04_ha-505269-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269-m02 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m04_ha-505269-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m03:/home/docker/cp-test_ha-505269-m04_ha-505269-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269-m03 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m04_ha-505269-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-505269 node stop m02 -v=7                                                     | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-505269 node start m02 -v=7                                                    | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:17:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
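The four header lines above document the klog layout used by every entry that follows. As a hedged illustration only, here is a tiny Go parser for that layout; the regexp is mine, derived from the documented format string rather than taken from klog itself.

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the documented format:
//   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	// A sample entry copied verbatim from the log below.
	sample := "I0829 19:17:27.958759   29935 out.go:345] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}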
	I0829 19:17:27.958759   29935 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:17:27.958993   29935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:17:27.959001   29935 out.go:358] Setting ErrFile to fd 2...
	I0829 19:17:27.959005   29935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:17:27.959153   29935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:17:27.959679   29935 out.go:352] Setting JSON to false
	I0829 19:17:27.960463   29935 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3595,"bootTime":1724955453,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:17:27.960512   29935 start.go:139] virtualization: kvm guest
	I0829 19:17:27.962717   29935 out.go:177] * [ha-505269] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:17:27.964282   29935 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 19:17:27.964303   29935 notify.go:220] Checking for updates...
	I0829 19:17:27.966723   29935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:17:27.967724   29935 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:17:27.968807   29935 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:17:27.969981   29935 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:17:27.971349   29935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:17:27.972628   29935 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:17:28.006071   29935 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 19:17:28.007378   29935 start.go:297] selected driver: kvm2
	I0829 19:17:28.007392   29935 start.go:901] validating driver "kvm2" against <nil>
	I0829 19:17:28.007402   29935 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:17:28.008073   29935 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:17:28.008132   29935 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:17:28.022521   29935 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:17:28.022584   29935 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 19:17:28.022797   29935 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:17:28.022859   29935 cni.go:84] Creating CNI manager for ""
	I0829 19:17:28.022870   29935 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0829 19:17:28.022878   29935 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0829 19:17:28.022930   29935 start.go:340] cluster config:
	{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:17:28.023016   29935 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:17:28.024781   29935 out.go:177] * Starting "ha-505269" primary control-plane node in "ha-505269" cluster
	I0829 19:17:28.025911   29935 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:17:28.025940   29935 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:17:28.025948   29935 cache.go:56] Caching tarball of preloaded images
	I0829 19:17:28.026018   29935 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:17:28.026028   29935 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:17:28.026375   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:17:28.026398   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json: {Name:mk34432641dc2ac43cd81b2532b21cf90f88ce03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:17:28.026522   29935 start.go:360] acquireMachinesLock for ha-505269: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:17:28.026620   29935 start.go:364] duration metric: took 86.119µs to acquireMachinesLock for "ha-505269"
	I0829 19:17:28.026642   29935 start.go:93] Provisioning new machine with config: &{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:17:28.026696   29935 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 19:17:28.028907   29935 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 19:17:28.029025   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:17:28.029059   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:17:28.043204   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33277
	I0829 19:17:28.043561   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:17:28.044089   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:17:28.044108   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:17:28.044416   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:17:28.044586   29935 main.go:141] libmachine: (ha-505269) Calling .GetMachineName
	I0829 19:17:28.044738   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:28.044888   29935 start.go:159] libmachine.API.Create for "ha-505269" (driver="kvm2")
	I0829 19:17:28.044917   29935 client.go:168] LocalClient.Create starting
	I0829 19:17:28.044948   29935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem
	I0829 19:17:28.044976   29935 main.go:141] libmachine: Decoding PEM data...
	I0829 19:17:28.044990   29935 main.go:141] libmachine: Parsing certificate...
	I0829 19:17:28.045045   29935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem
	I0829 19:17:28.045069   29935 main.go:141] libmachine: Decoding PEM data...
	I0829 19:17:28.045080   29935 main.go:141] libmachine: Parsing certificate...
	I0829 19:17:28.045098   29935 main.go:141] libmachine: Running pre-create checks...
	I0829 19:17:28.045106   29935 main.go:141] libmachine: (ha-505269) Calling .PreCreateCheck
	I0829 19:17:28.045448   29935 main.go:141] libmachine: (ha-505269) Calling .GetConfigRaw
	I0829 19:17:28.045804   29935 main.go:141] libmachine: Creating machine...
	I0829 19:17:28.045815   29935 main.go:141] libmachine: (ha-505269) Calling .Create
	I0829 19:17:28.045939   29935 main.go:141] libmachine: (ha-505269) Creating KVM machine...
	I0829 19:17:28.047213   29935 main.go:141] libmachine: (ha-505269) DBG | found existing default KVM network
	I0829 19:17:28.047812   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:28.047704   29958 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0829 19:17:28.047885   29935 main.go:141] libmachine: (ha-505269) DBG | created network xml: 
	I0829 19:17:28.047908   29935 main.go:141] libmachine: (ha-505269) DBG | <network>
	I0829 19:17:28.047921   29935 main.go:141] libmachine: (ha-505269) DBG |   <name>mk-ha-505269</name>
	I0829 19:17:28.047940   29935 main.go:141] libmachine: (ha-505269) DBG |   <dns enable='no'/>
	I0829 19:17:28.047949   29935 main.go:141] libmachine: (ha-505269) DBG |   
	I0829 19:17:28.047971   29935 main.go:141] libmachine: (ha-505269) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0829 19:17:28.047982   29935 main.go:141] libmachine: (ha-505269) DBG |     <dhcp>
	I0829 19:17:28.047993   29935 main.go:141] libmachine: (ha-505269) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0829 19:17:28.048004   29935 main.go:141] libmachine: (ha-505269) DBG |     </dhcp>
	I0829 19:17:28.048016   29935 main.go:141] libmachine: (ha-505269) DBG |   </ip>
	I0829 19:17:28.048030   29935 main.go:141] libmachine: (ha-505269) DBG |   
	I0829 19:17:28.048038   29935 main.go:141] libmachine: (ha-505269) DBG | </network>
	I0829 19:17:28.048043   29935 main.go:141] libmachine: (ha-505269) DBG | 
	I0829 19:17:28.052763   29935 main.go:141] libmachine: (ha-505269) DBG | trying to create private KVM network mk-ha-505269 192.168.39.0/24...
	I0829 19:17:28.114139   29935 main.go:141] libmachine: (ha-505269) Setting up store path in /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269 ...
	I0829 19:17:28.114162   29935 main.go:141] libmachine: (ha-505269) DBG | private KVM network mk-ha-505269 192.168.39.0/24 created
	I0829 19:17:28.114170   29935 main.go:141] libmachine: (ha-505269) Building disk image from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0829 19:17:28.114185   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:28.114084   29958 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:17:28.114275   29935 main.go:141] libmachine: (ha-505269) Downloading /home/jenkins/minikube-integration/19530-11185/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0829 19:17:28.356256   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:28.356158   29958 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa...
	I0829 19:17:28.649996   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:28.649851   29958 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/ha-505269.rawdisk...
	I0829 19:17:28.650032   29935 main.go:141] libmachine: (ha-505269) DBG | Writing magic tar header
	I0829 19:17:28.650043   29935 main.go:141] libmachine: (ha-505269) DBG | Writing SSH key tar header
	I0829 19:17:28.650059   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:28.649976   29958 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269 ...
	I0829 19:17:28.650078   29935 main.go:141] libmachine: (ha-505269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269
	I0829 19:17:28.650107   29935 main.go:141] libmachine: (ha-505269) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269 (perms=drwx------)
	I0829 19:17:28.650122   29935 main.go:141] libmachine: (ha-505269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines
	I0829 19:17:28.650133   29935 main.go:141] libmachine: (ha-505269) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines (perms=drwxr-xr-x)
	I0829 19:17:28.650143   29935 main.go:141] libmachine: (ha-505269) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube (perms=drwxr-xr-x)
	I0829 19:17:28.650153   29935 main.go:141] libmachine: (ha-505269) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185 (perms=drwxrwxr-x)
	I0829 19:17:28.650161   29935 main.go:141] libmachine: (ha-505269) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 19:17:28.650172   29935 main.go:141] libmachine: (ha-505269) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 19:17:28.650183   29935 main.go:141] libmachine: (ha-505269) Creating domain...
	I0829 19:17:28.650240   29935 main.go:141] libmachine: (ha-505269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:17:28.650266   29935 main.go:141] libmachine: (ha-505269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185
	I0829 19:17:28.650283   29935 main.go:141] libmachine: (ha-505269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 19:17:28.650294   29935 main.go:141] libmachine: (ha-505269) DBG | Checking permissions on dir: /home/jenkins
	I0829 19:17:28.650306   29935 main.go:141] libmachine: (ha-505269) DBG | Checking permissions on dir: /home
	I0829 19:17:28.650316   29935 main.go:141] libmachine: (ha-505269) DBG | Skipping /home - not owner
	I0829 19:17:28.651241   29935 main.go:141] libmachine: (ha-505269) define libvirt domain using xml: 
	I0829 19:17:28.651257   29935 main.go:141] libmachine: (ha-505269) <domain type='kvm'>
	I0829 19:17:28.651277   29935 main.go:141] libmachine: (ha-505269)   <name>ha-505269</name>
	I0829 19:17:28.651292   29935 main.go:141] libmachine: (ha-505269)   <memory unit='MiB'>2200</memory>
	I0829 19:17:28.651313   29935 main.go:141] libmachine: (ha-505269)   <vcpu>2</vcpu>
	I0829 19:17:28.651322   29935 main.go:141] libmachine: (ha-505269)   <features>
	I0829 19:17:28.651328   29935 main.go:141] libmachine: (ha-505269)     <acpi/>
	I0829 19:17:28.651331   29935 main.go:141] libmachine: (ha-505269)     <apic/>
	I0829 19:17:28.651336   29935 main.go:141] libmachine: (ha-505269)     <pae/>
	I0829 19:17:28.651344   29935 main.go:141] libmachine: (ha-505269)     
	I0829 19:17:28.651350   29935 main.go:141] libmachine: (ha-505269)   </features>
	I0829 19:17:28.651354   29935 main.go:141] libmachine: (ha-505269)   <cpu mode='host-passthrough'>
	I0829 19:17:28.651361   29935 main.go:141] libmachine: (ha-505269)   
	I0829 19:17:28.651370   29935 main.go:141] libmachine: (ha-505269)   </cpu>
	I0829 19:17:28.651392   29935 main.go:141] libmachine: (ha-505269)   <os>
	I0829 19:17:28.651414   29935 main.go:141] libmachine: (ha-505269)     <type>hvm</type>
	I0829 19:17:28.651444   29935 main.go:141] libmachine: (ha-505269)     <boot dev='cdrom'/>
	I0829 19:17:28.651464   29935 main.go:141] libmachine: (ha-505269)     <boot dev='hd'/>
	I0829 19:17:28.651478   29935 main.go:141] libmachine: (ha-505269)     <bootmenu enable='no'/>
	I0829 19:17:28.651486   29935 main.go:141] libmachine: (ha-505269)   </os>
	I0829 19:17:28.651497   29935 main.go:141] libmachine: (ha-505269)   <devices>
	I0829 19:17:28.651506   29935 main.go:141] libmachine: (ha-505269)     <disk type='file' device='cdrom'>
	I0829 19:17:28.651519   29935 main.go:141] libmachine: (ha-505269)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/boot2docker.iso'/>
	I0829 19:17:28.651530   29935 main.go:141] libmachine: (ha-505269)       <target dev='hdc' bus='scsi'/>
	I0829 19:17:28.651543   29935 main.go:141] libmachine: (ha-505269)       <readonly/>
	I0829 19:17:28.651556   29935 main.go:141] libmachine: (ha-505269)     </disk>
	I0829 19:17:28.651573   29935 main.go:141] libmachine: (ha-505269)     <disk type='file' device='disk'>
	I0829 19:17:28.651588   29935 main.go:141] libmachine: (ha-505269)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 19:17:28.651605   29935 main.go:141] libmachine: (ha-505269)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/ha-505269.rawdisk'/>
	I0829 19:17:28.651616   29935 main.go:141] libmachine: (ha-505269)       <target dev='hda' bus='virtio'/>
	I0829 19:17:28.651626   29935 main.go:141] libmachine: (ha-505269)     </disk>
	I0829 19:17:28.651637   29935 main.go:141] libmachine: (ha-505269)     <interface type='network'>
	I0829 19:17:28.651649   29935 main.go:141] libmachine: (ha-505269)       <source network='mk-ha-505269'/>
	I0829 19:17:28.651657   29935 main.go:141] libmachine: (ha-505269)       <model type='virtio'/>
	I0829 19:17:28.651669   29935 main.go:141] libmachine: (ha-505269)     </interface>
	I0829 19:17:28.651680   29935 main.go:141] libmachine: (ha-505269)     <interface type='network'>
	I0829 19:17:28.651691   29935 main.go:141] libmachine: (ha-505269)       <source network='default'/>
	I0829 19:17:28.651701   29935 main.go:141] libmachine: (ha-505269)       <model type='virtio'/>
	I0829 19:17:28.651711   29935 main.go:141] libmachine: (ha-505269)     </interface>
	I0829 19:17:28.651722   29935 main.go:141] libmachine: (ha-505269)     <serial type='pty'>
	I0829 19:17:28.651736   29935 main.go:141] libmachine: (ha-505269)       <target port='0'/>
	I0829 19:17:28.651751   29935 main.go:141] libmachine: (ha-505269)     </serial>
	I0829 19:17:28.651762   29935 main.go:141] libmachine: (ha-505269)     <console type='pty'>
	I0829 19:17:28.651773   29935 main.go:141] libmachine: (ha-505269)       <target type='serial' port='0'/>
	I0829 19:17:28.651783   29935 main.go:141] libmachine: (ha-505269)     </console>
	I0829 19:17:28.651791   29935 main.go:141] libmachine: (ha-505269)     <rng model='virtio'>
	I0829 19:17:28.651803   29935 main.go:141] libmachine: (ha-505269)       <backend model='random'>/dev/random</backend>
	I0829 19:17:28.651818   29935 main.go:141] libmachine: (ha-505269)     </rng>
	I0829 19:17:28.651832   29935 main.go:141] libmachine: (ha-505269)     
	I0829 19:17:28.651848   29935 main.go:141] libmachine: (ha-505269)     
	I0829 19:17:28.651862   29935 main.go:141] libmachine: (ha-505269)   </devices>
	I0829 19:17:28.651877   29935 main.go:141] libmachine: (ha-505269) </domain>
	I0829 19:17:28.651896   29935 main.go:141] libmachine: (ha-505269) 
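Editor's note: the domain XML above is defined and started through libvirt. A minimal sketch of that step, assuming the libvirt.org/go/libvirt bindings (minikube's kvm2 driver vendors its own bindings, and the XML string here is a placeholder for the document above):

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt" // assumed import path for this sketch
)

func main() {
	// Connect to the system libvirt daemon, as the kvm2 driver does.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	domainXML := `<domain type='kvm'>...</domain>` // the XML logged above

	// Define the persistent domain from XML ("define libvirt domain using xml"),
	// then start it ("Creating domain...").
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}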
	I0829 19:17:28.655845   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:85:87:73 in network default
	I0829 19:17:28.656347   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:28.656364   29935 main.go:141] libmachine: (ha-505269) Ensuring networks are active...
	I0829 19:17:28.657024   29935 main.go:141] libmachine: (ha-505269) Ensuring network default is active
	I0829 19:17:28.657477   29935 main.go:141] libmachine: (ha-505269) Ensuring network mk-ha-505269 is active
	I0829 19:17:28.657900   29935 main.go:141] libmachine: (ha-505269) Getting domain xml...
	I0829 19:17:28.658560   29935 main.go:141] libmachine: (ha-505269) Creating domain...
	I0829 19:17:29.824690   29935 main.go:141] libmachine: (ha-505269) Waiting to get IP...
	I0829 19:17:29.825538   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:29.825902   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:29.825944   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:29.825894   29958 retry.go:31] will retry after 209.089865ms: waiting for machine to come up
	I0829 19:17:30.036215   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:30.036692   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:30.036725   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:30.036637   29958 retry.go:31] will retry after 385.664286ms: waiting for machine to come up
	I0829 19:17:30.424200   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:30.424631   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:30.424657   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:30.424594   29958 retry.go:31] will retry after 332.943452ms: waiting for machine to come up
	I0829 19:17:30.759309   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:30.759697   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:30.759749   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:30.759675   29958 retry.go:31] will retry after 551.728849ms: waiting for machine to come up
	I0829 19:17:31.313333   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:31.313786   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:31.313819   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:31.313733   29958 retry.go:31] will retry after 590.108729ms: waiting for machine to come up
	I0829 19:17:31.905369   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:31.905777   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:31.905808   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:31.905700   29958 retry.go:31] will retry after 758.24211ms: waiting for machine to come up
	I0829 19:17:32.665089   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:32.665517   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:32.665537   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:32.665487   29958 retry.go:31] will retry after 1.1487724s: waiting for machine to come up
	I0829 19:17:33.815411   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:33.815895   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:33.815922   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:33.815813   29958 retry.go:31] will retry after 1.369495463s: waiting for machine to come up
	I0829 19:17:35.187412   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:35.187770   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:35.187797   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:35.187722   29958 retry.go:31] will retry after 1.413323486s: waiting for machine to come up
	I0829 19:17:36.602212   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:36.602607   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:36.602630   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:36.602569   29958 retry.go:31] will retry after 1.621601438s: waiting for machine to come up
	I0829 19:17:38.226589   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:38.227022   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:38.227043   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:38.226989   29958 retry.go:31] will retry after 2.51318315s: waiting for machine to come up
	I0829 19:17:40.742522   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:40.742929   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:40.742959   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:40.742879   29958 retry.go:31] will retry after 2.859959482s: waiting for machine to come up
	I0829 19:17:43.604815   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:43.605190   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:43.605218   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:43.605152   29958 retry.go:31] will retry after 3.832874093s: waiting for machine to come up
	I0829 19:17:47.439131   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:47.439478   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find current IP address of domain ha-505269 in network mk-ha-505269
	I0829 19:17:47.439500   29935 main.go:141] libmachine: (ha-505269) DBG | I0829 19:17:47.439441   29958 retry.go:31] will retry after 3.719809687s: waiting for machine to come up
	I0829 19:17:51.162936   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:51.163407   29935 main.go:141] libmachine: (ha-505269) Found IP for machine: 192.168.39.56
	I0829 19:17:51.163420   29935 main.go:141] libmachine: (ha-505269) Reserving static IP address...
	I0829 19:17:51.163429   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has current primary IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
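Editor's note: the "will retry after ..." lines above come from a generic backoff helper (retry.go). A sketch of that pattern under stated assumptions: the jitter and growth factors are illustrative, and lookupIP stands in for querying the libvirt DHCP leases:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP stands in for reading the DHCP lease table; assumed helper.
func lookupIP() (string, error) { return "", errNoIP }

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Randomize the delay a little and grow it, matching the log's
		// irregular, increasing retry intervals.
		d := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
		time.Sleep(d)
		backoff = backoff * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for IP")
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}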
	I0829 19:17:51.163727   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find host DHCP lease matching {name: "ha-505269", mac: "52:54:00:5e:63:25", ip: "192.168.39.56"} in network mk-ha-505269
	I0829 19:17:51.232076   29935 main.go:141] libmachine: (ha-505269) DBG | Getting to WaitForSSH function...
	I0829 19:17:51.232105   29935 main.go:141] libmachine: (ha-505269) Reserved static IP address: 192.168.39.56
	I0829 19:17:51.232124   29935 main.go:141] libmachine: (ha-505269) Waiting for SSH to be available...
	I0829 19:17:51.234364   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:51.234790   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269
	I0829 19:17:51.234814   29935 main.go:141] libmachine: (ha-505269) DBG | unable to find defined IP address of network mk-ha-505269 interface with MAC address 52:54:00:5e:63:25
	I0829 19:17:51.234892   29935 main.go:141] libmachine: (ha-505269) DBG | Using SSH client type: external
	I0829 19:17:51.234913   29935 main.go:141] libmachine: (ha-505269) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa (-rw-------)
	I0829 19:17:51.234981   29935 main.go:141] libmachine: (ha-505269) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:17:51.234995   29935 main.go:141] libmachine: (ha-505269) DBG | About to run SSH command:
	I0829 19:17:51.235005   29935 main.go:141] libmachine: (ha-505269) DBG | exit 0
	I0829 19:17:51.238355   29935 main.go:141] libmachine: (ha-505269) DBG | SSH cmd err, output: exit status 255: 
	I0829 19:17:51.238377   29935 main.go:141] libmachine: (ha-505269) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0829 19:17:51.238388   29935 main.go:141] libmachine: (ha-505269) DBG | command : exit 0
	I0829 19:17:51.238402   29935 main.go:141] libmachine: (ha-505269) DBG | err     : exit status 255
	I0829 19:17:51.238422   29935 main.go:141] libmachine: (ha-505269) DBG | output  : 
	I0829 19:17:54.239544   29935 main.go:141] libmachine: (ha-505269) DBG | Getting to WaitForSSH function...
	I0829 19:17:54.241704   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.242064   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.242092   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.242190   29935 main.go:141] libmachine: (ha-505269) DBG | Using SSH client type: external
	I0829 19:17:54.242214   29935 main.go:141] libmachine: (ha-505269) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa (-rw-------)
	I0829 19:17:54.242252   29935 main.go:141] libmachine: (ha-505269) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:17:54.242265   29935 main.go:141] libmachine: (ha-505269) DBG | About to run SSH command:
	I0829 19:17:54.242276   29935 main.go:141] libmachine: (ha-505269) DBG | exit 0
	I0829 19:17:54.370601   29935 main.go:141] libmachine: (ha-505269) DBG | SSH cmd err, output: <nil>: 
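Editor's note: the SSH wait above shells out to the system ssh binary ("Using SSH client type: external") and runs `exit 0` until it succeeds; the first attempt fails with status 255 because no guest IP is known yet. A rough sketch of that probe, with the address and key path taken from the log and the retry cap assumed:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probeSSH runs "exit 0" on the guest with the same non-interactive
// options the log shows; a non-nil error means sshd is not reachable yet.
func probeSSH(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	key := "/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa"
	for i := 0; i < 20; i++ {
		if err := probeSSH("192.168.39.56", key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // the log shows roughly 3s between attempts
	}
	fmt.Println("gave up waiting for SSH")
}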
	I0829 19:17:54.370834   29935 main.go:141] libmachine: (ha-505269) KVM machine creation complete!
	I0829 19:17:54.371212   29935 main.go:141] libmachine: (ha-505269) Calling .GetConfigRaw
	I0829 19:17:54.371764   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:54.371959   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:54.372177   29935 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 19:17:54.372192   29935 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:17:54.373369   29935 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 19:17:54.373391   29935 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 19:17:54.373399   29935 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 19:17:54.373410   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:54.375532   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.375839   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.375877   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.375995   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:54.376146   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.376281   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.376384   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:54.376588   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:17:54.376779   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:17:54.376792   29935 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 19:17:54.489891   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:17:54.489914   29935 main.go:141] libmachine: Detecting the provisioner...
	I0829 19:17:54.489922   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:54.492480   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.492755   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.492781   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.492925   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:54.493108   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.493265   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.493413   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:54.493580   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:17:54.493767   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:17:54.493778   29935 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 19:17:54.607475   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 19:17:54.607587   29935 main.go:141] libmachine: found compatible host: buildroot
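Editor's note: provisioner detection boils down to fetching /etc/os-release over SSH and matching its ID field. A small sketch of that check, with the matching rule simplified to the buildroot case seen in the log:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner picks a provisioner name from os-release contents;
// only the buildroot case from the log is handled in this sketch.
func detectProvisioner(osRelease string) (string, error) {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			if strings.Trim(strings.TrimPrefix(line, "ID="), `"`) == "buildroot" {
				return "buildroot", nil
			}
		}
	}
	return "", fmt.Errorf("no compatible host found")
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\n"
	p, err := detectProvisioner(out)
	fmt.Println(p, err)
}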
	I0829 19:17:54.607601   29935 main.go:141] libmachine: Provisioning with buildroot...
	I0829 19:17:54.607612   29935 main.go:141] libmachine: (ha-505269) Calling .GetMachineName
	I0829 19:17:54.607844   29935 buildroot.go:166] provisioning hostname "ha-505269"
	I0829 19:17:54.607866   29935 main.go:141] libmachine: (ha-505269) Calling .GetMachineName
	I0829 19:17:54.608055   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:54.610330   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.610666   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.610687   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.610815   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:54.610967   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.611127   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.611243   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:54.611365   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:17:54.611529   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:17:54.611540   29935 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-505269 && echo "ha-505269" | sudo tee /etc/hostname
	I0829 19:17:54.736836   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-505269
	
	I0829 19:17:54.736866   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:54.739230   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.739526   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.739570   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.739697   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:54.739878   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.740044   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.740185   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:54.740324   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:17:54.740515   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:17:54.740532   29935 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-505269' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-505269/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-505269' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:17:54.859355   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
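Editor's note: the /etc/hosts edit above is idempotent: it rewrites the 127.0.1.1 entry only when the hostname is not already present. A sketch of how such a command string can be templated from the hostname (illustrative, not the exact minikube helper):

package main

import "fmt"

// hostsCmd renders the idempotent /etc/hosts update shown in the log
// for a given hostname.
func hostsCmd(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, name, name, name)
}

func main() {
	fmt.Println(hostsCmd("ha-505269"))
}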
	I0829 19:17:54.859408   29935 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 19:17:54.859433   29935 buildroot.go:174] setting up certificates
	I0829 19:17:54.859442   29935 provision.go:84] configureAuth start
	I0829 19:17:54.859451   29935 main.go:141] libmachine: (ha-505269) Calling .GetMachineName
	I0829 19:17:54.859732   29935 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:17:54.862498   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.862854   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.862876   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.863028   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:54.865121   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.865469   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.865494   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.865593   29935 provision.go:143] copyHostCerts
	I0829 19:17:54.865624   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:17:54.865658   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 19:17:54.865674   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:17:54.865751   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 19:17:54.865847   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:17:54.865866   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 19:17:54.865873   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:17:54.865898   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 19:17:54.865955   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:17:54.865977   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 19:17:54.865983   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:17:54.866005   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 19:17:54.866056   29935 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.ha-505269 san=[127.0.0.1 192.168.39.56 ha-505269 localhost minikube]
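Editor's note: the server certificate generated above carries the SAN list from the log line. A compact sketch of issuing such a certificate with crypto/x509; the throwaway in-memory CA is an assumption for self-containment (minikube instead loads ca.pem/ca-key.pem from disk):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA for the sketch only.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate carrying the SANs from the log:
	// [127.0.0.1 192.168.39.56 ha-505269 localhost minikube]
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-505269"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"ha-505269", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.56")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	if _, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey); err != nil {
		log.Fatal(err)
	}
	log.Println("server cert generated")
}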
	I0829 19:17:54.994896   29935 provision.go:177] copyRemoteCerts
	I0829 19:17:54.994948   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:17:54.994969   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:54.997280   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.997563   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:54.997581   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:54.997741   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:54.997908   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:54.998043   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:54.998144   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:17:55.084371   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 19:17:55.084440   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 19:17:55.108256   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 19:17:55.108346   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0829 19:17:55.132778   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 19:17:55.132866   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:17:55.156154   29935 provision.go:87] duration metric: took 296.700657ms to configureAuth
	I0829 19:17:55.156184   29935 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:17:55.156382   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:17:55.156496   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:55.158891   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.159239   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.159266   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.159388   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:55.159543   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:55.159709   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:55.159825   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:55.159969   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:17:55.160113   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:17:55.160129   29935 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:17:55.381545   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:17:55.381571   29935 main.go:141] libmachine: Checking connection to Docker...
	I0829 19:17:55.381578   29935 main.go:141] libmachine: (ha-505269) Calling .GetURL
	I0829 19:17:55.382693   29935 main.go:141] libmachine: (ha-505269) DBG | Using libvirt version 6000000
	I0829 19:17:55.384881   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.385204   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.385229   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.385420   29935 main.go:141] libmachine: Docker is up and running!
	I0829 19:17:55.385431   29935 main.go:141] libmachine: Reticulating splines...
	I0829 19:17:55.385436   29935 client.go:171] duration metric: took 27.340514063s to LocalClient.Create
	I0829 19:17:55.385457   29935 start.go:167] duration metric: took 27.340568977s to libmachine.API.Create "ha-505269"
	I0829 19:17:55.385470   29935 start.go:293] postStartSetup for "ha-505269" (driver="kvm2")
	I0829 19:17:55.385482   29935 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:17:55.385497   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:55.385708   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:17:55.385730   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:55.387943   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.388294   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.388322   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.388450   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:55.388625   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:55.388773   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:55.388901   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:17:55.472984   29935 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:17:55.477127   29935 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:17:55.477153   29935 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 19:17:55.477212   29935 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 19:17:55.477301   29935 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 19:17:55.477314   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /etc/ssl/certs/183612.pem
	I0829 19:17:55.477442   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:17:55.486988   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:17:55.510633   29935 start.go:296] duration metric: took 125.149206ms for postStartSetup
	I0829 19:17:55.510691   29935 main.go:141] libmachine: (ha-505269) Calling .GetConfigRaw
	I0829 19:17:55.511257   29935 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:17:55.513588   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.513891   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.513924   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.514170   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:17:55.514363   29935 start.go:128] duration metric: took 27.487658045s to createHost
	I0829 19:17:55.514392   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:55.516340   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.516606   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.516628   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.516776   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:55.516934   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:55.517088   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:55.517212   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:55.517359   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:17:55.517516   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:17:55.517525   29935 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:17:55.631228   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724959075.612760705
	
	I0829 19:17:55.631253   29935 fix.go:216] guest clock: 1724959075.612760705
	I0829 19:17:55.631263   29935 fix.go:229] Guest: 2024-08-29 19:17:55.612760705 +0000 UTC Remote: 2024-08-29 19:17:55.514381393 +0000 UTC m=+27.588833608 (delta=98.379312ms)
	I0829 19:17:55.631284   29935 fix.go:200] guest clock delta is within tolerance: 98.379312ms
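Editor's note: the clock check parses the guest's `date +%s.%N` output and compares it against the host timestamp recorded when the command returned; only if the delta exceeded a tolerance would the guest clock be reset. A sketch of that comparison, with the tolerance value assumed (float parsing loses sub-microsecond precision, which is fine for a tolerance check):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestDelta parses "seconds.nanoseconds" from `date +%s.%N` and returns
// the offset from the given host reference time.
func guestDelta(out string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Unix(0, 1724959075514381393) // the "Remote" timestamp from the log
	d, err := guestDelta("1724959075.612760705", host)
	if err != nil {
		fmt.Println(err)
		return
	}
	tolerance := 2 * time.Second // assumed threshold for this sketch
	fmt.Printf("delta=%v within tolerance: %v\n", d, math.Abs(float64(d)) < float64(tolerance))
}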
	I0829 19:17:55.631294   29935 start.go:83] releasing machines lock for "ha-505269", held for 27.604662263s
	I0829 19:17:55.631312   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:55.631547   29935 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:17:55.634015   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.634362   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.634391   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.634548   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:55.635006   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:55.635169   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:17:55.635232   29935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:17:55.635270   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:55.635375   29935 ssh_runner.go:195] Run: cat /version.json
	I0829 19:17:55.635389   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:17:55.637698   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.638012   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.638046   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.638078   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.638187   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:55.638343   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:55.638466   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:55.638496   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:55.638521   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:55.638612   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:17:55.638667   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:17:55.638828   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:17:55.638982   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:17:55.639103   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:17:55.739321   29935 ssh_runner.go:195] Run: systemctl --version
	I0829 19:17:55.745202   29935 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:17:55.906765   29935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:17:55.912628   29935 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:17:55.912700   29935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:17:55.932034   29935 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:17:55.932058   29935 start.go:495] detecting cgroup driver to use...
	I0829 19:17:55.932112   29935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:17:55.950478   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:17:55.964970   29935 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:17:55.965046   29935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:17:55.978970   29935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:17:55.992754   29935 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:17:56.109240   29935 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:17:56.268373   29935 docker.go:233] disabling docker service ...
	I0829 19:17:56.268442   29935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:17:56.282829   29935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:17:56.295586   29935 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:17:56.424098   29935 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:17:56.547017   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:17:56.560904   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:17:56.579199   29935 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:17:56.579264   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:17:56.589967   29935 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:17:56.590032   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:17:56.600618   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:17:56.611011   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:17:56.621958   29935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:17:56.632864   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:17:56.643306   29935 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:17:56.660236   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:17:56.670897   29935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:17:56.680531   29935 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:17:56.680589   29935 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:17:56.694018   29935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:17:56.703828   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:17:56.825049   29935 ssh_runner.go:195] Run: sudo systemctl restart crio
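Editor's note: the CRI-O setup above is a fixed sequence of remote shell steps (write crictl.yaml, sed the pause image and cgroup manager into 02-crio.conf, load br_netfilter, enable ip_forward, restart crio), each of which must succeed before the next runs. A sketch of such an ordered runner; runCmd is a local stand-in for minikube's SSH-based ssh_runner, and the step list is abbreviated:

package main

import (
	"fmt"
	"os/exec"
)

// runCmd is a stand-in for running a command on the guest over SSH.
func runCmd(cmd string) error {
	return exec.Command("sh", "-c", cmd).Run()
}

func main() {
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if err := runCmd(s); err != nil {
			fmt.Printf("step %q failed: %v\n", s, err)
			return
		}
	}
	fmt.Println("crio configured")
}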
	I0829 19:17:56.917042   29935 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:17:56.917122   29935 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:17:56.921912   29935 start.go:563] Will wait 60s for crictl version
	I0829 19:17:56.921962   29935 ssh_runner.go:195] Run: which crictl
	I0829 19:17:56.925614   29935 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:17:56.964597   29935 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:17:56.964676   29935 ssh_runner.go:195] Run: crio --version
	I0829 19:17:56.992131   29935 ssh_runner.go:195] Run: crio --version
	I0829 19:17:57.023140   29935 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:17:57.024648   29935 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:17:57.027385   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:57.027744   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:17:57.027772   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:17:57.027913   29935 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:17:57.032162   29935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:17:57.046050   29935 kubeadm.go:883] updating cluster {Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:17:57.046269   29935 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:17:57.046515   29935 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:17:57.078217   29935 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:17:57.078284   29935 ssh_runner.go:195] Run: which lz4
	I0829 19:17:57.082210   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0829 19:17:57.082290   29935 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:17:57.086414   29935 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:17:57.086436   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 19:17:58.424209   29935 crio.go:462] duration metric: took 1.341938036s to copy over tarball
	I0829 19:17:58.424290   29935 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:18:00.437807   29935 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.013487447s)
	I0829 19:18:00.437843   29935 crio.go:469] duration metric: took 2.013606568s to extract the tarball
	I0829 19:18:00.437852   29935 ssh_runner.go:146] rm: /preloaded.tar.lz4
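Editor's note: the preload path above is: check `crictl images` for the expected images, scp the lz4 tarball to /preloaded.tar.lz4, extract it into /var with security xattrs preserved, then delete the tarball. A sketch of the extraction step with the paths taken from the log; runCmd again stands in for the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

// runCmd is a stand-in for running a command on the guest over SSH.
func runCmd(cmd string) error {
	return exec.Command("sh", "-c", cmd).Run()
}

func main() {
	steps := []string{
		// Extract preloaded images into /var, preserving capability xattrs,
		// exactly as the logged tar invocation does.
		"sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4",
		"sudo rm -f /preloaded.tar.lz4",
	}
	for _, s := range steps {
		if err := runCmd(s); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("preload extracted")
}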
	I0829 19:18:00.474664   29935 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:18:00.521002   29935 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:18:00.521024   29935 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:18:00.521031   29935 kubeadm.go:934] updating node { 192.168.39.56 8443 v1.31.0 crio true true} ...
	I0829 19:18:00.521160   29935 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-505269 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:18:00.521257   29935 ssh_runner.go:195] Run: crio config
	I0829 19:18:00.565829   29935 cni.go:84] Creating CNI manager for ""
	I0829 19:18:00.565849   29935 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0829 19:18:00.565864   29935 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:18:00.565894   29935 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.56 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-505269 NodeName:ha-505269 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:18:00.566069   29935 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-505269"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
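	The kubeadm config above is emitted against the deprecated kubeadm.k8s.io/v1beta3 API, and kubeadm warns about exactly that during init (see the two common.go:101 warnings further down). A minimal sketch of the migration kubeadm itself suggests, with the output path chosen here purely for illustration:
	
	  sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
	    --old-config /var/tmp/minikube/kubeadm.yaml \
	    --new-config /var/tmp/minikube/kubeadm-migrated.yaml  # illustrative output path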
	I0829 19:18:00.566098   29935 kube-vip.go:115] generating kube-vip config ...
	I0829 19:18:00.566150   29935 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 19:18:00.584228   29935 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 19:18:00.584340   29935 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
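	kube-vip is deployed as a static pod and, with cp_enable and leader election set in the manifest above, the elected control-plane node binds the HA virtual IP 192.168.39.254 on eth0. A minimal check from a node, assuming standard crictl and iproute2 tooling (these commands are not part of this log):
	
	  sudo crictl ps --name kube-vip                   # the static pod should be Running
	  ip -4 addr show dev eth0 | grep 192.168.39.254   # the VIP appears on the current leader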
	I0829 19:18:00.584392   29935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:18:00.594198   29935 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:18:00.594248   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0829 19:18:00.603461   29935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0829 19:18:00.619940   29935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:18:00.638970   29935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0829 19:18:00.655996   29935 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0829 19:18:00.672958   29935 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 19:18:00.677018   29935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
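	Unrolled, that one-liner is the usual idempotent /etc/hosts rewrite: strip any stale line for the name, append the current mapping, and install the result with cp. Same commands as above, just spread out for readability:
	
	  {
	    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	    echo "192.168.39.254	control-plane.minikube.internal"
	  } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts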
	I0829 19:18:00.688883   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:18:00.800177   29935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:18:00.816756   29935 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269 for IP: 192.168.39.56
	I0829 19:18:00.816778   29935 certs.go:194] generating shared ca certs ...
	I0829 19:18:00.816791   29935 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:00.816957   29935 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 19:18:00.817019   29935 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 19:18:00.817033   29935 certs.go:256] generating profile certs ...
	I0829 19:18:00.817083   29935 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key
	I0829 19:18:00.817110   29935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.crt with IP's: []
	I0829 19:18:00.940108   29935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.crt ...
	I0829 19:18:00.940131   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.crt: {Name:mk431a4ed0d72f13a92734082de436c232306a7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:00.940319   29935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key ...
	I0829 19:18:00.940335   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key: {Name:mk4d31a534edf74fc14738154db3aebf4d68236c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:00.940435   29935 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.ca1201dc
	I0829 19:18:00.940455   29935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.ca1201dc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.56 192.168.39.254]
	I0829 19:18:01.119912   29935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.ca1201dc ...
	I0829 19:18:01.119941   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.ca1201dc: {Name:mk6b841224bb564430ad1d214971521b1b1d96df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:01.120116   29935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.ca1201dc ...
	I0829 19:18:01.120131   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.ca1201dc: {Name:mkaef83e7776af706be997c5d3daca14b348913a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:01.120228   29935 certs.go:381] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.ca1201dc -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt
	I0829 19:18:01.120345   29935 certs.go:385] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.ca1201dc -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key
	I0829 19:18:01.120427   29935 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key
	I0829 19:18:01.120446   29935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt with IP's: []
	I0829 19:18:01.262965   29935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt ...
	I0829 19:18:01.262993   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt: {Name:mk3ba36e71f82511845306cdb8499effc15a4084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:01.263171   29935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key ...
	I0829 19:18:01.263188   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key: {Name:mk57598d6159b95fb72f606eb5dac76361e83839 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:01.263284   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 19:18:01.263305   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 19:18:01.263319   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 19:18:01.263338   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 19:18:01.263357   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 19:18:01.263376   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 19:18:01.263397   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 19:18:01.263410   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 19:18:01.263463   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 19:18:01.263516   29935 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 19:18:01.263528   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 19:18:01.263561   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 19:18:01.263601   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:18:01.263633   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 19:18:01.263686   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:18:01.263729   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:01.263749   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem -> /usr/share/ca-certificates/18361.pem
	I0829 19:18:01.263767   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /usr/share/ca-certificates/183612.pem
	I0829 19:18:01.264343   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:18:01.291491   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 19:18:01.317520   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:18:01.343084   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:18:01.368653   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 19:18:01.394069   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 19:18:01.419825   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:18:01.445128   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:18:01.471123   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:18:01.496657   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 19:18:01.522468   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 19:18:01.547552   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:18:01.564516   29935 ssh_runner.go:195] Run: openssl version
	I0829 19:18:01.570510   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:18:01.584760   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:01.589572   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:01.589643   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:01.597434   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:18:01.609175   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 19:18:01.626990   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 19:18:01.632048   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 19:18:01.632091   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 19:18:01.638820   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 19:18:01.655777   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 19:18:01.666565   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 19:18:01.671251   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 19:18:01.671304   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 19:18:01.677127   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
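	The hash-named links created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash lookup convention for /etc/ssl/certs: the link name is the certificate's subject hash plus a .0 suffix. A minimal sketch of the derivation, using the minikubeCA file from this run:
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"  # h=b5213941 above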
	I0829 19:18:01.689086   29935 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:18:01.693262   29935 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 19:18:01.693317   29935 kubeadm.go:392] StartCluster: {Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:18:01.693393   29935 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:18:01.693474   29935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:18:01.731104   29935 cri.go:89] found id: ""
	I0829 19:18:01.731174   29935 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:18:01.741306   29935 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:18:01.750983   29935 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:18:01.761155   29935 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:18:01.761178   29935 kubeadm.go:157] found existing configuration files:
	
	I0829 19:18:01.761228   29935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:18:01.770354   29935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:18:01.770436   29935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:18:01.780505   29935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:18:01.790109   29935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:18:01.790170   29935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:18:01.799982   29935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:18:01.809003   29935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:18:01.809075   29935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:18:01.818607   29935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:18:01.827641   29935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:18:01.827708   29935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:18:01.837298   29935 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:18:01.951203   29935 kubeadm.go:310] W0829 19:18:01.933236     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:18:01.951535   29935 kubeadm.go:310] W0829 19:18:01.934313     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:18:02.053114   29935 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:18:15.018731   29935 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:18:15.018799   29935 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:18:15.018918   29935 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:18:15.019063   29935 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:18:15.019206   29935 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:18:15.019291   29935 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:18:15.020690   29935 out.go:235]   - Generating certificates and keys ...
	I0829 19:18:15.020788   29935 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:18:15.020875   29935 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:18:15.020977   29935 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 19:18:15.021081   29935 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 19:18:15.021175   29935 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 19:18:15.021246   29935 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 19:18:15.021318   29935 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 19:18:15.021466   29935 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-505269 localhost] and IPs [192.168.39.56 127.0.0.1 ::1]
	I0829 19:18:15.021526   29935 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 19:18:15.021621   29935 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-505269 localhost] and IPs [192.168.39.56 127.0.0.1 ::1]
	I0829 19:18:15.021682   29935 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 19:18:15.021735   29935 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 19:18:15.021773   29935 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 19:18:15.021822   29935 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:18:15.021874   29935 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:18:15.021932   29935 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:18:15.022004   29935 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:18:15.022069   29935 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:18:15.022116   29935 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:18:15.022183   29935 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:18:15.022238   29935 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:18:15.023529   29935 out.go:235]   - Booting up control plane ...
	I0829 19:18:15.023609   29935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:18:15.023674   29935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:18:15.023730   29935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:18:15.023817   29935 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:18:15.023909   29935 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:18:15.023969   29935 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:18:15.024091   29935 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:18:15.024185   29935 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:18:15.024244   29935 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.12668ms
	I0829 19:18:15.024313   29935 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:18:15.024365   29935 kubeadm.go:310] [api-check] The API server is healthy after 9.064092806s
	I0829 19:18:15.024457   29935 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:18:15.024567   29935 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:18:15.024624   29935 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:18:15.024819   29935 kubeadm.go:310] [mark-control-plane] Marking the node ha-505269 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:18:15.024879   29935 kubeadm.go:310] [bootstrap-token] Using token: dngxmm.tc10434umf6x6rzl
	I0829 19:18:15.026265   29935 out.go:235]   - Configuring RBAC rules ...
	I0829 19:18:15.026367   29935 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:18:15.026447   29935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:18:15.026588   29935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:18:15.026704   29935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:18:15.026802   29935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:18:15.026877   29935 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:18:15.026985   29935 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:18:15.027022   29935 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:18:15.027061   29935 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:18:15.027066   29935 kubeadm.go:310] 
	I0829 19:18:15.027115   29935 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:18:15.027121   29935 kubeadm.go:310] 
	I0829 19:18:15.027189   29935 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:18:15.027195   29935 kubeadm.go:310] 
	I0829 19:18:15.027219   29935 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:18:15.027271   29935 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:18:15.027316   29935 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:18:15.027322   29935 kubeadm.go:310] 
	I0829 19:18:15.027371   29935 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:18:15.027377   29935 kubeadm.go:310] 
	I0829 19:18:15.027419   29935 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:18:15.027426   29935 kubeadm.go:310] 
	I0829 19:18:15.027468   29935 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:18:15.027544   29935 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:18:15.027617   29935 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:18:15.027627   29935 kubeadm.go:310] 
	I0829 19:18:15.027697   29935 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:18:15.027760   29935 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:18:15.027765   29935 kubeadm.go:310] 
	I0829 19:18:15.027831   29935 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dngxmm.tc10434umf6x6rzl \
	I0829 19:18:15.027919   29935 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 19:18:15.027939   29935 kubeadm.go:310] 	--control-plane 
	I0829 19:18:15.027942   29935 kubeadm.go:310] 
	I0829 19:18:15.028024   29935 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:18:15.028036   29935 kubeadm.go:310] 
	I0829 19:18:15.028170   29935 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dngxmm.tc10434umf6x6rzl \
	I0829 19:18:15.028269   29935 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
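	The sha256 value in both join commands pins the cluster CA for discovery. It can be recomputed from the CA certificate with the standard kubeadm recipe (the path below follows the certificatesDir in the config above):
	
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'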
	I0829 19:18:15.028284   29935 cni.go:84] Creating CNI manager for ""
	I0829 19:18:15.028291   29935 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0829 19:18:15.029603   29935 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0829 19:18:15.030662   29935 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0829 19:18:15.036448   29935 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0829 19:18:15.036462   29935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0829 19:18:15.059690   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
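	With the kindnet manifest applied, the CNI runs as a DaemonSet in kube-system. A quick sanity check, assuming the upstream kindnet object name and a kubectl context matching the profile:
	
	  kubectl --context ha-505269 -n kube-system get daemonset kindnet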
	I0829 19:18:15.467740   29935 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:18:15.467808   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:18:15.467809   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-505269 minikube.k8s.io/updated_at=2024_08_29T19_18_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=ha-505269 minikube.k8s.io/primary=true
	I0829 19:18:15.607863   29935 ops.go:34] apiserver oom_adj: -16
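	An oom_adj of -16 (read back above from /proc/$(pgrep kube-apiserver)/oom_adj) sits near the bottom of the legacy -17..15 range, so the kernel's OOM killer treats kube-apiserver as one of the last candidates; -17 would exempt it outright. The modern equivalent knob can be read the same way:
	
	  cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # -16 maps to a strongly negative value on the -1000..1000 scale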
	I0829 19:18:15.645301   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:18:16.146183   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:18:16.239962   29935 kubeadm.go:1113] duration metric: took 772.210554ms to wait for elevateKubeSystemPrivileges
	I0829 19:18:16.239995   29935 kubeadm.go:394] duration metric: took 14.546680574s to StartCluster
	I0829 19:18:16.240016   29935 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:16.240086   29935 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:18:16.240743   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:16.240974   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 19:18:16.240981   29935 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:18:16.241001   29935 start.go:241] waiting for startup goroutines ...
	I0829 19:18:16.241009   29935 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:18:16.241067   29935 addons.go:69] Setting storage-provisioner=true in profile "ha-505269"
	I0829 19:18:16.241083   29935 addons.go:69] Setting default-storageclass=true in profile "ha-505269"
	I0829 19:18:16.241140   29935 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-505269"
	I0829 19:18:16.241094   29935 addons.go:234] Setting addon storage-provisioner=true in "ha-505269"
	I0829 19:18:16.241210   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:18:16.241220   29935 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:18:16.241511   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:16.241540   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:16.241622   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:16.241660   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:16.256286   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34699
	I0829 19:18:16.256706   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:16.257236   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:16.257257   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:16.257551   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:16.257779   29935 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:18:16.259921   29935 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:18:16.260229   29935 kapi.go:59] client config for ha-505269: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.crt", KeyFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key", CAFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0829 19:18:16.260393   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43955
	I0829 19:18:16.260745   29935 cert_rotation.go:140] Starting client certificate rotation controller
	I0829 19:18:16.260778   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:16.261032   29935 addons.go:234] Setting addon default-storageclass=true in "ha-505269"
	I0829 19:18:16.261071   29935 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:18:16.261293   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:16.261317   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:16.261450   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:16.261478   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:16.261638   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:16.262107   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:16.262128   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:16.275995   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40035
	I0829 19:18:16.276075   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I0829 19:18:16.276413   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:16.276528   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:16.276895   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:16.276911   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:16.277018   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:16.277041   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:16.277280   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:16.277322   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:16.277453   29935 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:18:16.277871   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:16.277910   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:16.278885   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:18:16.280814   29935 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:18:16.282027   29935 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:18:16.282048   29935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:18:16.282065   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:18:16.284679   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:16.285076   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:18:16.285105   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:16.285270   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:18:16.285440   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:18:16.285592   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:18:16.285742   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:18:16.293679   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39469
	I0829 19:18:16.294171   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:16.294714   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:16.294735   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:16.295020   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:16.295185   29935 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:18:16.296542   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:18:16.296752   29935 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:18:16.296769   29935 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:18:16.296785   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:18:16.299603   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:16.299974   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:18:16.299999   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:16.300137   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:18:16.300302   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:18:16.300471   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:18:16.300597   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:18:16.365131   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
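	Per the sed expressions, that pipeline rewrites the CoreDNS Corefile in place: a hosts block mapping host.minikube.internal to 192.168.39.1 (with fallthrough) is inserted ahead of the forward plugin, and a log directive ahead of errors. One way to inspect the result, assuming the kubectl context matches the profile name:
	
	  kubectl --context ha-505269 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'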
	I0829 19:18:16.401954   29935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:18:16.457570   29935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:18:16.847271   29935 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0829 19:18:17.135374   29935 main.go:141] libmachine: Making call to close driver server
	I0829 19:18:17.135397   29935 main.go:141] libmachine: (ha-505269) Calling .Close
	I0829 19:18:17.135420   29935 main.go:141] libmachine: Making call to close driver server
	I0829 19:18:17.135441   29935 main.go:141] libmachine: (ha-505269) Calling .Close
	I0829 19:18:17.135692   29935 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:18:17.135704   29935 main.go:141] libmachine: (ha-505269) DBG | Closing plugin on server side
	I0829 19:18:17.135708   29935 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:18:17.135719   29935 main.go:141] libmachine: Making call to close driver server
	I0829 19:18:17.135725   29935 main.go:141] libmachine: (ha-505269) Calling .Close
	I0829 19:18:17.135741   29935 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:18:17.135750   29935 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:18:17.135761   29935 main.go:141] libmachine: Making call to close driver server
	I0829 19:18:17.135769   29935 main.go:141] libmachine: (ha-505269) Calling .Close
	I0829 19:18:17.136026   29935 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:18:17.136039   29935 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:18:17.136069   29935 main.go:141] libmachine: (ha-505269) DBG | Closing plugin on server side
	I0829 19:18:17.136108   29935 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:18:17.136115   29935 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:18:17.136176   29935 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0829 19:18:17.136195   29935 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0829 19:18:17.136295   29935 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0829 19:18:17.136307   29935 round_trippers.go:469] Request Headers:
	I0829 19:18:17.136319   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:18:17.136327   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:18:17.149354   29935 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0829 19:18:17.150047   29935 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0829 19:18:17.150060   29935 round_trippers.go:469] Request Headers:
	I0829 19:18:17.150067   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:18:17.150070   29935 round_trippers.go:473]     Content-Type: application/json
	I0829 19:18:17.150075   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:18:17.154072   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:18:17.154260   29935 main.go:141] libmachine: Making call to close driver server
	I0829 19:18:17.154274   29935 main.go:141] libmachine: (ha-505269) Calling .Close
	I0829 19:18:17.154565   29935 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:18:17.154595   29935 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:18:17.154571   29935 main.go:141] libmachine: (ha-505269) DBG | Closing plugin on server side
	I0829 19:18:17.156214   29935 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0829 19:18:17.157534   29935 addons.go:510] duration metric: took 916.52286ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0829 19:18:17.157564   29935 start.go:246] waiting for cluster config update ...
	I0829 19:18:17.157575   29935 start.go:255] writing updated cluster config ...
	I0829 19:18:17.159049   29935 out.go:201] 
	I0829 19:18:17.160300   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:18:17.160364   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:18:17.161879   29935 out.go:177] * Starting "ha-505269-m02" control-plane node in "ha-505269" cluster
	I0829 19:18:17.163085   29935 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:18:17.163108   29935 cache.go:56] Caching tarball of preloaded images
	I0829 19:18:17.163181   29935 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:18:17.163192   29935 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:18:17.163254   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:18:17.163418   29935 start.go:360] acquireMachinesLock for ha-505269-m02: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:18:17.163456   29935 start.go:364] duration metric: took 20.467µs to acquireMachinesLock for "ha-505269-m02"
	I0829 19:18:17.163471   29935 start.go:93] Provisioning new machine with config: &{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:18:17.163543   29935 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0829 19:18:17.165179   29935 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 19:18:17.165244   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:17.165265   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:17.179457   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
	I0829 19:18:17.179807   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:17.180265   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:17.180281   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:17.180573   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:17.180779   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetMachineName
	I0829 19:18:17.180943   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:17.181129   29935 start.go:159] libmachine.API.Create for "ha-505269" (driver="kvm2")
	I0829 19:18:17.181188   29935 client.go:168] LocalClient.Create starting
	I0829 19:18:17.181215   29935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem
	I0829 19:18:17.181243   29935 main.go:141] libmachine: Decoding PEM data...
	I0829 19:18:17.181256   29935 main.go:141] libmachine: Parsing certificate...
	I0829 19:18:17.181308   29935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem
	I0829 19:18:17.181334   29935 main.go:141] libmachine: Decoding PEM data...
	I0829 19:18:17.181355   29935 main.go:141] libmachine: Parsing certificate...
	I0829 19:18:17.181381   29935 main.go:141] libmachine: Running pre-create checks...
	I0829 19:18:17.181392   29935 main.go:141] libmachine: (ha-505269-m02) Calling .PreCreateCheck
	I0829 19:18:17.181557   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetConfigRaw
	I0829 19:18:17.181919   29935 main.go:141] libmachine: Creating machine...
	I0829 19:18:17.181938   29935 main.go:141] libmachine: (ha-505269-m02) Calling .Create
	I0829 19:18:17.182049   29935 main.go:141] libmachine: (ha-505269-m02) Creating KVM machine...
	I0829 19:18:17.183313   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found existing default KVM network
	I0829 19:18:17.183432   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found existing private KVM network mk-ha-505269
	I0829 19:18:17.183562   29935 main.go:141] libmachine: (ha-505269-m02) Setting up store path in /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02 ...
	I0829 19:18:17.183581   29935 main.go:141] libmachine: (ha-505269-m02) Building disk image from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0829 19:18:17.183637   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:17.183533   30766 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:18:17.183731   29935 main.go:141] libmachine: (ha-505269-m02) Downloading /home/jenkins/minikube-integration/19530-11185/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0829 19:18:17.414248   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:17.414122   30766 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa...
	I0829 19:18:17.506783   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:17.506675   30766 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/ha-505269-m02.rawdisk...
	I0829 19:18:17.506820   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Writing magic tar header
	I0829 19:18:17.506860   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Writing SSH key tar header
	I0829 19:18:17.506877   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:17.506807   30766 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02 ...
	I0829 19:18:17.506966   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02
	I0829 19:18:17.507001   29935 main.go:141] libmachine: (ha-505269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02 (perms=drwx------)
	I0829 19:18:17.507018   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines
	I0829 19:18:17.507036   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:18:17.507054   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185
	I0829 19:18:17.507070   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 19:18:17.507088   29935 main.go:141] libmachine: (ha-505269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines (perms=drwxr-xr-x)
	I0829 19:18:17.507100   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Checking permissions on dir: /home/jenkins
	I0829 19:18:17.507114   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Checking permissions on dir: /home
	I0829 19:18:17.507124   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Skipping /home - not owner
	I0829 19:18:17.507137   29935 main.go:141] libmachine: (ha-505269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube (perms=drwxr-xr-x)
	I0829 19:18:17.507146   29935 main.go:141] libmachine: (ha-505269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185 (perms=drwxrwxr-x)
	I0829 19:18:17.507162   29935 main.go:141] libmachine: (ha-505269-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 19:18:17.507173   29935 main.go:141] libmachine: (ha-505269-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
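	The permission-fixing walk above ascends from the machine directory toward /, setting the executable (search) bits on every directory the current user owns and skipping the rest. A Linux-only Go sketch of that behaviour; fixPermissions is an illustrative helper, not minikube's code:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"syscall"
	)

	// fixPermissions walks from dir up to the filesystem root, chmod-ing +x
	// on owned directories and skipping the rest, matching the "Checking
	// permissions on dir" / "Skipping /home - not owner" lines above.
	func fixPermissions(dir string) error {
		uid := os.Getuid()
		for d := dir; ; d = filepath.Dir(d) {
			fi, err := os.Stat(d)
			if err != nil {
				return err
			}
			st, ok := fi.Sys().(*syscall.Stat_t)
			if !ok || int(st.Uid) != uid {
				fmt.Printf("Skipping %s - not owner\n", d)
			} else if err := os.Chmod(d, fi.Mode().Perm()|0o111); err != nil {
				return err
			}
			if d == filepath.Dir(d) { // reached "/"
				return nil
			}
		}
	}

	func main() {
		if err := fixPermissions(os.Getenv("HOME") + "/.minikube/machines"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}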
	I0829 19:18:17.507218   29935 main.go:141] libmachine: (ha-505269-m02) Creating domain...
	I0829 19:18:17.508099   29935 main.go:141] libmachine: (ha-505269-m02) define libvirt domain using xml: 
	I0829 19:18:17.508111   29935 main.go:141] libmachine: (ha-505269-m02) <domain type='kvm'>
	I0829 19:18:17.508118   29935 main.go:141] libmachine: (ha-505269-m02)   <name>ha-505269-m02</name>
	I0829 19:18:17.508122   29935 main.go:141] libmachine: (ha-505269-m02)   <memory unit='MiB'>2200</memory>
	I0829 19:18:17.508128   29935 main.go:141] libmachine: (ha-505269-m02)   <vcpu>2</vcpu>
	I0829 19:18:17.508132   29935 main.go:141] libmachine: (ha-505269-m02)   <features>
	I0829 19:18:17.508137   29935 main.go:141] libmachine: (ha-505269-m02)     <acpi/>
	I0829 19:18:17.508142   29935 main.go:141] libmachine: (ha-505269-m02)     <apic/>
	I0829 19:18:17.508146   29935 main.go:141] libmachine: (ha-505269-m02)     <pae/>
	I0829 19:18:17.508151   29935 main.go:141] libmachine: (ha-505269-m02)     
	I0829 19:18:17.508157   29935 main.go:141] libmachine: (ha-505269-m02)   </features>
	I0829 19:18:17.508161   29935 main.go:141] libmachine: (ha-505269-m02)   <cpu mode='host-passthrough'>
	I0829 19:18:17.508166   29935 main.go:141] libmachine: (ha-505269-m02)   
	I0829 19:18:17.508170   29935 main.go:141] libmachine: (ha-505269-m02)   </cpu>
	I0829 19:18:17.508175   29935 main.go:141] libmachine: (ha-505269-m02)   <os>
	I0829 19:18:17.508183   29935 main.go:141] libmachine: (ha-505269-m02)     <type>hvm</type>
	I0829 19:18:17.508188   29935 main.go:141] libmachine: (ha-505269-m02)     <boot dev='cdrom'/>
	I0829 19:18:17.508201   29935 main.go:141] libmachine: (ha-505269-m02)     <boot dev='hd'/>
	I0829 19:18:17.508211   29935 main.go:141] libmachine: (ha-505269-m02)     <bootmenu enable='no'/>
	I0829 19:18:17.508230   29935 main.go:141] libmachine: (ha-505269-m02)   </os>
	I0829 19:18:17.508239   29935 main.go:141] libmachine: (ha-505269-m02)   <devices>
	I0829 19:18:17.508253   29935 main.go:141] libmachine: (ha-505269-m02)     <disk type='file' device='cdrom'>
	I0829 19:18:17.508270   29935 main.go:141] libmachine: (ha-505269-m02)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/boot2docker.iso'/>
	I0829 19:18:17.508278   29935 main.go:141] libmachine: (ha-505269-m02)       <target dev='hdc' bus='scsi'/>
	I0829 19:18:17.508284   29935 main.go:141] libmachine: (ha-505269-m02)       <readonly/>
	I0829 19:18:17.508291   29935 main.go:141] libmachine: (ha-505269-m02)     </disk>
	I0829 19:18:17.508298   29935 main.go:141] libmachine: (ha-505269-m02)     <disk type='file' device='disk'>
	I0829 19:18:17.508317   29935 main.go:141] libmachine: (ha-505269-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 19:18:17.508345   29935 main.go:141] libmachine: (ha-505269-m02)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/ha-505269-m02.rawdisk'/>
	I0829 19:18:17.508368   29935 main.go:141] libmachine: (ha-505269-m02)       <target dev='hda' bus='virtio'/>
	I0829 19:18:17.508377   29935 main.go:141] libmachine: (ha-505269-m02)     </disk>
	I0829 19:18:17.508394   29935 main.go:141] libmachine: (ha-505269-m02)     <interface type='network'>
	I0829 19:18:17.508408   29935 main.go:141] libmachine: (ha-505269-m02)       <source network='mk-ha-505269'/>
	I0829 19:18:17.508418   29935 main.go:141] libmachine: (ha-505269-m02)       <model type='virtio'/>
	I0829 19:18:17.508429   29935 main.go:141] libmachine: (ha-505269-m02)     </interface>
	I0829 19:18:17.508443   29935 main.go:141] libmachine: (ha-505269-m02)     <interface type='network'>
	I0829 19:18:17.508459   29935 main.go:141] libmachine: (ha-505269-m02)       <source network='default'/>
	I0829 19:18:17.508469   29935 main.go:141] libmachine: (ha-505269-m02)       <model type='virtio'/>
	I0829 19:18:17.508477   29935 main.go:141] libmachine: (ha-505269-m02)     </interface>
	I0829 19:18:17.508488   29935 main.go:141] libmachine: (ha-505269-m02)     <serial type='pty'>
	I0829 19:18:17.508500   29935 main.go:141] libmachine: (ha-505269-m02)       <target port='0'/>
	I0829 19:18:17.508510   29935 main.go:141] libmachine: (ha-505269-m02)     </serial>
	I0829 19:18:17.508574   29935 main.go:141] libmachine: (ha-505269-m02)     <console type='pty'>
	I0829 19:18:17.508610   29935 main.go:141] libmachine: (ha-505269-m02)       <target type='serial' port='0'/>
	I0829 19:18:17.508630   29935 main.go:141] libmachine: (ha-505269-m02)     </console>
	I0829 19:18:17.508636   29935 main.go:141] libmachine: (ha-505269-m02)     <rng model='virtio'>
	I0829 19:18:17.508645   29935 main.go:141] libmachine: (ha-505269-m02)       <backend model='random'>/dev/random</backend>
	I0829 19:18:17.508651   29935 main.go:141] libmachine: (ha-505269-m02)     </rng>
	I0829 19:18:17.508656   29935 main.go:141] libmachine: (ha-505269-m02)     
	I0829 19:18:17.508664   29935 main.go:141] libmachine: (ha-505269-m02)     
	I0829 19:18:17.508669   29935 main.go:141] libmachine: (ha-505269-m02)   </devices>
	I0829 19:18:17.508680   29935 main.go:141] libmachine: (ha-505269-m02) </domain>
	I0829 19:18:17.508689   29935 main.go:141] libmachine: (ha-505269-m02) 
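	The <domain> document above is then handed to libvirt: the driver defines the domain from XML and boots it. A minimal sketch using the libvirt Go bindings (libvirt.org/go/libvirt, which need the libvirt C headers at build time); the truncated domainXML placeholder stands in for the full document printed above:

	package main

	import (
		"fmt"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// Matches KVMQemuURI:qemu:///system in the machine config above.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// The full <domain type='kvm'>...</domain> document goes here.
		domainXML := "<domain type='kvm'>...</domain>"

		dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boots the VM ("Creating domain...")
			panic(err)
		}
		fmt.Println("domain started")
	}

	Defining and creating are separate libvirt operations, which is why "Creating domain..." appears twice in the log: once to define, once to start.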
	I0829 19:18:17.515226   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:d3:7f:c5 in network default
	I0829 19:18:17.515807   29935 main.go:141] libmachine: (ha-505269-m02) Ensuring networks are active...
	I0829 19:18:17.515840   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:17.516459   29935 main.go:141] libmachine: (ha-505269-m02) Ensuring network default is active
	I0829 19:18:17.516883   29935 main.go:141] libmachine: (ha-505269-m02) Ensuring network mk-ha-505269 is active
	I0829 19:18:17.517209   29935 main.go:141] libmachine: (ha-505269-m02) Getting domain xml...
	I0829 19:18:17.518082   29935 main.go:141] libmachine: (ha-505269-m02) Creating domain...
	I0829 19:18:18.727292   29935 main.go:141] libmachine: (ha-505269-m02) Waiting to get IP...
	I0829 19:18:18.728041   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:18.728417   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:18.728459   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:18.728405   30766 retry.go:31] will retry after 257.317268ms: waiting for machine to come up
	I0829 19:18:18.986959   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:18.987385   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:18.987411   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:18.987357   30766 retry.go:31] will retry after 254.624589ms: waiting for machine to come up
	I0829 19:18:19.243886   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:19.244406   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:19.244432   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:19.244326   30766 retry.go:31] will retry after 465.137393ms: waiting for machine to come up
	I0829 19:18:19.710980   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:19.711406   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:19.711434   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:19.711357   30766 retry.go:31] will retry after 421.01646ms: waiting for machine to come up
	I0829 19:18:20.133506   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:20.133931   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:20.133954   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:20.133896   30766 retry.go:31] will retry after 665.095868ms: waiting for machine to come up
	I0829 19:18:20.800645   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:20.801073   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:20.801103   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:20.801016   30766 retry.go:31] will retry after 771.303274ms: waiting for machine to come up
	I0829 19:18:21.573835   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:21.574225   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:21.574245   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:21.574188   30766 retry.go:31] will retry after 1.037740689s: waiting for machine to come up
	I0829 19:18:22.613724   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:22.614106   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:22.614128   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:22.614066   30766 retry.go:31] will retry after 1.332280696s: waiting for machine to come up
	I0829 19:18:23.947614   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:23.948022   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:23.948049   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:23.947980   30766 retry.go:31] will retry after 1.862236314s: waiting for machine to come up
	I0829 19:18:25.812946   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:25.813370   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:25.813391   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:25.813327   30766 retry.go:31] will retry after 1.70488661s: waiting for machine to come up
	I0829 19:18:27.520272   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:27.520750   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:27.520777   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:27.520712   30766 retry.go:31] will retry after 1.968849341s: waiting for machine to come up
	I0829 19:18:29.491671   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:29.492113   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:29.492135   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:29.492078   30766 retry.go:31] will retry after 3.419516708s: waiting for machine to come up
	I0829 19:18:32.913606   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:32.914076   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:32.914102   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:32.914032   30766 retry.go:31] will retry after 3.557791272s: waiting for machine to come up
	I0829 19:18:36.475527   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:36.475977   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find current IP address of domain ha-505269-m02 in network mk-ha-505269
	I0829 19:18:36.475995   29935 main.go:141] libmachine: (ha-505269-m02) DBG | I0829 19:18:36.475950   30766 retry.go:31] will retry after 5.363647101s: waiting for machine to come up
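	The "will retry after ..." lines above come from a retry helper that sleeps a randomized, roughly doubling interval between attempts while waiting for the DHCP lease. A self-contained Go sketch of that pattern; retryWithBackoff and the jitter factors are illustrative, not retry.go's exact implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds or maxWait elapses,
	// sleeping a jittered, roughly doubling interval between attempts, the
	// pattern behind the "will retry after 257ms ... 5.36s" lines above.
	func retryWithBackoff(fn func() error, maxWait time.Duration) error {
		deadline := time.Now().Add(maxWait)
		wait := 250 * time.Millisecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("gave up: %w", err)
			}
			jittered := wait/2 + time.Duration(rand.Int63n(int64(wait))) // 0.5x-1.5x
			fmt.Printf("will retry after %s: %v\n", jittered, err)
			time.Sleep(jittered)
			wait *= 2
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 5 {
				return errors.New("unable to find current IP address")
			}
			return nil
		}, 2*time.Minute)
		fmt.Println(err, "after", attempts, "attempts")
	}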
	I0829 19:18:41.844946   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:41.845429   29935 main.go:141] libmachine: (ha-505269-m02) Found IP for machine: 192.168.39.68
	I0829 19:18:41.845445   29935 main.go:141] libmachine: (ha-505269-m02) Reserving static IP address...
	I0829 19:18:41.845454   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has current primary IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:41.845751   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find host DHCP lease matching {name: "ha-505269-m02", mac: "52:54:00:8f:ef:8c", ip: "192.168.39.68"} in network mk-ha-505269
	I0829 19:18:41.914922   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Getting to WaitForSSH function...
	I0829 19:18:41.914952   29935 main.go:141] libmachine: (ha-505269-m02) Reserved static IP address: 192.168.39.68
	I0829 19:18:41.914966   29935 main.go:141] libmachine: (ha-505269-m02) Waiting for SSH to be available...
	I0829 19:18:41.917290   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:41.917533   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269
	I0829 19:18:41.917567   29935 main.go:141] libmachine: (ha-505269-m02) DBG | unable to find defined IP address of network mk-ha-505269 interface with MAC address 52:54:00:8f:ef:8c
	I0829 19:18:41.917682   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Using SSH client type: external
	I0829 19:18:41.917710   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa (-rw-------)
	I0829 19:18:41.917745   29935 main.go:141] libmachine: (ha-505269-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:18:41.917773   29935 main.go:141] libmachine: (ha-505269-m02) DBG | About to run SSH command:
	I0829 19:18:41.917787   29935 main.go:141] libmachine: (ha-505269-m02) DBG | exit 0
	I0829 19:18:41.921232   29935 main.go:141] libmachine: (ha-505269-m02) DBG | SSH cmd err, output: exit status 255: 
	I0829 19:18:41.921246   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0829 19:18:41.921253   29935 main.go:141] libmachine: (ha-505269-m02) DBG | command : exit 0
	I0829 19:18:41.921278   29935 main.go:141] libmachine: (ha-505269-m02) DBG | err     : exit status 255
	I0829 19:18:41.921286   29935 main.go:141] libmachine: (ha-505269-m02) DBG | output  : 
	I0829 19:18:44.922856   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Getting to WaitForSSH function...
	I0829 19:18:44.925424   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:44.925805   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:44.925830   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:44.925881   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Using SSH client type: external
	I0829 19:18:44.925896   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa (-rw-------)
	I0829 19:18:44.925943   29935 main.go:141] libmachine: (ha-505269-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:18:44.925963   29935 main.go:141] libmachine: (ha-505269-m02) DBG | About to run SSH command:
	I0829 19:18:44.925971   29935 main.go:141] libmachine: (ha-505269-m02) DBG | exit 0
	I0829 19:18:45.050615   29935 main.go:141] libmachine: (ha-505269-m02) DBG | SSH cmd err, output: <nil>: 
	I0829 19:18:45.050899   29935 main.go:141] libmachine: (ha-505269-m02) KVM machine creation complete!
	I0829 19:18:45.051366   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetConfigRaw
	I0829 19:18:45.051902   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:45.052089   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:45.052233   29935 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 19:18:45.052246   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetState
	I0829 19:18:45.053465   29935 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 19:18:45.053485   29935 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 19:18:45.053498   29935 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 19:18:45.053506   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:45.055414   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.055689   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.055717   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.055861   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:45.056036   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.056175   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.056313   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:45.056473   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:18:45.056766   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0829 19:18:45.056784   29935 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 19:18:45.157985   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
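	The liveness probe used throughout is simply running `exit 0` over SSH with the generated key; the 255 earlier meant the connection itself failed, while <nil> here means the guest shell ran. A sketch of the same probe with golang.org/x/crypto/ssh (sshReady is an illustrative helper; host-key checking is disabled to match StrictHostKeyChecking=no above):

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// sshReady dials the new VM and runs `exit 0`, the same probe the log
	// performs, first via external ssh and then via the native client.
	func sshReady(addr, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // StrictHostKeyChecking=no
			Timeout:         10 * time.Second,            // ConnectTimeout=10
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		return sess.Run("exit 0")
	}

	func main() {
		fmt.Println(sshReady("192.168.39.68:22",
			os.Getenv("HOME")+"/.minikube/machines/ha-505269-m02/id_rsa"))
	}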
	I0829 19:18:45.158007   29935 main.go:141] libmachine: Detecting the provisioner...
	I0829 19:18:45.158013   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:45.160876   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.161252   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.161280   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.161430   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:45.161612   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.161764   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.161907   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:45.162032   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:18:45.162183   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0829 19:18:45.162193   29935 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 19:18:45.263343   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 19:18:45.263407   29935 main.go:141] libmachine: found compatible host: buildroot
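	Provisioner detection is just `cat /etc/os-release` and matching the ID field, "buildroot" here. A small sketch of that parse; detectProvisioner is an illustrative helper name:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// detectProvisioner reads an os-release file and returns its ID field,
	// mirroring the `cat /etc/os-release` step above.
	func detectProvisioner(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
				return strings.Trim(v, `"`), nil
			}
		}
		return "", fmt.Errorf("no ID field in %s", path)
	}

	func main() {
		id, err := detectProvisioner("/etc/os-release")
		fmt.Println(id, err) // prints "buildroot" on the minikube guest
	}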
	I0829 19:18:45.263418   29935 main.go:141] libmachine: Provisioning with buildroot...
	I0829 19:18:45.263429   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetMachineName
	I0829 19:18:45.263676   29935 buildroot.go:166] provisioning hostname "ha-505269-m02"
	I0829 19:18:45.263694   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetMachineName
	I0829 19:18:45.263810   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:45.266331   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.266705   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.266733   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.266874   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:45.267050   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.267193   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.267339   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:45.267601   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:18:45.268263   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0829 19:18:45.268292   29935 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-505269-m02 && echo "ha-505269-m02" | sudo tee /etc/hostname
	I0829 19:18:45.385767   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-505269-m02
	
	I0829 19:18:45.385796   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:45.388371   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.388707   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.388731   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.388910   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:45.389092   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.389215   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.389347   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:45.389492   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:18:45.389700   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0829 19:18:45.389717   29935 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-505269-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-505269-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-505269-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:18:45.499244   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
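	The shell snippet above is an idempotent hosts-file edit: if no line already maps the hostname, it rewrites the 127.0.1.1 entry in place or appends one. A Go sketch of the same logic operating on a local file rather than over SSH (ensureHostname is an illustrative helper):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostname mirrors the /etc/hosts script above: no-op if the name
	// is already mapped, otherwise rewrite or append the 127.0.1.1 entry.
	func ensureHostname(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
			return nil // already present, like grep -xq '.*\s<name>'
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		entry := "127.0.1.1 " + name
		var out string
		if loopback.Match(data) {
			out = loopback.ReplaceAllString(string(data), entry)
		} else {
			out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
		}
		return os.WriteFile(path, []byte(out), 0o644)
	}

	func main() {
		fmt.Println(ensureHostname("/etc/hosts", "ha-505269-m02"))
	}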
	I0829 19:18:45.499269   29935 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 19:18:45.499283   29935 buildroot.go:174] setting up certificates
	I0829 19:18:45.499292   29935 provision.go:84] configureAuth start
	I0829 19:18:45.499299   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetMachineName
	I0829 19:18:45.499545   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:18:45.502213   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.502584   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.502613   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.502734   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:45.505024   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.505377   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.505406   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.505526   29935 provision.go:143] copyHostCerts
	I0829 19:18:45.505558   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:18:45.505591   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 19:18:45.505600   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:18:45.505676   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 19:18:45.505757   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:18:45.505775   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 19:18:45.505782   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:18:45.505806   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 19:18:45.505861   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:18:45.505878   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 19:18:45.505885   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:18:45.505907   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 19:18:45.505963   29935 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.ha-505269-m02 san=[127.0.0.1 192.168.39.68 ha-505269-m02 localhost minikube]
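	The server cert generated above carries the listed SANs so the machine is reachable as any of its names or IPs. A compressed Go sketch of producing such a certificate with crypto/x509; minikube signs with its CA key (ca-key.pem), while this version is self-signed for brevity:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-505269-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config
			DNSNames:     []string{"ha-505269-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.68")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed here; minikube passes its CA cert and key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		fmt.Println(len(der), err)
	}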
	I0829 19:18:45.659845   29935 provision.go:177] copyRemoteCerts
	I0829 19:18:45.659894   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:18:45.659916   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:45.662522   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.662891   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.662919   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.663073   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:45.663266   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.663415   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:45.663521   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	I0829 19:18:45.744501   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 19:18:45.744568   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 19:18:45.768821   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 19:18:45.768885   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 19:18:45.792457   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 19:18:45.792524   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:18:45.816617   29935 provision.go:87] duration metric: took 317.314234ms to configureAuth
	I0829 19:18:45.816644   29935 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:18:45.816837   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:18:45.816925   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:45.819505   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.819990   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:45.820014   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:45.820120   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:45.820291   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.820435   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:45.820563   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:45.820726   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:18:45.820922   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0829 19:18:45.820945   29935 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:18:46.040607   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:18:46.040633   29935 main.go:141] libmachine: Checking connection to Docker...
	I0829 19:18:46.040641   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetURL
	I0829 19:18:46.041890   29935 main.go:141] libmachine: (ha-505269-m02) DBG | Using libvirt version 6000000
	I0829 19:18:46.044049   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.044409   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:46.044435   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.044632   29935 main.go:141] libmachine: Docker is up and running!
	I0829 19:18:46.044656   29935 main.go:141] libmachine: Reticulating splines...
	I0829 19:18:46.044664   29935 client.go:171] duration metric: took 28.863466105s to LocalClient.Create
	I0829 19:18:46.044683   29935 start.go:167] duration metric: took 28.863557501s to libmachine.API.Create "ha-505269"
	I0829 19:18:46.044698   29935 start.go:293] postStartSetup for "ha-505269-m02" (driver="kvm2")
	I0829 19:18:46.044709   29935 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:18:46.044733   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:46.044966   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:18:46.044986   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:46.047304   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.047633   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:46.047656   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.047794   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:46.047983   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:46.048150   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:46.048274   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	I0829 19:18:46.129008   29935 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:18:46.133154   29935 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:18:46.133176   29935 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 19:18:46.133235   29935 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 19:18:46.133355   29935 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 19:18:46.133369   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /etc/ssl/certs/183612.pem
	I0829 19:18:46.133476   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:18:46.143092   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:18:46.166129   29935 start.go:296] duration metric: took 121.41997ms for postStartSetup
	I0829 19:18:46.166172   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetConfigRaw
	I0829 19:18:46.166736   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:18:46.169520   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.169856   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:46.169886   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.170095   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:18:46.170341   29935 start.go:128] duration metric: took 29.006780061s to createHost
	I0829 19:18:46.170364   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:46.172864   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.173186   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:46.173206   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.173331   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:46.173504   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:46.173650   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:46.173882   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:46.174042   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:18:46.174192   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0829 19:18:46.174203   29935 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:18:46.275422   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724959126.256599612
	
	I0829 19:18:46.275447   29935 fix.go:216] guest clock: 1724959126.256599612
	I0829 19:18:46.275457   29935 fix.go:229] Guest: 2024-08-29 19:18:46.256599612 +0000 UTC Remote: 2024-08-29 19:18:46.170353909 +0000 UTC m=+78.244806111 (delta=86.245703ms)
	I0829 19:18:46.275474   29935 fix.go:200] guest clock delta is within tolerance: 86.245703ms
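	The clock check above runs `date +%s.%N` on the guest and compares it with the host's wall clock. A sketch of that delta computation, fed the exact values from the log (clockDelta is an illustrative helper; the 1s tolerance is an assumption, not minikube's threshold):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns the
	// skew relative to the host clock.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		sec, frac, _ := strings.Cut(strings.TrimSpace(guestOut), ".")
		s, err := strconv.ParseInt(sec, 10, 64)
		if err != nil {
			return 0, err
		}
		ns, err := strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return 0, err
		}
		return time.Unix(s, ns).Sub(host), nil
	}

	func main() {
		host := time.Unix(1724959126, 170353909) // the "Remote" timestamp in the log
		d, _ := clockDelta("1724959126.256599612", host)
		fmt.Printf("delta=%v within tolerance: %v\n", d, math.Abs(d.Seconds()) < 1)
		// prints delta=86.245703ms, matching the log's delta
	}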
	I0829 19:18:46.275479   29935 start.go:83] releasing machines lock for "ha-505269-m02", held for 29.112015583s
	I0829 19:18:46.275496   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:46.275720   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:18:46.278387   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.278696   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:46.278738   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.281118   29935 out.go:177] * Found network options:
	I0829 19:18:46.282408   29935 out.go:177]   - NO_PROXY=192.168.39.56
	W0829 19:18:46.283673   29935 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 19:18:46.283699   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:46.284172   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:46.284346   29935 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:18:46.284422   29935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:18:46.284467   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	W0829 19:18:46.284533   29935 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 19:18:46.284621   29935 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:18:46.284644   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:18:46.287165   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.287480   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:46.287507   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.287565   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.287640   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:46.287795   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:46.287948   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:46.287973   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:46.287995   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:46.288138   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	I0829 19:18:46.288151   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:18:46.288283   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:18:46.288401   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:18:46.288521   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	I0829 19:18:46.522694   29935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:18:46.528951   29935 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:18:46.529018   29935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:18:46.545655   29935 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:18:46.545682   29935 start.go:495] detecting cgroup driver to use...
	I0829 19:18:46.545755   29935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:18:46.563666   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:18:46.578069   29935 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:18:46.578140   29935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:18:46.591691   29935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:18:46.605288   29935 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:18:46.721052   29935 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:18:46.879427   29935 docker.go:233] disabling docker service ...
	I0829 19:18:46.879503   29935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:18:46.892990   29935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:18:46.905147   29935 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:18:47.024064   29935 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:18:47.135402   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:18:47.150718   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:18:47.171454   29935 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:18:47.171520   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:18:47.182255   29935 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:18:47.182314   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:18:47.193567   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:18:47.204104   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:18:47.214516   29935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:18:47.225095   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:18:47.235245   29935 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:18:47.252165   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
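
[Editor's note] The sed sequence above edits CRI-O's drop-in (/etc/crio/crio.conf.d/02-crio.conf) in place: pin the pause image, switch cgroup_manager to cgroupfs, force conmon_cgroup = "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A sketch of one such idempotent line rewrite in Go, with the file path and pattern taken from the log (running it requires root):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfLine replaces any existing assignment matching re with repl,
// mirroring `sed -i 's|^.*pause_image = .*$|...|'` from the log.
func setConfLine(path string, re *regexp.Regexp, repl string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := re.ReplaceAll(data, []byte(repl))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	err := setConfLine("/etc/crio/crio.conf.d/02-crio.conf",
		regexp.MustCompile(`(?m)^.*pause_image = .*$`),
		`pause_image = "registry.k8s.io/pause:3.10"`)
	fmt.Println(err)
}

The same helper applied with the cgroup_manager and conmon_cgroup patterns reproduces the rest of the sequence.
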
	I0829 19:18:47.262133   29935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:18:47.271342   29935 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:18:47.271394   29935 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:18:47.284521   29935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
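
[Editor's note] The status-255 sysctl above is expected on a fresh VM: /proc/sys/net/bridge/ only appears once br_netfilter is loaded, so the runner falls back to modprobe and then turns on IPv4 forwarding. A sketch of that check-then-load fallback (needs root; module and paths as in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback in the log: if the sysctl
// file is missing, load br_netfilter, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	const probe = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(probe); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}

func main() {
	fmt.Println(ensureBridgeNetfilter())
}
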
	I0829 19:18:47.293856   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:18:47.413333   29935 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:18:47.507972   29935 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:18:47.508048   29935 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:18:47.513387   29935 start.go:563] Will wait 60s for crictl version
	I0829 19:18:47.513436   29935 ssh_runner.go:195] Run: which crictl
	I0829 19:18:47.517331   29935 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:18:47.556504   29935 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:18:47.556589   29935 ssh_runner.go:195] Run: crio --version
	I0829 19:18:47.583593   29935 ssh_runner.go:195] Run: crio --version
	I0829 19:18:47.611532   29935 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:18:47.612939   29935 out.go:177]   - env NO_PROXY=192.168.39.56
	I0829 19:18:47.614130   29935 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:18:47.616737   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:47.617064   29935 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:18:31 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:18:47.617090   29935 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:18:47.617248   29935 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:18:47.621436   29935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
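
[Editor's note] The bash one-liner above is an idempotent upsert on /etc/hosts: drop any stale host.minikube.internal line, append the fresh mapping, and copy the temp file back over the original. The same logic in Go, with the hosts path as a parameter:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites hostsPath so exactly one line maps name to ip,
// like the `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` trick above.
func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, as grep -v does
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(upsertHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"))
}
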
	I0829 19:18:47.634030   29935 mustload.go:65] Loading cluster: ha-505269
	I0829 19:18:47.634197   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:18:47.634429   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:47.634463   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:47.648704   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
	I0829 19:18:47.649095   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:47.649539   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:47.649559   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:47.649846   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:47.650017   29935 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:18:47.651451   29935 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:18:47.651738   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:47.651769   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:47.665692   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33097
	I0829 19:18:47.666066   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:47.666478   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:47.666500   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:47.666814   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:47.667003   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:18:47.667151   29935 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269 for IP: 192.168.39.68
	I0829 19:18:47.667161   29935 certs.go:194] generating shared ca certs ...
	I0829 19:18:47.667174   29935 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:47.667290   29935 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 19:18:47.667326   29935 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 19:18:47.667336   29935 certs.go:256] generating profile certs ...
	I0829 19:18:47.667400   29935 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key
	I0829 19:18:47.667424   29935 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.fa582b21
	I0829 19:18:47.667437   29935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.fa582b21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.56 192.168.39.68 192.168.39.254]
	I0829 19:18:47.779724   29935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.fa582b21 ...
	I0829 19:18:47.779750   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.fa582b21: {Name:mk2f08942257e15c7321f8b69b6f00a9a29cc1ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:47.779938   29935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.fa582b21 ...
	I0829 19:18:47.779955   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.fa582b21: {Name:mk67cb8bda9cb7e09ef73f9ddd0a839032dff9ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:18:47.780057   29935 certs.go:381] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.fa582b21 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt
	I0829 19:18:47.780184   29935 certs.go:385] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.fa582b21 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key
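
[Editor's note] crypto.go just minted a fresh apiserver serving cert whose IP SANs cover the service VIP (10.96.0.1), loopback, both control-plane node IPs, and the kube-vip address 192.168.39.254, so clients can validate TLS against any of those endpoints. A condensed crypto/x509 sketch; it self-signs for brevity where minikube signs with its minikubeCA key:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s in this profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log: service VIP, loopback, both node IPs, kube-vip VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.56"), net.ParseIP("192.168.39.68"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
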
	I0829 19:18:47.780306   29935 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key
	I0829 19:18:47.780320   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 19:18:47.780332   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 19:18:47.780343   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 19:18:47.780357   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 19:18:47.780367   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 19:18:47.780380   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 19:18:47.780390   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 19:18:47.780402   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 19:18:47.780446   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 19:18:47.780472   29935 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 19:18:47.780482   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 19:18:47.780504   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 19:18:47.780526   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:18:47.780547   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 19:18:47.780582   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:18:47.780607   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /usr/share/ca-certificates/183612.pem
	I0829 19:18:47.780621   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:47.780633   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem -> /usr/share/ca-certificates/18361.pem
	I0829 19:18:47.780664   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:18:47.783577   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:47.783987   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:18:47.784014   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:47.784163   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:18:47.784380   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:18:47.784502   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:18:47.784657   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:18:47.862835   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0829 19:18:47.868350   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0829 19:18:47.879589   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0829 19:18:47.883909   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0829 19:18:47.893791   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0829 19:18:47.897869   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0829 19:18:47.907713   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0829 19:18:47.911588   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0829 19:18:47.921455   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0829 19:18:47.925543   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0829 19:18:47.935225   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0829 19:18:47.939147   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0829 19:18:47.948762   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:18:47.973508   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 19:18:47.996256   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:18:48.020027   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:18:48.043059   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0829 19:18:48.066056   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:18:48.089033   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:18:48.117817   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:18:48.141556   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 19:18:48.164685   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:18:48.187993   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 19:18:48.212550   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0829 19:18:48.231237   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0829 19:18:48.248744   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0829 19:18:48.266514   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0829 19:18:48.284259   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0829 19:18:48.302024   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0829 19:18:48.319866   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0829 19:18:48.338258   29935 ssh_runner.go:195] Run: openssl version
	I0829 19:18:48.344326   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:18:48.356785   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:48.361601   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:48.361665   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:18:48.367593   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:18:48.378547   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 19:18:48.389272   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 19:18:48.393770   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 19:18:48.393815   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 19:18:48.401273   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 19:18:48.412981   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 19:18:48.424575   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 19:18:48.428932   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 19:18:48.428973   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 19:18:48.434621   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
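
[Editor's note] The openssl x509 -hash / ln -fs pairs above install each CA under /etc/ssl/certs as <subject-hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0), the naming scheme OpenSSL uses to locate a trust anchor by subject hash. A sketch that shells out to openssl for the hash, just as the log does (paths are placeholders):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash installs pemPath into certDir under "<subject-hash>.0",
// mirroring `openssl x509 -hash -noout` followed by `ln -fs` above.
func linkByHash(pemPath, certDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // ln -f semantics: replace any stale link
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}
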
	I0829 19:18:48.445294   29935 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:18:48.449364   29935 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 19:18:48.449413   29935 kubeadm.go:934] updating node {m02 192.168.39.68 8443 v1.31.0 crio true true} ...
	I0829 19:18:48.449506   29935 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-505269-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:18:48.449531   29935 kube-vip.go:115] generating kube-vip config ...
	I0829 19:18:48.449567   29935 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 19:18:48.466847   29935 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 19:18:48.466897   29935 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
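
[Editor's note] The manifest above runs kube-vip as a static pod on each control plane: it claims the VIP 192.168.39.254 via ARP (vip_arp), load-balances :8443 across the apiservers (lb_enable/lb_port), and elects a leader through the plndr-cp-lock lease. The 5/3/1 timing trio follows the usual client-go leader-election ordering, roughly leaseDuration > renewDeadline > retryPeriod. A tiny illustrative check of that invariant; the type and field names here are hypothetical, not kube-vip's API:

package main

import (
	"fmt"
	"time"
)

// leaseTimings mirrors the vip_leaseduration/vip_renewdeadline/vip_retryperiod
// env vars above. The ordering rule is the one client-go leader election
// enforces (roughly; it also applies a jitter factor to the retry period).
type leaseTimings struct {
	LeaseDuration, RenewDeadline, RetryPeriod time.Duration
}

func (t leaseTimings) validate() error {
	if t.LeaseDuration <= t.RenewDeadline {
		return fmt.Errorf("leaseDuration %v must exceed renewDeadline %v", t.LeaseDuration, t.RenewDeadline)
	}
	if t.RenewDeadline <= t.RetryPeriod {
		return fmt.Errorf("renewDeadline %v must exceed retryPeriod %v", t.RenewDeadline, t.RetryPeriod)
	}
	return nil
}

func main() {
	// The 5s/3s/1s values from the manifest above pass the check.
	fmt.Println(leaseTimings{5 * time.Second, 3 * time.Second, time.Second}.validate())
}
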
	I0829 19:18:48.466950   29935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:18:48.477209   29935 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0829 19:18:48.477270   29935 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0829 19:18:48.487325   29935 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0829 19:18:48.487357   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 19:18:48.487381   29935 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0829 19:18:48.487427   29935 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0829 19:18:48.487442   29935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 19:18:48.491913   29935 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0829 19:18:48.491934   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0829 19:18:49.305833   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 19:18:49.305931   29935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 19:18:49.310975   29935 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0829 19:18:49.311006   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0829 19:18:49.333293   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:18:49.358083   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 19:18:49.358170   29935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 19:18:49.368962   29935 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0829 19:18:49.369003   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
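
[Editor's note] Each binary above is fetched with ?checksum=file:<url>.sha256, i.e. the downloader also pulls the published SHA-256 and refuses to cache the artifact on a mismatch. A minimal stdlib sketch of that verification, using the kubectl URL from the log:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

// download saves url to dest only if its SHA-256 matches url+".sha256".
func download(url, dest string) error {
	body, err := fetch(url)
	if err != nil {
		return err
	}
	sum, err := fetch(url + ".sha256")
	if err != nil {
		return err
	}
	want := strings.Fields(string(sum))[0] // handles bare digest or "digest  filename"
	got := sha256.Sum256(body)
	if hex.EncodeToString(got[:]) != want {
		return fmt.Errorf("checksum mismatch for %s", url)
	}
	return os.WriteFile(dest, body, 0o755)
}

func main() {
	err := download("https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl", "kubectl")
	fmt.Println(err)
}
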
	I0829 19:18:49.861662   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0829 19:18:49.871762   29935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0829 19:18:49.888780   29935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:18:49.904605   29935 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0829 19:18:49.920339   29935 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 19:18:49.924279   29935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:18:49.935982   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:18:50.057650   29935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:18:50.075230   29935 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:18:50.075718   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:18:50.075772   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:18:50.090220   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35719
	I0829 19:18:50.090630   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:18:50.091089   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:18:50.091110   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:18:50.091420   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:18:50.091638   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:18:50.091792   29935 start.go:317] joinCluster: &{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:18:50.091913   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0829 19:18:50.091937   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:18:50.095032   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:50.095451   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:18:50.095478   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:18:50.095622   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:18:50.095766   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:18:50.095939   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:18:50.096079   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:18:50.240803   29935 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:18:50.240856   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o9vyfs.13tw66289wnr77dl --discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-505269-m02 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443"
	I0829 19:19:11.793044   29935 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o9vyfs.13tw66289wnr77dl --discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-505269-m02 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443": (21.552162288s)
	I0829 19:19:11.793081   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0829 19:19:12.389852   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-505269-m02 minikube.k8s.io/updated_at=2024_08_29T19_19_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=ha-505269 minikube.k8s.io/primary=false
	I0829 19:19:12.503770   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-505269-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0829 19:19:12.635366   29935 start.go:319] duration metric: took 22.543571284s to joinCluster
	I0829 19:19:12.635478   29935 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:19:12.635736   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:19:12.637310   29935 out.go:177] * Verifying Kubernetes components...
	I0829 19:19:12.638479   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:19:12.927514   29935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:19:12.982846   29935 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:19:12.983101   29935 kapi.go:59] client config for ha-505269: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.crt", KeyFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key", CAFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0829 19:19:12.983162   29935 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.56:8443
	I0829 19:19:12.983379   29935 node_ready.go:35] waiting up to 6m0s for node "ha-505269-m02" to be "Ready" ...
	I0829 19:19:12.983497   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:12.983507   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:12.983518   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:12.983525   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:12.992919   29935 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0829 19:19:13.484291   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:13.484317   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:13.484328   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:13.484336   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:13.493976   29935 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0829 19:19:13.983790   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:13.983815   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:13.983826   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:13.983831   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:13.988882   29935 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 19:19:14.484067   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:14.484129   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:14.484149   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:14.484155   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:14.488475   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:19:14.984566   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:14.984588   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:14.984599   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:14.984605   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:14.988346   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:14.989185   29935 node_ready.go:53] node "ha-505269-m02" has status "Ready":"False"
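
[Editor's note] node_ready.go above is a plain poll loop: GET the node every ~500ms and read its NodeReady condition until it reports True (which happens at 19:19:30, 17.5s in). The same wait expressed with client-go, with the kubeconfig path as a placeholder:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6m, like node_ready.go above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			n, err := cs.CoreV1().Nodes().Get(ctx, "ha-505269-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not yet"; keep polling
			}
			return nodeReady(n), nil
		})
	fmt.Println("ready:", err == nil)
}
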
	I0829 19:19:15.483599   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:15.483623   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:15.483630   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:15.483635   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:15.487458   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:15.983970   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:15.983994   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:15.984005   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:15.984010   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:15.987359   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:16.484518   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:16.484547   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:16.484557   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:16.484562   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:16.488208   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:16.983957   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:16.983981   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:16.983989   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:16.984000   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:16.987155   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:17.484251   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:17.484278   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:17.484290   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:17.484297   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:17.487514   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:17.488266   29935 node_ready.go:53] node "ha-505269-m02" has status "Ready":"False"
	I0829 19:19:17.983838   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:17.983859   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:17.983866   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:17.983871   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:17.987448   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:18.484362   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:18.484382   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:18.484390   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:18.484395   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:18.487739   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:18.984627   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:18.984651   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:18.984662   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:18.984670   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:18.988075   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:19.483831   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:19.483853   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:19.483859   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:19.483862   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:19.491210   29935 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0829 19:19:19.491993   29935 node_ready.go:53] node "ha-505269-m02" has status "Ready":"False"
	I0829 19:19:19.984493   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:19.984517   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:19.984527   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:19.984533   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:19.988095   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:20.484478   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:20.484500   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:20.484508   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:20.484513   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:20.487975   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:20.984009   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:20.984038   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:20.984049   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:20.984057   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:20.987900   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:21.484630   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:21.484651   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:21.484662   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:21.484667   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:21.487892   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:21.983902   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:21.983923   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:21.983931   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:21.983936   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:21.986923   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:21.987597   29935 node_ready.go:53] node "ha-505269-m02" has status "Ready":"False"
	I0829 19:19:22.484420   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:22.484442   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:22.484449   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:22.484452   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:22.488097   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:22.984539   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:22.984564   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:22.984583   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:22.984588   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:23.021416   29935 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0829 19:19:23.484247   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:23.484271   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:23.484280   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:23.484284   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:23.487446   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:23.984448   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:23.984471   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:23.984479   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:23.984482   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:23.987793   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:23.988463   29935 node_ready.go:53] node "ha-505269-m02" has status "Ready":"False"
	I0829 19:19:24.483764   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:24.483793   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:24.483804   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:24.483809   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:24.487081   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:24.983915   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:24.983940   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:24.983951   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:24.983959   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:24.987236   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:25.483697   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:25.483721   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:25.483732   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:25.483739   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:25.486975   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:25.983859   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:25.983881   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:25.983894   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:25.983900   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:25.987215   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:26.484055   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:26.484078   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:26.484086   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:26.484090   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:26.487410   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:26.487936   29935 node_ready.go:53] node "ha-505269-m02" has status "Ready":"False"
	I0829 19:19:26.984403   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:26.984429   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:26.984437   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:26.984441   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:26.987729   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:27.484140   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:27.484164   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:27.484172   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:27.484177   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:27.487346   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:27.984424   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:27.984444   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:27.984452   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:27.984457   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:27.987693   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:28.484284   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:28.484309   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:28.484317   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:28.484320   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:28.487409   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:28.984452   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:28.984478   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:28.984489   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:28.984495   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:28.987627   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:28.988318   29935 node_ready.go:53] node "ha-505269-m02" has status "Ready":"False"
	I0829 19:19:29.483632   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:29.483654   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:29.483663   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:29.483668   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:29.488370   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:19:29.984454   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:29.984488   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:29.984497   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:29.984501   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:29.988205   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:30.483741   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:30.483770   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.483777   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.483780   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.487393   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:30.488144   29935 node_ready.go:49] node "ha-505269-m02" has status "Ready":"True"
	I0829 19:19:30.488158   29935 node_ready.go:38] duration metric: took 17.504748341s for node "ha-505269-m02" to be "Ready" ...
	I0829 19:19:30.488168   29935 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:19:30.488244   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:19:30.488255   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.488265   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.488272   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.493138   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:19:30.499875   29935 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-bqqq5" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.499961   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-bqqq5
	I0829 19:19:30.499974   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.499983   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.499987   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.502951   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:30.503549   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:30.503564   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.503570   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.503574   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.508279   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:19:30.508759   29935 pod_ready.go:93] pod "coredns-6f6b679f8f-bqqq5" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:30.508774   29935 pod_ready.go:82] duration metric: took 8.878644ms for pod "coredns-6f6b679f8f-bqqq5" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.508781   29935 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qjgfg" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.508820   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qjgfg
	I0829 19:19:30.508828   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.508835   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.508838   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.515929   29935 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0829 19:19:30.516565   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:30.516586   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.516594   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.516602   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.522485   29935 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 19:19:30.522984   29935 pod_ready.go:93] pod "coredns-6f6b679f8f-qjgfg" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:30.523000   29935 pod_ready.go:82] duration metric: took 14.212396ms for pod "coredns-6f6b679f8f-qjgfg" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.523009   29935 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.523050   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/etcd-ha-505269
	I0829 19:19:30.523057   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.523063   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.523067   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.526363   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:30.526920   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:30.526932   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.526938   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.526942   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.530127   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:30.530679   29935 pod_ready.go:93] pod "etcd-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:30.530695   29935 pod_ready.go:82] duration metric: took 7.679883ms for pod "etcd-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.530703   29935 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.530751   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/etcd-ha-505269-m02
	I0829 19:19:30.530761   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.530770   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.530780   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.533168   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:30.533634   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:30.533647   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.533653   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.533659   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.535697   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:30.536288   29935 pod_ready.go:93] pod "etcd-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:30.536302   29935 pod_ready.go:82] duration metric: took 5.593438ms for pod "etcd-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.536319   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:30.684671   29935 request.go:632] Waited for 148.298173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269
	I0829 19:19:30.684742   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269
	I0829 19:19:30.684747   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.684754   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.684760   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.690261   29935 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 19:19:30.883917   29935 request.go:632] Waited for 192.282533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:30.883962   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:30.883967   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:30.883974   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:30.883979   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:30.886862   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:30.887428   29935 pod_ready.go:93] pod "kube-apiserver-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:30.887445   29935 pod_ready.go:82] duration metric: took 351.117841ms for pod "kube-apiserver-ha-505269" in "kube-system" namespace to be "Ready" ...
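The "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's token-bucket rate limiter, not from the apiserver: once the burst is spent, each request sleeps locally before being sent. A sketch of where that limiter is configured (QPS=5/Burst=10 are client-go's documented defaults; whether minikube overrides them is not visible in this log):

```go
package clientrate

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset with an explicit token-bucket limit.
// Exceeding QPS/Burst produces the "Waited ... due to client-side
// throttling" messages seen in the log.
func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5    // steady-state requests per second
	cfg.Burst = 10 // extra requests allowed in a burst
	return kubernetes.NewForConfig(cfg)
}
```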
	I0829 19:19:30.887454   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:31.084614   29935 request.go:632] Waited for 197.107835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269-m02
	I0829 19:19:31.084666   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269-m02
	I0829 19:19:31.084674   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:31.084684   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:31.084696   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:31.088180   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:31.284480   29935 request.go:632] Waited for 195.381883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:31.284527   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:31.284532   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:31.284539   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:31.284549   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:31.288024   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:31.288495   29935 pod_ready.go:93] pod "kube-apiserver-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:31.288514   29935 pod_ready.go:82] duration metric: took 401.053747ms for pod "kube-apiserver-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:31.288523   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:31.484606   29935 request.go:632] Waited for 196.009661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269
	I0829 19:19:31.484673   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269
	I0829 19:19:31.484680   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:31.484690   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:31.484695   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:31.488146   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:31.683934   29935 request.go:632] Waited for 195.278493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:31.683985   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:31.683990   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:31.684000   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:31.684003   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:31.687022   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:31.687701   29935 pod_ready.go:93] pod "kube-controller-manager-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:31.687722   29935 pod_ready.go:82] duration metric: took 399.189881ms for pod "kube-controller-manager-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:31.687731   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:31.884785   29935 request.go:632] Waited for 196.973338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269-m02
	I0829 19:19:31.884849   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269-m02
	I0829 19:19:31.884857   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:31.884868   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:31.884875   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:31.888378   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:32.084563   29935 request.go:632] Waited for 195.362829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:32.084622   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:32.084629   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:32.084637   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:32.084648   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:32.088093   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:32.088602   29935 pod_ready.go:93] pod "kube-controller-manager-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:32.088626   29935 pod_ready.go:82] duration metric: took 400.88696ms for pod "kube-controller-manager-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:32.088640   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hx822" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:32.284094   29935 request.go:632] Waited for 195.356676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hx822
	I0829 19:19:32.284150   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hx822
	I0829 19:19:32.284157   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:32.284168   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:32.284179   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:32.287149   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:32.484168   29935 request.go:632] Waited for 196.353084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:32.484222   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:32.484227   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:32.484241   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:32.484257   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:32.488123   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:32.488976   29935 pod_ready.go:93] pod "kube-proxy-hx822" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:32.488993   29935 pod_ready.go:82] duration metric: took 400.347039ms for pod "kube-proxy-hx822" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:32.489003   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jxbdt" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:32.683938   29935 request.go:632] Waited for 194.87159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxbdt
	I0829 19:19:32.684002   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxbdt
	I0829 19:19:32.684007   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:32.684015   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:32.684023   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:32.687308   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:32.884279   29935 request.go:632] Waited for 196.339846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:32.884354   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:32.884362   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:32.884372   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:32.884379   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:32.887930   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:32.888659   29935 pod_ready.go:93] pod "kube-proxy-jxbdt" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:32.888685   29935 pod_ready.go:82] duration metric: took 399.676636ms for pod "kube-proxy-jxbdt" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:32.888696   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:33.084207   29935 request.go:632] Waited for 195.432627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269
	I0829 19:19:33.084287   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269
	I0829 19:19:33.084295   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:33.084306   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:33.084317   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:33.087317   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:33.284337   29935 request.go:632] Waited for 196.351799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:33.284392   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:19:33.284400   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:33.284408   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:33.284415   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:33.287740   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:33.288309   29935 pod_ready.go:93] pod "kube-scheduler-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:33.288324   29935 pod_ready.go:82] duration metric: took 399.621276ms for pod "kube-scheduler-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:33.288333   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:33.484379   29935 request.go:632] Waited for 195.988696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269-m02
	I0829 19:19:33.484451   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269-m02
	I0829 19:19:33.484457   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:33.484464   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:33.484468   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:33.487371   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:19:33.684465   29935 request.go:632] Waited for 196.405519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:33.684545   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:19:33.684553   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:33.684563   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:33.684574   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:33.687743   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:33.688674   29935 pod_ready.go:93] pod "kube-scheduler-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:19:33.688690   29935 pod_ready.go:82] duration metric: took 400.349035ms for pod "kube-scheduler-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:19:33.688700   29935 pod_ready.go:39] duration metric: took 3.200517364s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
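Each pod_ready step above pairs a GET on the pod with a GET on its node and then inspects the pod's status conditions. The readiness test itself reduces to this check (a sketch, not minikube's exact helper):

```go
package podready

import corev1 "k8s.io/api/core/v1"

// isReady reports whether the pod's PodReady condition is True, the test
// applied to every system-critical pod in the log above.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```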
	I0829 19:19:33.688713   29935 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:19:33.688765   29935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:19:33.706267   29935 api_server.go:72] duration metric: took 21.070749792s to wait for apiserver process to appear ...
	I0829 19:19:33.706290   29935 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:19:33.706306   29935 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I0829 19:19:33.713640   29935 api_server.go:279] https://192.168.39.56:8443/healthz returned 200:
	ok
	I0829 19:19:33.713698   29935 round_trippers.go:463] GET https://192.168.39.56:8443/version
	I0829 19:19:33.713703   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:33.713710   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:33.713715   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:33.714439   29935 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0829 19:19:33.714550   29935 api_server.go:141] control plane version: v1.31.0
	I0829 19:19:33.714566   29935 api_server.go:131] duration metric: took 8.270936ms to wait for apiserver health ...
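The healthz step is a plain HTTPS GET that must return 200 with the literal body "ok", exactly as logged above. A self-contained sketch (InsecureSkipVerify keeps the sketch short; the real client trusts the cluster CA and presents client certificates):

```go
package healthz

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// check GETs https://<host>/healthz and requires a 200 whose body is "ok".
func check(host string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://" + host + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}
```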
	I0829 19:19:33.714612   29935 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:19:33.883953   29935 request.go:632] Waited for 169.270255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:19:33.884014   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:19:33.884021   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:33.884028   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:33.884032   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:33.889882   29935 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 19:19:33.896061   29935 system_pods.go:59] 17 kube-system pods found
	I0829 19:19:33.896086   29935 system_pods.go:61] "coredns-6f6b679f8f-bqqq5" [801d9cfa-e1ad-4b31-9803-0030543fdc9e] Running
	I0829 19:19:33.896090   29935 system_pods.go:61] "coredns-6f6b679f8f-qjgfg" [12168097-2d3c-467a-b4b5-c0ca7f85e4eb] Running
	I0829 19:19:33.896094   29935 system_pods.go:61] "etcd-ha-505269" [a9cd644c-66f8-419a-be0c-615fc97daf18] Running
	I0829 19:19:33.896098   29935 system_pods.go:61] "etcd-ha-505269-m02" [864d2e94-62a9-4171-87bc-7ec5a3fc6224] Running
	I0829 19:19:33.896101   29935 system_pods.go:61] "kindnet-7rp6z" [7c922b32-e666-4b00-ab65-505632346112] Running
	I0829 19:19:33.896105   29935 system_pods.go:61] "kindnet-sthc8" [3c5a7487-a1b8-4acc-9462-84a2b478f46b] Running
	I0829 19:19:33.896109   29935 system_pods.go:61] "kube-apiserver-ha-505269" [616e3cf5-709a-46a8-8d71-0e709d297ca0] Running
	I0829 19:19:33.896112   29935 system_pods.go:61] "kube-apiserver-ha-505269-m02" [8615f4df-4f47-451a-80c8-d50826a75738] Running
	I0829 19:19:33.896116   29935 system_pods.go:61] "kube-controller-manager-ha-505269" [3f81751f-e12f-4a70-a901-db586a66461e] Running
	I0829 19:19:33.896119   29935 system_pods.go:61] "kube-controller-manager-ha-505269-m02" [b0587260-4827-47eb-a3b7-afb5b1fad59b] Running
	I0829 19:19:33.896125   29935 system_pods.go:61] "kube-proxy-hx822" [e88a504e-122b-4609-a0cc-4ad3115b3e4e] Running
	I0829 19:19:33.896127   29935 system_pods.go:61] "kube-proxy-jxbdt" [e51729e9-d662-4ea2-9a4f-85f77b269dea] Running
	I0829 19:19:33.896133   29935 system_pods.go:61] "kube-scheduler-ha-505269" [c573cfd8-20ba-46ce-8c0f-b610240ab78d] Running
	I0829 19:19:33.896136   29935 system_pods.go:61] "kube-scheduler-ha-505269-m02" [ba4e7eec-baaa-4c92-84f2-ac50629fea20] Running
	I0829 19:19:33.896139   29935 system_pods.go:61] "kube-vip-ha-505269" [d1734801-9573-45b3-a4a0-9ac45c093b95] Running
	I0829 19:19:33.896143   29935 system_pods.go:61] "kube-vip-ha-505269-m02" [f33d8dab-fb6f-46cf-b508-1e0eae03cad2] Running
	I0829 19:19:33.896145   29935 system_pods.go:61] "storage-provisioner" [6b7cd00a-94da-4e42-b7ae-289aab759c4f] Running
	I0829 19:19:33.896151   29935 system_pods.go:74] duration metric: took 181.530307ms to wait for pod list to return data ...
	I0829 19:19:33.896160   29935 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:19:34.084558   29935 request.go:632] Waited for 188.329888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/default/serviceaccounts
	I0829 19:19:34.084607   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/default/serviceaccounts
	I0829 19:19:34.084612   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:34.084618   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:34.084622   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:34.088605   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:34.088813   29935 default_sa.go:45] found service account: "default"
	I0829 19:19:34.088827   29935 default_sa.go:55] duration metric: took 192.662195ms for default service account to be created ...
	I0829 19:19:34.088835   29935 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:19:34.284281   29935 request.go:632] Waited for 195.35955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:19:34.284349   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:19:34.284356   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:34.284368   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:34.284378   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:34.289986   29935 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 19:19:34.295119   29935 system_pods.go:86] 17 kube-system pods found
	I0829 19:19:34.295144   29935 system_pods.go:89] "coredns-6f6b679f8f-bqqq5" [801d9cfa-e1ad-4b31-9803-0030543fdc9e] Running
	I0829 19:19:34.295149   29935 system_pods.go:89] "coredns-6f6b679f8f-qjgfg" [12168097-2d3c-467a-b4b5-c0ca7f85e4eb] Running
	I0829 19:19:34.295153   29935 system_pods.go:89] "etcd-ha-505269" [a9cd644c-66f8-419a-be0c-615fc97daf18] Running
	I0829 19:19:34.295158   29935 system_pods.go:89] "etcd-ha-505269-m02" [864d2e94-62a9-4171-87bc-7ec5a3fc6224] Running
	I0829 19:19:34.295162   29935 system_pods.go:89] "kindnet-7rp6z" [7c922b32-e666-4b00-ab65-505632346112] Running
	I0829 19:19:34.295166   29935 system_pods.go:89] "kindnet-sthc8" [3c5a7487-a1b8-4acc-9462-84a2b478f46b] Running
	I0829 19:19:34.295170   29935 system_pods.go:89] "kube-apiserver-ha-505269" [616e3cf5-709a-46a8-8d71-0e709d297ca0] Running
	I0829 19:19:34.295174   29935 system_pods.go:89] "kube-apiserver-ha-505269-m02" [8615f4df-4f47-451a-80c8-d50826a75738] Running
	I0829 19:19:34.295177   29935 system_pods.go:89] "kube-controller-manager-ha-505269" [3f81751f-e12f-4a70-a901-db586a66461e] Running
	I0829 19:19:34.295182   29935 system_pods.go:89] "kube-controller-manager-ha-505269-m02" [b0587260-4827-47eb-a3b7-afb5b1fad59b] Running
	I0829 19:19:34.295185   29935 system_pods.go:89] "kube-proxy-hx822" [e88a504e-122b-4609-a0cc-4ad3115b3e4e] Running
	I0829 19:19:34.295188   29935 system_pods.go:89] "kube-proxy-jxbdt" [e51729e9-d662-4ea2-9a4f-85f77b269dea] Running
	I0829 19:19:34.295192   29935 system_pods.go:89] "kube-scheduler-ha-505269" [c573cfd8-20ba-46ce-8c0f-b610240ab78d] Running
	I0829 19:19:34.295197   29935 system_pods.go:89] "kube-scheduler-ha-505269-m02" [ba4e7eec-baaa-4c92-84f2-ac50629fea20] Running
	I0829 19:19:34.295200   29935 system_pods.go:89] "kube-vip-ha-505269" [d1734801-9573-45b3-a4a0-9ac45c093b95] Running
	I0829 19:19:34.295206   29935 system_pods.go:89] "kube-vip-ha-505269-m02" [f33d8dab-fb6f-46cf-b508-1e0eae03cad2] Running
	I0829 19:19:34.295209   29935 system_pods.go:89] "storage-provisioner" [6b7cd00a-94da-4e42-b7ae-289aab759c4f] Running
	I0829 19:19:34.295215   29935 system_pods.go:126] duration metric: took 206.371606ms to wait for k8s-apps to be running ...
	I0829 19:19:34.295225   29935 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:19:34.295268   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:19:34.311065   29935 system_svc.go:56] duration metric: took 15.831595ms for WaitForService to wait for kubelet
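The kubelet check above runs over SSH: `systemctl is-active --quiet` prints nothing and exits 0 only when the unit is active, so only the exit status matters. A sketch using the system ssh client (host, user, and key path are placeholders, not values from this run):

```go
package svccheck

import (
	"fmt"
	"os/exec"
)

// kubeletActive runs `systemctl is-active --quiet kubelet` over SSH and
// inspects only the exit status: 0 means active, non-zero means not.
func kubeletActive(host, keyPath string) (bool, error) {
	cmd := exec.Command("ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"docker@"+host,
		"sudo", "systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()
	if err == nil {
		return true, nil
	}
	if _, ok := err.(*exec.ExitError); ok {
		return false, nil // remote command exited non-zero: unit not active
	}
	return false, fmt.Errorf("ssh: %w", err)
}
```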
	I0829 19:19:34.311100   29935 kubeadm.go:582] duration metric: took 21.675585259s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:19:34.311123   29935 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:19:34.484496   29935 request.go:632] Waited for 173.285409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes
	I0829 19:19:34.484555   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes
	I0829 19:19:34.484560   29935 round_trippers.go:469] Request Headers:
	I0829 19:19:34.484568   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:19:34.484571   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:19:34.488457   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:19:34.489152   29935 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:19:34.489175   29935 node_conditions.go:123] node cpu capacity is 2
	I0829 19:19:34.489184   29935 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:19:34.489189   29935 node_conditions.go:123] node cpu capacity is 2
	I0829 19:19:34.489193   29935 node_conditions.go:105] duration metric: took 178.064585ms to run NodePressure ...
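The NodePressure step lists all nodes and reports each one's ephemeral-storage and CPU capacity, which is why the two figures repeat once per node above. A client-go sketch of the same read (function name is illustrative):

```go
package nodecap

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printCapacities lists every node and prints the two capacity figures the
// NodePressure check reports per node in the log above.
func printCapacities(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
```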
	I0829 19:19:34.489204   29935 start.go:241] waiting for startup goroutines ...
	I0829 19:19:34.489228   29935 start.go:255] writing updated cluster config ...
	I0829 19:19:34.491344   29935 out.go:201] 
	I0829 19:19:34.492757   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:19:34.492850   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:19:34.494583   29935 out.go:177] * Starting "ha-505269-m03" control-plane node in "ha-505269" cluster
	I0829 19:19:34.495772   29935 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:19:34.495797   29935 cache.go:56] Caching tarball of preloaded images
	I0829 19:19:34.495907   29935 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:19:34.495920   29935 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:19:34.496003   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:19:34.496169   29935 start.go:360] acquireMachinesLock for ha-505269-m03: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:19:34.496212   29935 start.go:364] duration metric: took 25.021µs to acquireMachinesLock for "ha-505269-m03"
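acquireMachinesLock serializes machine creation across concurrent minikube processes; the log shows its parameters (Delay:500ms, Timeout:13m). A rough file-based stand-in for that pattern (minikube uses a mutex library internally; this sketch only illustrates the delay/timeout loop):

```go
package machlock

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// acquire retries an exclusive create every `delay` until `timeout`,
// returning a release func on success.
func acquire(name string, delay, timeout time.Duration) (release func(), err error) {
	path := filepath.Join(os.TempDir(), name+".lock")
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring lock %q", name)
		}
		time.Sleep(delay)
	}
}
```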
	I0829 19:19:34.496231   29935 start.go:93] Provisioning new machine with config: &{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:19:34.496318   29935 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0829 19:19:34.497674   29935 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 19:19:34.497749   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:19:34.497779   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:19:34.513140   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40899
	I0829 19:19:34.513609   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:19:34.514070   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:19:34.514096   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:19:34.514435   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:19:34.514610   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetMachineName
	I0829 19:19:34.514815   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:19:34.514990   29935 start.go:159] libmachine.API.Create for "ha-505269" (driver="kvm2")
	I0829 19:19:34.515017   29935 client.go:168] LocalClient.Create starting
	I0829 19:19:34.515054   29935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem
	I0829 19:19:34.515091   29935 main.go:141] libmachine: Decoding PEM data...
	I0829 19:19:34.515117   29935 main.go:141] libmachine: Parsing certificate...
	I0829 19:19:34.515171   29935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem
	I0829 19:19:34.515190   29935 main.go:141] libmachine: Decoding PEM data...
	I0829 19:19:34.515200   29935 main.go:141] libmachine: Parsing certificate...
	I0829 19:19:34.515214   29935 main.go:141] libmachine: Running pre-create checks...
	I0829 19:19:34.515221   29935 main.go:141] libmachine: (ha-505269-m03) Calling .PreCreateCheck
	I0829 19:19:34.515379   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetConfigRaw
	I0829 19:19:34.515773   29935 main.go:141] libmachine: Creating machine...
	I0829 19:19:34.515791   29935 main.go:141] libmachine: (ha-505269-m03) Calling .Create
	I0829 19:19:34.515960   29935 main.go:141] libmachine: (ha-505269-m03) Creating KVM machine...
	I0829 19:19:34.517392   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found existing default KVM network
	I0829 19:19:34.517528   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found existing private KVM network mk-ha-505269
	I0829 19:19:34.517662   29935 main.go:141] libmachine: (ha-505269-m03) Setting up store path in /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03 ...
	I0829 19:19:34.517679   29935 main.go:141] libmachine: (ha-505269-m03) Building disk image from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0829 19:19:34.517777   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:34.517674   31162 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:19:34.517832   29935 main.go:141] libmachine: (ha-505269-m03) Downloading /home/jenkins/minikube-integration/19530-11185/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0829 19:19:34.754916   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:34.754791   31162 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa...
	I0829 19:19:35.034973   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:35.034871   31162 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/ha-505269-m03.rawdisk...
	I0829 19:19:35.035002   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Writing magic tar header
	I0829 19:19:35.035012   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Writing SSH key tar header
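The "Creating ssh key" step generates the id_rsa pair the driver later uses for WaitForSSH. A sketch of generating and writing such a pair (key size and file layout are assumptions, not the driver's exact code):

```go
package sshkey

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

// writePair writes a PEM private key to privPath and the matching
// authorized_keys-format public key to privPath+".pub".
func writePair(privPath string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(privPath, privPEM, 0o600); err != nil {
		return err
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return err
	}
	return os.WriteFile(privPath+".pub", ssh.MarshalAuthorizedKey(pub), 0o644)
}
```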
	I0829 19:19:35.035027   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:35.034975   31162 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03 ...
	I0829 19:19:35.035107   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03
	I0829 19:19:35.035128   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines
	I0829 19:19:35.035137   29935 main.go:141] libmachine: (ha-505269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03 (perms=drwx------)
	I0829 19:19:35.035149   29935 main.go:141] libmachine: (ha-505269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines (perms=drwxr-xr-x)
	I0829 19:19:35.035160   29935 main.go:141] libmachine: (ha-505269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube (perms=drwxr-xr-x)
	I0829 19:19:35.035178   29935 main.go:141] libmachine: (ha-505269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185 (perms=drwxrwxr-x)
	I0829 19:19:35.035190   29935 main.go:141] libmachine: (ha-505269-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 19:19:35.035205   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:19:35.035218   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185
	I0829 19:19:35.035231   29935 main.go:141] libmachine: (ha-505269-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 19:19:35.035246   29935 main.go:141] libmachine: (ha-505269-m03) Creating domain...
	I0829 19:19:35.035278   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 19:19:35.035305   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Checking permissions on dir: /home/jenkins
	I0829 19:19:35.035315   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Checking permissions on dir: /home
	I0829 19:19:35.035326   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Skipping /home - not owner
	I0829 19:19:35.036291   29935 main.go:141] libmachine: (ha-505269-m03) define libvirt domain using xml: 
	I0829 19:19:35.036315   29935 main.go:141] libmachine: (ha-505269-m03) <domain type='kvm'>
	I0829 19:19:35.036327   29935 main.go:141] libmachine: (ha-505269-m03)   <name>ha-505269-m03</name>
	I0829 19:19:35.036343   29935 main.go:141] libmachine: (ha-505269-m03)   <memory unit='MiB'>2200</memory>
	I0829 19:19:35.036355   29935 main.go:141] libmachine: (ha-505269-m03)   <vcpu>2</vcpu>
	I0829 19:19:35.036365   29935 main.go:141] libmachine: (ha-505269-m03)   <features>
	I0829 19:19:35.036371   29935 main.go:141] libmachine: (ha-505269-m03)     <acpi/>
	I0829 19:19:35.036376   29935 main.go:141] libmachine: (ha-505269-m03)     <apic/>
	I0829 19:19:35.036382   29935 main.go:141] libmachine: (ha-505269-m03)     <pae/>
	I0829 19:19:35.036389   29935 main.go:141] libmachine: (ha-505269-m03)     
	I0829 19:19:35.036394   29935 main.go:141] libmachine: (ha-505269-m03)   </features>
	I0829 19:19:35.036399   29935 main.go:141] libmachine: (ha-505269-m03)   <cpu mode='host-passthrough'>
	I0829 19:19:35.036426   29935 main.go:141] libmachine: (ha-505269-m03)   
	I0829 19:19:35.036449   29935 main.go:141] libmachine: (ha-505269-m03)   </cpu>
	I0829 19:19:35.036460   29935 main.go:141] libmachine: (ha-505269-m03)   <os>
	I0829 19:19:35.036474   29935 main.go:141] libmachine: (ha-505269-m03)     <type>hvm</type>
	I0829 19:19:35.036486   29935 main.go:141] libmachine: (ha-505269-m03)     <boot dev='cdrom'/>
	I0829 19:19:35.036493   29935 main.go:141] libmachine: (ha-505269-m03)     <boot dev='hd'/>
	I0829 19:19:35.036504   29935 main.go:141] libmachine: (ha-505269-m03)     <bootmenu enable='no'/>
	I0829 19:19:35.036510   29935 main.go:141] libmachine: (ha-505269-m03)   </os>
	I0829 19:19:35.036520   29935 main.go:141] libmachine: (ha-505269-m03)   <devices>
	I0829 19:19:35.036532   29935 main.go:141] libmachine: (ha-505269-m03)     <disk type='file' device='cdrom'>
	I0829 19:19:35.036549   29935 main.go:141] libmachine: (ha-505269-m03)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/boot2docker.iso'/>
	I0829 19:19:35.036564   29935 main.go:141] libmachine: (ha-505269-m03)       <target dev='hdc' bus='scsi'/>
	I0829 19:19:35.036580   29935 main.go:141] libmachine: (ha-505269-m03)       <readonly/>
	I0829 19:19:35.036590   29935 main.go:141] libmachine: (ha-505269-m03)     </disk>
	I0829 19:19:35.036601   29935 main.go:141] libmachine: (ha-505269-m03)     <disk type='file' device='disk'>
	I0829 19:19:35.036614   29935 main.go:141] libmachine: (ha-505269-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 19:19:35.036629   29935 main.go:141] libmachine: (ha-505269-m03)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/ha-505269-m03.rawdisk'/>
	I0829 19:19:35.036644   29935 main.go:141] libmachine: (ha-505269-m03)       <target dev='hda' bus='virtio'/>
	I0829 19:19:35.036655   29935 main.go:141] libmachine: (ha-505269-m03)     </disk>
	I0829 19:19:35.036666   29935 main.go:141] libmachine: (ha-505269-m03)     <interface type='network'>
	I0829 19:19:35.036679   29935 main.go:141] libmachine: (ha-505269-m03)       <source network='mk-ha-505269'/>
	I0829 19:19:35.036690   29935 main.go:141] libmachine: (ha-505269-m03)       <model type='virtio'/>
	I0829 19:19:35.036698   29935 main.go:141] libmachine: (ha-505269-m03)     </interface>
	I0829 19:19:35.036709   29935 main.go:141] libmachine: (ha-505269-m03)     <interface type='network'>
	I0829 19:19:35.036734   29935 main.go:141] libmachine: (ha-505269-m03)       <source network='default'/>
	I0829 19:19:35.036754   29935 main.go:141] libmachine: (ha-505269-m03)       <model type='virtio'/>
	I0829 19:19:35.036761   29935 main.go:141] libmachine: (ha-505269-m03)     </interface>
	I0829 19:19:35.036771   29935 main.go:141] libmachine: (ha-505269-m03)     <serial type='pty'>
	I0829 19:19:35.036784   29935 main.go:141] libmachine: (ha-505269-m03)       <target port='0'/>
	I0829 19:19:35.036794   29935 main.go:141] libmachine: (ha-505269-m03)     </serial>
	I0829 19:19:35.036806   29935 main.go:141] libmachine: (ha-505269-m03)     <console type='pty'>
	I0829 19:19:35.036817   29935 main.go:141] libmachine: (ha-505269-m03)       <target type='serial' port='0'/>
	I0829 19:19:35.036838   29935 main.go:141] libmachine: (ha-505269-m03)     </console>
	I0829 19:19:35.036855   29935 main.go:141] libmachine: (ha-505269-m03)     <rng model='virtio'>
	I0829 19:19:35.036870   29935 main.go:141] libmachine: (ha-505269-m03)       <backend model='random'>/dev/random</backend>
	I0829 19:19:35.036882   29935 main.go:141] libmachine: (ha-505269-m03)     </rng>
	I0829 19:19:35.036894   29935 main.go:141] libmachine: (ha-505269-m03)     
	I0829 19:19:35.036908   29935 main.go:141] libmachine: (ha-505269-m03)     
	I0829 19:19:35.036919   29935 main.go:141] libmachine: (ha-505269-m03)   </devices>
	I0829 19:19:35.036928   29935 main.go:141] libmachine: (ha-505269-m03) </domain>
	I0829 19:19:35.036938   29935 main.go:141] libmachine: (ha-505269-m03) 
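The XML above is handed to libvirt to register the VM definition. The driver calls the libvirt API directly, but the same step can be reproduced from the CLI; here is a sketch that shells out to virsh (qemu:///system matches the KVMQemuURI in the config dump):

```go
package virtdefine

import (
	"fmt"
	"os"
	"os/exec"
)

// define writes the generated XML to a temp file and registers the domain
// with libvirt via virsh, the CLI equivalent of the API call the driver makes.
func define(domainXML string) error {
	f, err := os.CreateTemp("", "domain-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(domainXML); err != nil {
		return err
	}
	f.Close()
	out, err := exec.Command("virsh", "--connect", "qemu:///system", "define", f.Name()).CombinedOutput()
	if err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	return nil
}
```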
	I0829 19:19:35.044327   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:d2:e6:ee in network default
	I0829 19:19:35.044995   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:35.045016   29935 main.go:141] libmachine: (ha-505269-m03) Ensuring networks are active...
	I0829 19:19:35.045932   29935 main.go:141] libmachine: (ha-505269-m03) Ensuring network default is active
	I0829 19:19:35.046213   29935 main.go:141] libmachine: (ha-505269-m03) Ensuring network mk-ha-505269 is active
	I0829 19:19:35.046880   29935 main.go:141] libmachine: (ha-505269-m03) Getting domain xml...
	I0829 19:19:35.047792   29935 main.go:141] libmachine: (ha-505269-m03) Creating domain...
	I0829 19:19:36.274511   29935 main.go:141] libmachine: (ha-505269-m03) Waiting to get IP...
	I0829 19:19:36.275284   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:36.275653   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:36.275706   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:36.275639   31162 retry.go:31] will retry after 188.20001ms: waiting for machine to come up
	I0829 19:19:36.465046   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:36.465597   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:36.465624   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:36.465548   31162 retry.go:31] will retry after 377.645185ms: waiting for machine to come up
	I0829 19:19:36.845154   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:36.845635   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:36.845661   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:36.845594   31162 retry.go:31] will retry after 347.332502ms: waiting for machine to come up
	I0829 19:19:37.194034   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:37.194449   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:37.194477   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:37.194403   31162 retry.go:31] will retry after 437.184773ms: waiting for machine to come up
	I0829 19:19:37.632850   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:37.633345   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:37.633373   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:37.633290   31162 retry.go:31] will retry after 668.581024ms: waiting for machine to come up
	I0829 19:19:38.302978   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:38.303421   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:38.303449   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:38.303345   31162 retry.go:31] will retry after 789.404428ms: waiting for machine to come up
	I0829 19:19:39.094663   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:39.095132   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:39.095159   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:39.095084   31162 retry.go:31] will retry after 835.70112ms: waiting for machine to come up
	I0829 19:19:39.932361   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:39.932837   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:39.932863   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:39.932799   31162 retry.go:31] will retry after 963.297624ms: waiting for machine to come up
	I0829 19:19:40.897752   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:40.898217   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:40.898246   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:40.898130   31162 retry.go:31] will retry after 1.412076203s: waiting for machine to come up
	I0829 19:19:42.311273   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:42.311695   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:42.311736   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:42.311680   31162 retry.go:31] will retry after 2.08425845s: waiting for machine to come up
	I0829 19:19:44.398798   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:44.399233   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:44.399261   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:44.399180   31162 retry.go:31] will retry after 2.054798813s: waiting for machine to come up
	I0829 19:19:46.457039   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:46.457497   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:46.457524   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:46.457442   31162 retry.go:31] will retry after 3.122897743s: waiting for machine to come up
	I0829 19:19:49.582281   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:49.582738   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:49.582760   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:49.582697   31162 retry.go:31] will retry after 4.485998189s: waiting for machine to come up
	I0829 19:19:54.071658   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:54.072099   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find current IP address of domain ha-505269-m03 in network mk-ha-505269
	I0829 19:19:54.072122   29935 main.go:141] libmachine: (ha-505269-m03) DBG | I0829 19:19:54.072050   31162 retry.go:31] will retry after 5.029713513s: waiting for machine to come up
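The "will retry after ..." lines above implement a growing, jittered backoff while waiting for the guest's DHCP lease to appear. A generic sketch of the pattern (the exact growth factor and jitter schedule in retry.go are not visible in the log, so these are assumptions):

```go
package backoff

import (
	"errors"
	"math/rand"
	"time"
)

// retry re-runs fn with a doubling, jittered delay until it succeeds or the
// attempts are spent, the shape of the "will retry after ..." lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return errors.New("retries exhausted")
}
```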
	I0829 19:19:59.107198   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:59.107710   29935 main.go:141] libmachine: (ha-505269-m03) Found IP for machine: 192.168.39.178
	I0829 19:19:59.107734   29935 main.go:141] libmachine: (ha-505269-m03) Reserving static IP address...
	I0829 19:19:59.107750   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has current primary IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:59.108024   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find host DHCP lease matching {name: "ha-505269-m03", mac: "52:54:00:19:9f:90", ip: "192.168.39.178"} in network mk-ha-505269
	I0829 19:19:59.181621   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Getting to WaitForSSH function...
	I0829 19:19:59.181650   29935 main.go:141] libmachine: (ha-505269-m03) Reserved static IP address: 192.168.39.178
	I0829 19:19:59.181664   29935 main.go:141] libmachine: (ha-505269-m03) Waiting for SSH to be available...
	I0829 19:19:59.184315   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:19:59.184871   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269
	I0829 19:19:59.184905   29935 main.go:141] libmachine: (ha-505269-m03) DBG | unable to find defined IP address of network mk-ha-505269 interface with MAC address 52:54:00:19:9f:90
	I0829 19:19:59.185069   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Using SSH client type: external
	I0829 19:19:59.185089   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa (-rw-------)
	I0829 19:19:59.185116   29935 main.go:141] libmachine: (ha-505269-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:19:59.185133   29935 main.go:141] libmachine: (ha-505269-m03) DBG | About to run SSH command:
	I0829 19:19:59.185147   29935 main.go:141] libmachine: (ha-505269-m03) DBG | exit 0
	I0829 19:19:59.189624   29935 main.go:141] libmachine: (ha-505269-m03) DBG | SSH cmd err, output: exit status 255: 
	I0829 19:19:59.189654   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0829 19:19:59.189680   29935 main.go:141] libmachine: (ha-505269-m03) DBG | command : exit 0
	I0829 19:19:59.189697   29935 main.go:141] libmachine: (ha-505269-m03) DBG | err     : exit status 255
	I0829 19:19:59.189730   29935 main.go:141] libmachine: (ha-505269-m03) DBG | output  : 
	I0829 19:20:02.192226   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Getting to WaitForSSH function...
	I0829 19:20:02.194840   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.195264   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.195313   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.195430   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Using SSH client type: external
	I0829 19:20:02.195451   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa (-rw-------)
	I0829 19:20:02.195479   29935 main.go:141] libmachine: (ha-505269-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:20:02.195497   29935 main.go:141] libmachine: (ha-505269-m03) DBG | About to run SSH command:
	I0829 19:20:02.195511   29935 main.go:141] libmachine: (ha-505269-m03) DBG | exit 0
	I0829 19:20:02.318896   29935 main.go:141] libmachine: (ha-505269-m03) DBG | SSH cmd err, output: <nil>: 
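
The WaitForSSH exchange above probes the guest by running "exit 0" through an external ssh client with host-key checking disabled; exit status 255 means the connection itself failed (sshd not up yet), so the probe is retried a few seconds later, and a clean exit ends the wait. A sketch of that probe, assuming an OpenSSH client on PATH; the option list mirrors the one logged:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `ssh ... "exit 0"` the way the log shows: status 0 means
// sshd is up and accepts the key; status 255 means the connection itself
// failed and the caller should retry.
func sshReady(user, ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, ip),
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	for attempt := 0; attempt < 10; attempt++ {
		if sshReady("docker", "192.168.39.178", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		fmt.Println("Getting to WaitForSSH function...")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
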
	I0829 19:20:02.319159   29935 main.go:141] libmachine: (ha-505269-m03) KVM machine creation complete!
	I0829 19:20:02.319514   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetConfigRaw
	I0829 19:20:02.320111   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:20:02.320288   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:20:02.320462   29935 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 19:20:02.320475   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetState
	I0829 19:20:02.321723   29935 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 19:20:02.321741   29935 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 19:20:02.321750   29935 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 19:20:02.321758   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:02.323988   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.324379   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.324406   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.324527   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:02.324708   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.324871   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.325007   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:02.325169   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:20:02.325450   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0829 19:20:02.325469   29935 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 19:20:02.426012   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:20:02.426032   29935 main.go:141] libmachine: Detecting the provisioner...
	I0829 19:20:02.426042   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:02.428681   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.429120   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.429150   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.429347   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:02.429550   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.429762   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.429961   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:02.430167   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:20:02.430373   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0829 19:20:02.430389   29935 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 19:20:02.531776   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 19:20:02.531840   29935 main.go:141] libmachine: found compatible host: buildroot
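
Provisioner detection above is just "cat /etc/os-release" over SSH plus a match on ID=buildroot. A small sketch of parsing that KEY=VALUE format (illustrative, not minikube's parser):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease extracts KEY=VALUE pairs from /etc/os-release content,
// stripping optional quotes, e.g. ID=buildroot, PRETTY_NAME="Buildroot 2023.02.9".
func parseOSRelease(contents string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		kv[k] = strings.Trim(v, `"`)
	}
	return kv
}

func main() {
	osr := parseOSRelease("NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\n")
	if osr["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
}
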
	I0829 19:20:02.531850   29935 main.go:141] libmachine: Provisioning with buildroot...
	I0829 19:20:02.531865   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetMachineName
	I0829 19:20:02.532087   29935 buildroot.go:166] provisioning hostname "ha-505269-m03"
	I0829 19:20:02.532119   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetMachineName
	I0829 19:20:02.532285   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:02.534809   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.535097   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.535119   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.535247   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:02.535448   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.535598   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.535740   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:02.535901   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:20:02.536117   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0829 19:20:02.536134   29935 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-505269-m03 && echo "ha-505269-m03" | sudo tee /etc/hostname
	I0829 19:20:02.654141   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-505269-m03
	
	I0829 19:20:02.654173   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:02.657113   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.657449   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.657469   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.657658   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:02.657833   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.657980   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.658091   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:02.658230   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:20:02.658457   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0829 19:20:02.658475   29935 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-505269-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-505269-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-505269-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:20:02.768285   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
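
The two SSH commands above set the hostname and then reconcile /etc/hosts with it: if a 127.0.1.1 entry already exists it is rewritten in place with sed, otherwise one is appended. A sketch of how such command strings can be assembled; hostnameCmds is a hypothetical helper, not minikube's source:

package main

import "fmt"

// hostnameCmds returns the two shell commands the log shows being run over
// SSH: one to set the hostname, one to make /etc/hosts agree with it.
func hostnameCmds(name string) (setHostname, fixHosts string) {
	setHostname = fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	fixHosts = fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	return setHostname, fixHosts
}

func main() {
	set, fix := hostnameCmds("ha-505269-m03")
	fmt.Println(set)
	fmt.Println(fix)
}
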
	I0829 19:20:02.768314   29935 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 19:20:02.768329   29935 buildroot.go:174] setting up certificates
	I0829 19:20:02.768338   29935 provision.go:84] configureAuth start
	I0829 19:20:02.768345   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetMachineName
	I0829 19:20:02.768666   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:20:02.771257   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.771586   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.771626   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.771805   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:02.773889   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.774291   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.774322   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.774561   29935 provision.go:143] copyHostCerts
	I0829 19:20:02.774604   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:20:02.774648   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 19:20:02.774660   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:20:02.774743   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 19:20:02.774837   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:20:02.774860   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 19:20:02.774867   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:20:02.774911   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 19:20:02.774977   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:20:02.775001   29935 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 19:20:02.775009   29935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:20:02.775042   29935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 19:20:02.775106   29935 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.ha-505269-m03 san=[127.0.0.1 192.168.39.178 ha-505269-m03 localhost minikube]
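
configureAuth above refreshes the host-side CA material and then issues a server certificate whose SANs cover every name and address the node answers to: loopback, the node IP, the machine hostname, localhost, and minikube. A self-contained crypto/x509 sketch of issuing such a certificate; unlike the logged flow, which signs with the store's ca.pem/ca-key.pem, this example self-signs to stay short:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-505269-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: IPs and DNS names live in separate fields.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.178")},
		DNSNames:    []string{"ha-505269-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
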
	I0829 19:20:02.955882   29935 provision.go:177] copyRemoteCerts
	I0829 19:20:02.955945   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:20:02.955974   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:02.958280   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.958603   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:02.958635   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:02.958788   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:02.958970   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:02.959130   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:02.959302   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:20:03.042340   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 19:20:03.042415   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 19:20:03.068167   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 19:20:03.068240   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:20:03.092191   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 19:20:03.092268   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 19:20:03.116038   29935 provision.go:87] duration metric: took 347.690012ms to configureAuth
	I0829 19:20:03.116064   29935 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:20:03.116313   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:20:03.116404   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:03.118908   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.119293   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.119314   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.119534   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:03.119724   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:03.119905   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:03.120053   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:03.120217   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:20:03.120393   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0829 19:20:03.120413   29935 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:20:03.336641   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:20:03.336685   29935 main.go:141] libmachine: Checking connection to Docker...
	I0829 19:20:03.336696   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetURL
	I0829 19:20:03.337817   29935 main.go:141] libmachine: (ha-505269-m03) DBG | Using libvirt version 6000000
	I0829 19:20:03.340349   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.340747   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.340780   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.340980   29935 main.go:141] libmachine: Docker is up and running!
	I0829 19:20:03.340998   29935 main.go:141] libmachine: Reticulating splines...
	I0829 19:20:03.341006   29935 client.go:171] duration metric: took 28.825978143s to LocalClient.Create
	I0829 19:20:03.341032   29935 start.go:167] duration metric: took 28.826042143s to libmachine.API.Create "ha-505269"
	I0829 19:20:03.341043   29935 start.go:293] postStartSetup for "ha-505269-m03" (driver="kvm2")
	I0829 19:20:03.341052   29935 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:20:03.341069   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:20:03.341306   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:20:03.341329   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:03.343890   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.344216   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.344238   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.344425   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:03.344631   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:03.344819   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:03.344999   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:20:03.426048   29935 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:20:03.430609   29935 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:20:03.430634   29935 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 19:20:03.430691   29935 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 19:20:03.430759   29935 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 19:20:03.430769   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /etc/ssl/certs/183612.pem
	I0829 19:20:03.430848   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:20:03.440759   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
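
postStartSetup above scans .minikube/files and mirrors anything found onto the guest at the same relative path (files/etc/ssl/certs/183612.pem lands in /etc/ssl/certs/183612.pem). A sketch of that scan with filepath.WalkDir; listAssets is a hypothetical name for illustration:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// listAssets maps every file under root to the same relative path on the
// guest, mirroring the filesync scan in the log.
func listAssets(root string) (map[string]string, error) {
	assets := map[string]string{}
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		assets[path] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return assets, err
}

func main() {
	assets, err := listAssets("/home/jenkins/minikube-integration/19530-11185/.minikube/files")
	if err != nil {
		fmt.Println(err)
		return
	}
	for src, dst := range assets {
		fmt.Printf("local asset: %s -> %s\n", src, dst)
	}
}
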
	I0829 19:20:03.466905   29935 start.go:296] duration metric: took 125.851382ms for postStartSetup
	I0829 19:20:03.466952   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetConfigRaw
	I0829 19:20:03.467486   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:20:03.469857   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.470233   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.470258   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.470569   29935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:20:03.470776   29935 start.go:128] duration metric: took 28.974446004s to createHost
	I0829 19:20:03.470798   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:03.473302   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.473693   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.473717   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.473838   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:03.474033   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:03.474172   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:03.474321   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:03.474469   29935 main.go:141] libmachine: Using SSH client type: native
	I0829 19:20:03.474659   29935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0829 19:20:03.474670   29935 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:20:03.579542   29935 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724959203.561300378
	
	I0829 19:20:03.579563   29935 fix.go:216] guest clock: 1724959203.561300378
	I0829 19:20:03.579570   29935 fix.go:229] Guest: 2024-08-29 19:20:03.561300378 +0000 UTC Remote: 2024-08-29 19:20:03.470788126 +0000 UTC m=+155.545240327 (delta=90.512252ms)
	I0829 19:20:03.579584   29935 fix.go:200] guest clock delta is within tolerance: 90.512252ms
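
The clock check above runs "date +%s.%N" on the guest and compares the result against the host-side timestamp, accepting the ~90ms delta seen here. A sketch of parsing that output and applying a tolerance; the 2s threshold is illustrative, not minikube's constant, and the parser assumes the 9-digit nanosecond fraction that %N produces:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	ns, err := strconv.ParseInt(frac, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(s, ns), nil
}

func main() {
	guest, err := parseGuestClock("1724959203.561300378")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 8, 29, 19, 20, 3, 470788126, time.UTC) // host-side timestamp
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold only
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("clock skew %v exceeds %v; would resync\n", delta, tolerance)
	}
}
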
	I0829 19:20:03.579590   29935 start.go:83] releasing machines lock for "ha-505269-m03", held for 29.083368859s
	I0829 19:20:03.579612   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:20:03.579904   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:20:03.582421   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.582784   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.582812   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.584850   29935 out.go:177] * Found network options:
	I0829 19:20:03.586360   29935 out.go:177]   - NO_PROXY=192.168.39.56,192.168.39.68
	W0829 19:20:03.587710   29935 proxy.go:119] fail to check proxy env: Error ip not in block
	W0829 19:20:03.587727   29935 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 19:20:03.587738   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:20:03.588190   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:20:03.588357   29935 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:20:03.588449   29935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:20:03.588483   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	W0829 19:20:03.588489   29935 proxy.go:119] fail to check proxy env: Error ip not in block
	W0829 19:20:03.588506   29935 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 19:20:03.588571   29935 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:20:03.588585   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:20:03.591026   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.591256   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.591415   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.591434   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.591616   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:03.591734   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:03.591760   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:03.591787   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:03.591906   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:20:03.591977   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:03.592055   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:20:03.592126   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:20:03.592210   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:20:03.592323   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:20:03.822284   29935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:20:03.829574   29935 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:20:03.829646   29935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:20:03.845095   29935 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:20:03.845117   29935 start.go:495] detecting cgroup driver to use...
	I0829 19:20:03.845169   29935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:20:03.861537   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:20:03.875877   29935 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:20:03.875931   29935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:20:03.889868   29935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:20:03.903770   29935 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:20:04.019620   29935 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:20:04.156255   29935 docker.go:233] disabling docker service ...
	I0829 19:20:04.156321   29935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:20:04.170173   29935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:20:04.183474   29935 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:20:04.331947   29935 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:20:04.448294   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:20:04.463547   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:20:04.481774   29935 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:20:04.481831   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:20:04.492184   29935 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:20:04.492259   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:20:04.502875   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:20:04.513205   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:20:04.524936   29935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:20:04.536110   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:20:04.547762   29935 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:20:04.566081   29935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
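
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls (a few lines earlier, crictl is also pointed at crio.sock through /etc/crictl.yaml). The edits are whole-line replacements keyed on the option name; a Go sketch of the same idea over an in-memory config:

package main

import (
	"fmt"
	"regexp"
)

// setCrioOption performs the same whole-line rewrite the logged sed commands
// do: any line defining `key = ...` is replaced with the new quoted value.
func setCrioOption(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}
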
	I0829 19:20:04.577901   29935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:20:04.588801   29935 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:20:04.588905   29935 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:20:04.604220   29935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
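
The failed sysctl above is expected on a fresh guest: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. A sketch of that sequence (it needs root, like the sudo'd commands in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the logged fallback: if the sysctl key is
// missing, br_netfilter has not been loaded yet, so modprobe it, then
// enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}
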
	I0829 19:20:04.615471   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:20:04.733353   29935 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:20:04.822230   29935 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:20:04.822306   29935 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:20:04.827551   29935 start.go:563] Will wait 60s for crictl version
	I0829 19:20:04.827605   29935 ssh_runner.go:195] Run: which crictl
	I0829 19:20:04.831455   29935 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:20:04.873126   29935 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:20:04.873208   29935 ssh_runner.go:195] Run: crio --version
	I0829 19:20:04.906487   29935 ssh_runner.go:195] Run: crio --version
	I0829 19:20:04.942315   29935 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:20:04.943637   29935 out.go:177]   - env NO_PROXY=192.168.39.56
	I0829 19:20:04.944984   29935 out.go:177]   - env NO_PROXY=192.168.39.56,192.168.39.68
	I0829 19:20:04.946328   29935 main.go:141] libmachine: (ha-505269-m03) Calling .GetIP
	I0829 19:20:04.948949   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:04.949286   29935 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:20:04.949316   29935 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:20:04.949508   29935 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:20:04.953593   29935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:20:04.966965   29935 mustload.go:65] Loading cluster: ha-505269
	I0829 19:20:04.967207   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:20:04.967467   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:20:04.967500   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:20:04.981971   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34151
	I0829 19:20:04.982419   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:20:04.982930   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:20:04.982951   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:20:04.983267   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:20:04.983453   29935 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:20:04.985106   29935 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:20:04.985385   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:20:04.985418   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:20:05.000699   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0829 19:20:05.001114   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:20:05.001584   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:20:05.001608   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:20:05.001896   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:20:05.002065   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:20:05.002305   29935 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269 for IP: 192.168.39.178
	I0829 19:20:05.002316   29935 certs.go:194] generating shared ca certs ...
	I0829 19:20:05.002330   29935 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:20:05.002440   29935 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 19:20:05.002485   29935 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 19:20:05.002494   29935 certs.go:256] generating profile certs ...
	I0829 19:20:05.002584   29935 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key
	I0829 19:20:05.002609   29935 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.7f783c4d
	I0829 19:20:05.002623   29935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.7f783c4d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.56 192.168.39.68 192.168.39.178 192.168.39.254]
	I0829 19:20:05.090673   29935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.7f783c4d ...
	I0829 19:20:05.090706   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.7f783c4d: {Name:mke661f346de8e27968b55f74b54bad926566b3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:20:05.090869   29935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.7f783c4d ...
	I0829 19:20:05.090883   29935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.7f783c4d: {Name:mk123b199a9849df290cd1ac008da6743489b006 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:20:05.090954   29935 certs.go:381] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.7f783c4d -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt
	I0829 19:20:05.091086   29935 certs.go:385] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.7f783c4d -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key
	I0829 19:20:05.091203   29935 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key
	I0829 19:20:05.091218   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 19:20:05.091231   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 19:20:05.091241   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 19:20:05.091253   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 19:20:05.091263   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 19:20:05.091275   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 19:20:05.091288   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 19:20:05.091301   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 19:20:05.091346   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 19:20:05.091373   29935 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 19:20:05.091382   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 19:20:05.091404   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 19:20:05.091427   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:20:05.091448   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 19:20:05.091483   29935 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:20:05.091509   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:20:05.091523   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem -> /usr/share/ca-certificates/18361.pem
	I0829 19:20:05.091536   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /usr/share/ca-certificates/183612.pem
	I0829 19:20:05.091572   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:20:05.094798   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:20:05.095226   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:20:05.095253   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:20:05.095500   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:20:05.095748   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:20:05.095950   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:20:05.096072   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:20:05.174923   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0829 19:20:05.180247   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0829 19:20:05.191966   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0829 19:20:05.196198   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0829 19:20:05.206867   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0829 19:20:05.211046   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0829 19:20:05.223089   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0829 19:20:05.227637   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0829 19:20:05.238270   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0829 19:20:05.242359   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0829 19:20:05.252340   29935 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0829 19:20:05.256330   29935 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0829 19:20:05.270874   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:20:05.295272   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 19:20:05.321764   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:20:05.348478   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:20:05.375369   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0829 19:20:05.400128   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 19:20:05.423436   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:20:05.446550   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:20:05.469821   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:20:05.495816   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 19:20:05.521949   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 19:20:05.546193   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0829 19:20:05.562740   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0829 19:20:05.580138   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0829 19:20:05.596539   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0829 19:20:05.614977   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0829 19:20:05.632443   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0829 19:20:05.649051   29935 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0829 19:20:05.665290   29935 ssh_runner.go:195] Run: openssl version
	I0829 19:20:05.671720   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:20:05.682597   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:20:05.687382   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:20:05.687430   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:20:05.693220   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:20:05.704194   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 19:20:05.715666   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 19:20:05.720399   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 19:20:05.720509   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 19:20:05.726123   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 19:20:05.737940   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 19:20:05.749374   29935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 19:20:05.753711   29935 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 19:20:05.753764   29935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 19:20:05.759277   29935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
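
The three certificate blocks above all repeat one pattern: copy the PEM into /usr/share/ca-certificates, hash it with `openssl x509 -hash -noout`, and symlink /etc/ssl/certs/<hash>.0 (here b5213941.0, 51391683.0, 3ec20f2e.0) back to the PEM so the system trust store picks it up. A minimal Go sketch of that step, assuming a local openssl binary and write access to /etc/ssl/certs; in the run above the same commands execute over SSH inside the VM, and linkCACert is an illustrative helper, not minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the log's symlink step: compute the OpenSSL subject
// hash of a CA certificate and point /etc/ssl/certs/<hash>.0 at it.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs` in the log
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
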
	I0829 19:20:05.775925   29935 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:20:05.781231   29935 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 19:20:05.781290   29935 kubeadm.go:934] updating node {m03 192.168.39.178 8443 v1.31.0 crio true true} ...
	I0829 19:20:05.781368   29935 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-505269-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
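
The unit drop-in rendered at kubeadm.go:946 above uses a standard systemd idiom: the bare `ExecStart=` line clears the packaged command so the following `ExecStart=` can substitute minikube's own kubelet invocation. A small sketch that rebuilds just that flag line, with the version and node values copied from the log (kubeletExecStart is a hypothetical helper):

package main

import (
	"fmt"
	"strings"
)

// kubeletExecStart reassembles the ExecStart command from the drop-in above.
func kubeletExecStart(version, nodeName, nodeIP string) string {
	return strings.Join([]string{
		"/var/lib/minikube/binaries/" + version + "/kubelet",
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}, " ")
}

func main() {
	fmt.Println(kubeletExecStart("v1.31.0", "ha-505269-m03", "192.168.39.178"))
}
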
	I0829 19:20:05.781391   29935 kube-vip.go:115] generating kube-vip config ...
	I0829 19:20:05.781422   29935 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 19:20:05.799179   29935 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 19:20:05.799233   29935 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
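
kube-vip runs as a static pod on each control-plane node; the env block above enables ARP-based leader election (vip_leaderelection against the plndr-cp-lock lease, with 5s/3s/1s lease timings) so exactly one member answers for the VIP 192.168.39.254, while lb_enable spreads API traffic across members on port 8443. A sketch of rendering such a manifest from per-cluster values; the template text here is trimmed and illustrative, not minikube's actual one:

package main

import (
	"os"
	"text/template"
)

// A cut-down stand-in for the manifest printed at kube-vip.go:137 above;
// only the values that vary per cluster are parameterized.
const kubeVIPTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - {name: vip_interface, value: {{.Interface}}}
    - {name: address, value: {{.VIP}}}
    - {name: port, value: "{{.Port}}"}
    - {name: vip_leaderelection, value: "true"}
  hostNetwork: true
`

type vipConfig struct {
	Interface string
	VIP       string
	Port      int
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVIPTmpl))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, vipConfig{Interface: "eth0", VIP: "192.168.39.254", Port: 8443})
}
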
	I0829 19:20:05.799281   29935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:20:05.809020   29935 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0829 19:20:05.809071   29935 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0829 19:20:05.820291   29935 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0829 19:20:05.820323   29935 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0829 19:20:05.820338   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:20:05.820346   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 19:20:05.820293   29935 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0829 19:20:05.820401   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 19:20:05.820428   29935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 19:20:05.820460   29935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 19:20:05.840900   29935 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 19:20:05.840951   29935 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0829 19:20:05.840972   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0829 19:20:05.841004   29935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 19:20:05.841032   29935 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0829 19:20:05.841065   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0829 19:20:05.851928   29935 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0829 19:20:05.851958   29935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
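
Each binary is fetched from dl.k8s.io and checked against the .sha256 companion file named in the checksum=file: URLs above. A hedged sketch of that download-and-verify pairing (fetchVerified is an illustrative helper, not minikube's downloader, and the dl.k8s.io .sha256 files contain a bare hex digest):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dest and rejects the file if its SHA-256
// digest differs from the one published at url+".sha256".
func fetchVerified(url, dest string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	want := strings.Fields(string(sumBytes))[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash while writing so the file is only streamed once.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	if err := fetchVerified("https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl", "/tmp/kubectl"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
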
	I0829 19:20:06.705142   29935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0829 19:20:06.714772   29935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0829 19:20:06.732182   29935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:20:06.748618   29935 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0829 19:20:06.765652   29935 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 19:20:06.769891   29935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
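
The /etc/hosts one-liner above is idempotent: the `grep -v` strips any existing control-plane.minikube.internal mapping before the VIP entry is appended, so reruns never accumulate duplicate lines. The same edit expressed in Go, as a sketch (ensureHostsEntry is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line already mapping name, then appends the
// desired "ip<TAB>name" pair, mirroring the log's shell pipeline.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // the grep -v part of the pipeline
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
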
	I0829 19:20:06.782268   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:20:06.910290   29935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:20:06.930947   29935 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:20:06.931306   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:20:06.931352   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:20:06.947375   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44309
	I0829 19:20:06.947743   29935 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:20:06.948189   29935 main.go:141] libmachine: Using API Version  1
	I0829 19:20:06.948211   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:20:06.948488   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:20:06.948648   29935 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:20:06.948800   29935 start.go:317] joinCluster: &{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:20:06.948937   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0829 19:20:06.948953   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:20:06.951940   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:20:06.952441   29935 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:20:06.952468   29935 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:20:06.952639   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:20:06.952802   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:20:06.952945   29935 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:20:06.953064   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:20:07.125275   29935 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:20:07.125335   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6c593t.iltwpo34orwpj622 --discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-505269-m03 --control-plane --apiserver-advertise-address=192.168.39.178 --apiserver-bind-port=8443"
	I0829 19:20:28.856230   29935 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6c593t.iltwpo34orwpj622 --discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-505269-m03 --control-plane --apiserver-advertise-address=192.168.39.178 --apiserver-bind-port=8443": (21.730873438s)
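
The join above pins the cluster CA by hash (--discovery-token-ca-cert-hash) and, because this is a control-plane join, also advertises the new member's own API endpoint. A sketch that reassembles the invocation from its parts; the token and hash arguments here are placeholders, not the run's real credentials, and joinCommand is a hypothetical helper:

package main

import (
	"fmt"
	"strings"
)

// joinCommand rebuilds the control-plane join command seen in the log.
func joinCommand(endpoint, token, caHash, nodeName, advertiseIP string) string {
	return strings.Join([]string{
		"kubeadm", "join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", "sha256:" + caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		"--apiserver-bind-port=8443",
	}, " ")
}

func main() {
	fmt.Println(joinCommand("control-plane.minikube.internal:8443",
		"<token>", "<ca-cert-hash>", "ha-505269-m03", "192.168.39.178"))
}
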
	I0829 19:20:28.856260   29935 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0829 19:20:29.389873   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-505269-m03 minikube.k8s.io/updated_at=2024_08_29T19_20_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=ha-505269 minikube.k8s.io/primary=false
	I0829 19:20:29.528574   29935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-505269-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0829 19:20:29.657985   29935 start.go:319] duration metric: took 22.709181661s to joinCluster
	I0829 19:20:29.658072   29935 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:20:29.658455   29935 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:20:29.659354   29935 out.go:177] * Verifying Kubernetes components...
	I0829 19:20:29.660584   29935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:20:29.929819   29935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:20:29.980574   29935 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:20:29.980908   29935 kapi.go:59] client config for ha-505269: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.crt", KeyFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key", CAFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0829 19:20:29.981000   29935 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.56:8443
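
kubeadm.go:483 swaps the kubeconfig's VIP endpoint (192.168.39.254) for the primary's direct address, presumably so the verification loop does not depend on the VIP converging while the new member is still joining. The equivalent override on a client-go rest.Config is a single field; a sketch, with the endpoint copied from the log and the kubeconfig path assumed to be the default:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Talk to a concrete control-plane node instead of the HA VIP.
	cfg.Host = "https://192.168.39.56:8443"
	fmt.Println("using endpoint", cfg.Host)
}
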
	I0829 19:20:29.981249   29935 node_ready.go:35] waiting up to 6m0s for node "ha-505269-m03" to be "Ready" ...
	I0829 19:20:29.981331   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:29.981343   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:29.981354   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:29.981364   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:29.984947   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:30.481420   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:30.481443   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:30.481453   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:30.481520   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:30.485150   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:30.982262   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:30.982295   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:30.982317   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:30.982321   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:30.986168   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:31.482355   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:31.482375   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:31.482385   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:31.482390   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:31.485877   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:31.982288   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:31.982311   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:31.982319   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:31.982324   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:31.985761   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:31.986377   29935 node_ready.go:53] node "ha-505269-m03" has status "Ready":"False"
	I0829 19:20:32.482114   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:32.482140   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:32.482151   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:32.482159   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:32.486263   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:20:32.982272   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:32.982290   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:32.982298   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:32.982302   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:32.991256   29935 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0829 19:20:33.482440   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:33.482463   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:33.482470   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:33.482473   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:33.485571   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:33.981512   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:33.981537   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:33.981546   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:33.981551   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:33.984753   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:34.481902   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:34.481927   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:34.481939   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:34.481944   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:34.487732   29935 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 19:20:34.488532   29935 node_ready.go:53] node "ha-505269-m03" has status "Ready":"False"
	I0829 19:20:34.982434   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:34.982458   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:34.982468   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:34.982475   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:34.986117   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:35.481581   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:35.481606   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:35.481614   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:35.481619   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:35.485197   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:35.982264   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:35.982285   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:35.982293   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:35.982297   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:35.986014   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:36.482100   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:36.482126   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:36.482137   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:36.482144   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:36.485971   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:36.982458   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:36.982481   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:36.982491   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:36.982497   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:36.986178   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:36.986785   29935 node_ready.go:53] node "ha-505269-m03" has status "Ready":"False"
	I0829 19:20:37.481435   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:37.481458   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:37.481466   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:37.481471   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:37.484702   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:37.981802   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:37.981825   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:37.981835   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:37.981842   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:37.984991   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:38.481513   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:38.481535   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:38.481542   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:38.481547   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:38.484964   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:38.982198   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:38.982218   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:38.982226   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:38.982230   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:38.985989   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:39.481877   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:39.481908   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:39.481918   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:39.481924   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:39.486466   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:20:39.487121   29935 node_ready.go:53] node "ha-505269-m03" has status "Ready":"False"
	I0829 19:20:39.981578   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:39.981600   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:39.981610   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:39.981616   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:39.985393   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:40.481690   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:40.481716   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:40.481728   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:40.481734   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:40.485374   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:40.981506   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:40.981527   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:40.981534   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:40.981540   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:40.984925   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:41.482093   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:41.482114   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:41.482122   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:41.482126   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:41.485704   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:41.981443   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:41.981467   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:41.981477   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:41.981482   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:41.985369   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:41.986006   29935 node_ready.go:53] node "ha-505269-m03" has status "Ready":"False"
	I0829 19:20:42.481835   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:42.481857   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:42.481866   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:42.481871   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:42.485076   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:42.982232   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:42.982254   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:42.982261   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:42.982265   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:42.985977   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:43.482410   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:43.482432   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:43.482441   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:43.482445   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:43.485535   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:43.981511   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:43.981532   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:43.981540   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:43.981544   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:43.984568   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:44.481554   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:44.481583   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:44.481594   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:44.481602   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:44.485552   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:44.486160   29935 node_ready.go:53] node "ha-505269-m03" has status "Ready":"False"
	I0829 19:20:44.982013   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:44.982040   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:44.982051   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:44.982057   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:44.985727   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:45.481987   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:45.482010   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:45.482021   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:45.482030   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:45.486809   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:20:45.981824   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:45.981848   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:45.981858   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:45.981865   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:45.985791   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:46.481866   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:46.481886   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:46.481894   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:46.481897   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:46.484931   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:46.982223   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:46.982247   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:46.982255   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:46.982258   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:46.988590   29935 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0829 19:20:46.989158   29935 node_ready.go:53] node "ha-505269-m03" has status "Ready":"False"
	I0829 19:20:47.481941   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:47.481967   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:47.481978   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:47.481990   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:47.485439   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:47.981786   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:47.981809   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:47.981821   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:47.981827   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:47.985543   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:47.986213   29935 node_ready.go:49] node "ha-505269-m03" has status "Ready":"True"
	I0829 19:20:47.986234   29935 node_ready.go:38] duration metric: took 18.004967564s for node "ha-505269-m03" to be "Ready" ...
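
The ~18s of GETs above are a simple poll, roughly every 500ms, on the Node object until its Ready condition reports True. A client-go sketch of the same wait, assuming a reasonably recent apimachinery (for PollUntilContextTimeout) and a kubeconfig at the default path; waitNodeReady is an illustrative helper, not minikube's node_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the Node every 500ms until its Ready condition is True.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "ha-505269-m03", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}
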
	I0829 19:20:47.986244   29935 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:20:47.986317   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:20:47.986327   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:47.986334   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:47.986344   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:47.995150   29935 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0829 19:20:48.001222   29935 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-bqqq5" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.001293   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-bqqq5
	I0829 19:20:48.001301   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.001308   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.001315   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.004148   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:48.005069   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:48.005084   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.005093   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.005097   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.009467   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:20:48.009934   29935 pod_ready.go:93] pod "coredns-6f6b679f8f-bqqq5" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:48.009951   29935 pod_ready.go:82] duration metric: took 8.70618ms for pod "coredns-6f6b679f8f-bqqq5" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.009962   29935 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qjgfg" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.010013   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qjgfg
	I0829 19:20:48.010023   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.010033   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.010042   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.012554   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:48.013090   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:48.013103   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.013112   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.013117   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.016105   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:48.016709   29935 pod_ready.go:93] pod "coredns-6f6b679f8f-qjgfg" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:48.016730   29935 pod_ready.go:82] duration metric: took 6.760466ms for pod "coredns-6f6b679f8f-qjgfg" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.016742   29935 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.016810   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/etcd-ha-505269
	I0829 19:20:48.016820   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.016827   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.016830   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.019390   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:48.019901   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:48.019917   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.019927   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.019932   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.022379   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:48.022873   29935 pod_ready.go:93] pod "etcd-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:48.022891   29935 pod_ready.go:82] duration metric: took 6.141778ms for pod "etcd-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.022902   29935 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.022959   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/etcd-ha-505269-m02
	I0829 19:20:48.022970   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.022980   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.022988   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.025320   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:48.025822   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:48.025835   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.025844   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.025848   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.028278   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:48.028886   29935 pod_ready.go:93] pod "etcd-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:48.028903   29935 pod_ready.go:82] duration metric: took 5.990484ms for pod "etcd-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.028915   29935 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.182329   29935 request.go:632] Waited for 153.325482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/etcd-ha-505269-m03
	I0829 19:20:48.182397   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/etcd-ha-505269-m03
	I0829 19:20:48.182404   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.182412   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.182416   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.185893   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:48.381959   29935 request.go:632] Waited for 195.278024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:48.382035   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:48.382043   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.382054   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.382063   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.385227   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:48.385761   29935 pod_ready.go:93] pod "etcd-ha-505269-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:48.385777   29935 pod_ready.go:82] duration metric: took 356.852127ms for pod "etcd-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
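
The request.go:632 "Waited for ... due to client-side throttling" lines come from client-go's own token-bucket rate limiter (default QPS 5, burst 10) pacing this burst of pod and node GETs; as the message notes, it is not server-side priority and fairness. Where the self-imposed waits matter, the limits are raised on rest.Config before building the clientset; a minimal sketch:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Loosen the client-side token bucket (defaults: QPS 5, Burst 10).
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}
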
	I0829 19:20:48.385793   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.581962   29935 request.go:632] Waited for 196.112994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269
	I0829 19:20:48.582027   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269
	I0829 19:20:48.582035   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.582045   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.582050   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.585745   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:48.782792   29935 request.go:632] Waited for 196.38951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:48.782865   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:48.782874   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.782883   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.782888   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.786830   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:48.787392   29935 pod_ready.go:93] pod "kube-apiserver-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:48.787418   29935 pod_ready.go:82] duration metric: took 401.617326ms for pod "kube-apiserver-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.787431   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:48.982370   29935 request.go:632] Waited for 194.872484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269-m02
	I0829 19:20:48.982439   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269-m02
	I0829 19:20:48.982445   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:48.982452   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:48.982456   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:48.985952   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:49.181994   29935 request.go:632] Waited for 195.292396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:49.182057   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:49.182063   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:49.182073   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:49.182079   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:49.185734   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:49.186279   29935 pod_ready.go:93] pod "kube-apiserver-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:49.186301   29935 pod_ready.go:82] duration metric: took 398.861133ms for pod "kube-apiserver-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:49.186316   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:49.382384   29935 request.go:632] Waited for 196.001794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269-m03
	I0829 19:20:49.382463   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-505269-m03
	I0829 19:20:49.382470   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:49.382480   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:49.382490   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:49.385360   29935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 19:20:49.582503   29935 request.go:632] Waited for 196.233778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:49.582587   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:49.582596   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:49.582602   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:49.582608   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:49.586046   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:49.586618   29935 pod_ready.go:93] pod "kube-apiserver-ha-505269-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:49.586640   29935 pod_ready.go:82] duration metric: took 400.317248ms for pod "kube-apiserver-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:49.586652   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:49.782739   29935 request.go:632] Waited for 196.011295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269
	I0829 19:20:49.782792   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269
	I0829 19:20:49.782798   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:49.782806   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:49.782811   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:49.786083   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:49.982229   29935 request.go:632] Waited for 195.413692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:49.982288   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:49.982295   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:49.982309   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:49.982324   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:49.985812   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:49.986374   29935 pod_ready.go:93] pod "kube-controller-manager-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:49.986391   29935 pod_ready.go:82] duration metric: took 399.731157ms for pod "kube-controller-manager-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:49.986401   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:50.182741   29935 request.go:632] Waited for 196.282501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269-m02
	I0829 19:20:50.182807   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269-m02
	I0829 19:20:50.182815   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:50.182826   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:50.182834   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:50.186455   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:50.382576   29935 request.go:632] Waited for 195.348282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:50.382664   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:50.382677   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:50.382686   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:50.382699   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:50.387644   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:20:50.388308   29935 pod_ready.go:93] pod "kube-controller-manager-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:50.388330   29935 pod_ready.go:82] duration metric: took 401.922653ms for pod "kube-controller-manager-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:50.388339   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:50.582413   29935 request.go:632] Waited for 194.012728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269-m03
	I0829 19:20:50.582507   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-505269-m03
	I0829 19:20:50.582516   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:50.582524   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:50.582530   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:50.586210   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:50.782734   29935 request.go:632] Waited for 195.349333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:50.782793   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:50.782801   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:50.782810   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:50.782820   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:50.786170   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:50.787164   29935 pod_ready.go:93] pod "kube-controller-manager-ha-505269-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:50.787188   29935 pod_ready.go:82] duration metric: took 398.842979ms for pod "kube-controller-manager-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:50.787199   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hx822" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:50.982280   29935 request.go:632] Waited for 195.006936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hx822
	I0829 19:20:50.982333   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hx822
	I0829 19:20:50.982339   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:50.982347   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:50.982351   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:50.985894   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:51.182836   29935 request.go:632] Waited for 196.376293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:51.182899   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:51.182904   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:51.182911   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:51.182915   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:51.186003   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:51.186628   29935 pod_ready.go:93] pod "kube-proxy-hx822" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:51.186650   29935 pod_ready.go:82] duration metric: took 399.442284ms for pod "kube-proxy-hx822" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:51.186663   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jxbdt" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:51.382672   29935 request.go:632] Waited for 195.919961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxbdt
	I0829 19:20:51.382733   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxbdt
	I0829 19:20:51.382738   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:51.382747   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:51.382751   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:51.385879   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:51.581977   29935 request.go:632] Waited for 195.27634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:51.582034   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:51.582041   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:51.582049   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:51.582055   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:51.585400   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:51.586012   29935 pod_ready.go:93] pod "kube-proxy-jxbdt" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:51.586033   29935 pod_ready.go:82] duration metric: took 399.362235ms for pod "kube-proxy-jxbdt" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:51.586046   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s6zxk" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:51.782211   29935 request.go:632] Waited for 196.09594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6zxk
	I0829 19:20:51.782268   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6zxk
	I0829 19:20:51.782274   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:51.782282   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:51.782288   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:51.786430   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:20:51.982473   29935 request.go:632] Waited for 195.29556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:51.982549   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:51.982560   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:51.982574   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:51.982584   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:51.985805   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:51.986410   29935 pod_ready.go:93] pod "kube-proxy-s6zxk" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:51.986428   29935 pod_ready.go:82] duration metric: took 400.375683ms for pod "kube-proxy-s6zxk" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:51.986437   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:52.182484   29935 request.go:632] Waited for 195.979328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269
	I0829 19:20:52.182549   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269
	I0829 19:20:52.182556   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:52.182566   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:52.182574   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:52.185900   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:52.381961   29935 request.go:632] Waited for 195.503299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:52.382039   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269
	I0829 19:20:52.382050   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:52.382061   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:52.382070   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:52.385244   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:52.385651   29935 pod_ready.go:93] pod "kube-scheduler-ha-505269" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:52.385669   29935 pod_ready.go:82] duration metric: took 399.226177ms for pod "kube-scheduler-ha-505269" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:52.385678   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:52.582815   29935 request.go:632] Waited for 197.051311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269-m02
	I0829 19:20:52.582890   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269-m02
	I0829 19:20:52.582898   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:52.582908   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:52.582927   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:52.586478   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:52.782093   29935 request.go:632] Waited for 194.956257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:52.782155   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m02
	I0829 19:20:52.782175   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:52.782188   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:52.782192   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:52.785743   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:52.786454   29935 pod_ready.go:93] pod "kube-scheduler-ha-505269-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:52.786475   29935 pod_ready.go:82] duration metric: took 400.790166ms for pod "kube-scheduler-ha-505269-m02" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:52.786488   29935 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:52.982701   29935 request.go:632] Waited for 196.131453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269-m03
	I0829 19:20:52.982775   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-505269-m03
	I0829 19:20:52.982786   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:52.982797   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:52.982807   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:52.986720   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:53.182841   29935 request.go:632] Waited for 195.463735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:53.182893   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes/ha-505269-m03
	I0829 19:20:53.182906   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:53.182918   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:53.182931   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:53.186220   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:53.186854   29935 pod_ready.go:93] pod "kube-scheduler-ha-505269-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 19:20:53.186873   29935 pod_ready.go:82] duration metric: took 400.378159ms for pod "kube-scheduler-ha-505269-m03" in "kube-system" namespace to be "Ready" ...
	I0829 19:20:53.186887   29935 pod_ready.go:39] duration metric: took 5.200628119s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
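The "Ready" waits above poll each pod's PodReady status condition until it reports True. A minimal client-go sketch of that check (a hypothetical helper, not minikube's actual pod_ready.go; `cs` is an assumed pre-built clientset):

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the named pod currently has the
    // PodReady condition set to True -- the signal the waits above
    // are polling for.
    func isPodReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }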
	I0829 19:20:53.186903   29935 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:20:53.186949   29935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:20:53.202916   29935 api_server.go:72] duration metric: took 23.544810705s to wait for apiserver process to appear ...
	I0829 19:20:53.202938   29935 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:20:53.202954   29935 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I0829 19:20:53.207128   29935 api_server.go:279] https://192.168.39.56:8443/healthz returned 200:
	ok
	I0829 19:20:53.207200   29935 round_trippers.go:463] GET https://192.168.39.56:8443/version
	I0829 19:20:53.207211   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:53.207219   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:53.207222   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:53.208011   29935 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0829 19:20:53.208081   29935 api_server.go:141] control plane version: v1.31.0
	I0829 19:20:53.208098   29935 api_server.go:131] duration metric: took 5.15509ms to wait for apiserver health ...
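The healthz probe logged above is a plain HTTPS GET that must return 200 with body "ok". A rough stand-alone equivalent; skipping TLS verification keeps the sketch short, whereas a real client should trust the cluster's CA:

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    // checkHealthz issues the same probe as api_server.go above:
    // GET <endpoint>/healthz, expecting HTTP 200.
    func checkHealthz(endpoint string) error {
        client := &http.Client{Transport: &http.Transport{
            // Illustration only: verify against the cluster CA in real code.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get(endpoint + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }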
	I0829 19:20:53.208107   29935 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:20:53.382562   29935 request.go:632] Waited for 174.349701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:20:53.382626   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:20:53.382642   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:53.382653   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:53.382660   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:53.390094   29935 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0829 19:20:53.397222   29935 system_pods.go:59] 24 kube-system pods found
	I0829 19:20:53.397255   29935 system_pods.go:61] "coredns-6f6b679f8f-bqqq5" [801d9cfa-e1ad-4b31-9803-0030543fdc9e] Running
	I0829 19:20:53.397262   29935 system_pods.go:61] "coredns-6f6b679f8f-qjgfg" [12168097-2d3c-467a-b4b5-c0ca7f85e4eb] Running
	I0829 19:20:53.397268   29935 system_pods.go:61] "etcd-ha-505269" [a9cd644c-66f8-419a-be0c-615fc97daf18] Running
	I0829 19:20:53.397274   29935 system_pods.go:61] "etcd-ha-505269-m02" [864d2e94-62a9-4171-87bc-7ec5a3fc6224] Running
	I0829 19:20:53.397279   29935 system_pods.go:61] "etcd-ha-505269-m03" [33af63b3-671f-4404-8499-aea05889ba77] Running
	I0829 19:20:53.397285   29935 system_pods.go:61] "kindnet-7rp6z" [7c922b32-e666-4b00-ab65-505632346112] Running
	I0829 19:20:53.397290   29935 system_pods.go:61] "kindnet-lr2lx" [f12a48e6-faf1-43ea-93bb-21d6526ccd5a] Running
	I0829 19:20:53.397295   29935 system_pods.go:61] "kindnet-sthc8" [3c5a7487-a1b8-4acc-9462-84a2b478f46b] Running
	I0829 19:20:53.397301   29935 system_pods.go:61] "kube-apiserver-ha-505269" [616e3cf5-709a-46a8-8d71-0e709d297ca0] Running
	I0829 19:20:53.397309   29935 system_pods.go:61] "kube-apiserver-ha-505269-m02" [8615f4df-4f47-451a-80c8-d50826a75738] Running
	I0829 19:20:53.397313   29935 system_pods.go:61] "kube-apiserver-ha-505269-m03" [96e976ac-3560-4c87-a5f4-9841ada7162a] Running
	I0829 19:20:53.397320   29935 system_pods.go:61] "kube-controller-manager-ha-505269" [3f81751f-e12f-4a70-a901-db586a66461e] Running
	I0829 19:20:53.397324   29935 system_pods.go:61] "kube-controller-manager-ha-505269-m02" [b0587260-4827-47eb-a3b7-afb5b1fad59b] Running
	I0829 19:20:53.397331   29935 system_pods.go:61] "kube-controller-manager-ha-505269-m03" [ab1975ca-707e-4ac8-9a7e-81f1564b947c] Running
	I0829 19:20:53.397335   29935 system_pods.go:61] "kube-proxy-hx822" [e88a504e-122b-4609-a0cc-4ad3115b3e4e] Running
	I0829 19:20:53.397345   29935 system_pods.go:61] "kube-proxy-jxbdt" [e51729e9-d662-4ea2-9a4f-85f77b269dea] Running
	I0829 19:20:53.397348   29935 system_pods.go:61] "kube-proxy-s6zxk" [77cd7837-5ad2-4775-b909-ea68c0315299] Running
	I0829 19:20:53.397351   29935 system_pods.go:61] "kube-scheduler-ha-505269" [c573cfd8-20ba-46ce-8c0f-b610240ab78d] Running
	I0829 19:20:53.397355   29935 system_pods.go:61] "kube-scheduler-ha-505269-m02" [ba4e7eec-baaa-4c92-84f2-ac50629fea20] Running
	I0829 19:20:53.397358   29935 system_pods.go:61] "kube-scheduler-ha-505269-m03" [1e2254c2-3a7d-42bc-a9ad-669bf55ede4e] Running
	I0829 19:20:53.397361   29935 system_pods.go:61] "kube-vip-ha-505269" [d1734801-9573-45b3-a4a0-9ac45c093b95] Running
	I0829 19:20:53.397364   29935 system_pods.go:61] "kube-vip-ha-505269-m02" [f33d8dab-fb6f-46cf-b508-1e0eae03cad2] Running
	I0829 19:20:53.397367   29935 system_pods.go:61] "kube-vip-ha-505269-m03" [dfc5cf61-552b-42c7-87a1-b311d4dd57b1] Running
	I0829 19:20:53.397369   29935 system_pods.go:61] "storage-provisioner" [6b7cd00a-94da-4e42-b7ae-289aab759c4f] Running
	I0829 19:20:53.397375   29935 system_pods.go:74] duration metric: took 189.259337ms to wait for pod list to return data ...
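The 24-pod sweep above is a single List call against the kube-system namespace followed by a per-pod phase check. A sketch of the same pattern (hypothetical helper; `cs` is an assumed clientset):

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // allSystemPodsRunning lists kube-system pods once and checks that
    // every pod reports phase Running, as in the sweep above.
    func allSystemPodsRunning(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                fmt.Printf("pod %q is %s, not Running\n", p.Name, p.Status.Phase)
                return false, nil
            }
        }
        return true, nil
    }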
	I0829 19:20:53.397384   29935 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:20:53.582809   29935 request.go:632] Waited for 185.349391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/default/serviceaccounts
	I0829 19:20:53.582877   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/default/serviceaccounts
	I0829 19:20:53.582885   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:53.582897   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:53.582908   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:53.586282   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:53.586394   29935 default_sa.go:45] found service account: "default"
	I0829 19:20:53.586411   29935 default_sa.go:55] duration metric: took 189.019647ms for default service account to be created ...
	I0829 19:20:53.586423   29935 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:20:53.782598   29935 request.go:632] Waited for 196.100839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:20:53.782654   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/namespaces/kube-system/pods
	I0829 19:20:53.782659   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:53.782666   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:53.782670   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:53.787194   29935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 19:20:53.793542   29935 system_pods.go:86] 24 kube-system pods found
	I0829 19:20:53.793567   29935 system_pods.go:89] "coredns-6f6b679f8f-bqqq5" [801d9cfa-e1ad-4b31-9803-0030543fdc9e] Running
	I0829 19:20:53.793572   29935 system_pods.go:89] "coredns-6f6b679f8f-qjgfg" [12168097-2d3c-467a-b4b5-c0ca7f85e4eb] Running
	I0829 19:20:53.793576   29935 system_pods.go:89] "etcd-ha-505269" [a9cd644c-66f8-419a-be0c-615fc97daf18] Running
	I0829 19:20:53.793580   29935 system_pods.go:89] "etcd-ha-505269-m02" [864d2e94-62a9-4171-87bc-7ec5a3fc6224] Running
	I0829 19:20:53.793584   29935 system_pods.go:89] "etcd-ha-505269-m03" [33af63b3-671f-4404-8499-aea05889ba77] Running
	I0829 19:20:53.793587   29935 system_pods.go:89] "kindnet-7rp6z" [7c922b32-e666-4b00-ab65-505632346112] Running
	I0829 19:20:53.793590   29935 system_pods.go:89] "kindnet-lr2lx" [f12a48e6-faf1-43ea-93bb-21d6526ccd5a] Running
	I0829 19:20:53.793594   29935 system_pods.go:89] "kindnet-sthc8" [3c5a7487-a1b8-4acc-9462-84a2b478f46b] Running
	I0829 19:20:53.793597   29935 system_pods.go:89] "kube-apiserver-ha-505269" [616e3cf5-709a-46a8-8d71-0e709d297ca0] Running
	I0829 19:20:53.793601   29935 system_pods.go:89] "kube-apiserver-ha-505269-m02" [8615f4df-4f47-451a-80c8-d50826a75738] Running
	I0829 19:20:53.793604   29935 system_pods.go:89] "kube-apiserver-ha-505269-m03" [96e976ac-3560-4c87-a5f4-9841ada7162a] Running
	I0829 19:20:53.793607   29935 system_pods.go:89] "kube-controller-manager-ha-505269" [3f81751f-e12f-4a70-a901-db586a66461e] Running
	I0829 19:20:53.793610   29935 system_pods.go:89] "kube-controller-manager-ha-505269-m02" [b0587260-4827-47eb-a3b7-afb5b1fad59b] Running
	I0829 19:20:53.793615   29935 system_pods.go:89] "kube-controller-manager-ha-505269-m03" [ab1975ca-707e-4ac8-9a7e-81f1564b947c] Running
	I0829 19:20:53.793619   29935 system_pods.go:89] "kube-proxy-hx822" [e88a504e-122b-4609-a0cc-4ad3115b3e4e] Running
	I0829 19:20:53.793623   29935 system_pods.go:89] "kube-proxy-jxbdt" [e51729e9-d662-4ea2-9a4f-85f77b269dea] Running
	I0829 19:20:53.793629   29935 system_pods.go:89] "kube-proxy-s6zxk" [77cd7837-5ad2-4775-b909-ea68c0315299] Running
	I0829 19:20:53.793632   29935 system_pods.go:89] "kube-scheduler-ha-505269" [c573cfd8-20ba-46ce-8c0f-b610240ab78d] Running
	I0829 19:20:53.793636   29935 system_pods.go:89] "kube-scheduler-ha-505269-m02" [ba4e7eec-baaa-4c92-84f2-ac50629fea20] Running
	I0829 19:20:53.793639   29935 system_pods.go:89] "kube-scheduler-ha-505269-m03" [1e2254c2-3a7d-42bc-a9ad-669bf55ede4e] Running
	I0829 19:20:53.793645   29935 system_pods.go:89] "kube-vip-ha-505269" [d1734801-9573-45b3-a4a0-9ac45c093b95] Running
	I0829 19:20:53.793649   29935 system_pods.go:89] "kube-vip-ha-505269-m02" [f33d8dab-fb6f-46cf-b508-1e0eae03cad2] Running
	I0829 19:20:53.793657   29935 system_pods.go:89] "kube-vip-ha-505269-m03" [dfc5cf61-552b-42c7-87a1-b311d4dd57b1] Running
	I0829 19:20:53.793662   29935 system_pods.go:89] "storage-provisioner" [6b7cd00a-94da-4e42-b7ae-289aab759c4f] Running
	I0829 19:20:53.793671   29935 system_pods.go:126] duration metric: took 207.240387ms to wait for k8s-apps to be running ...
	I0829 19:20:53.793680   29935 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:20:53.793721   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:20:53.808978   29935 system_svc.go:56] duration metric: took 15.288575ms WaitForService to wait for kubelet
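The kubelet check above shells out to systemd over SSH; `systemctl is-active --quiet` prints nothing and answers purely through its exit status (0 means active). A simplified local version, without minikube's SSH runner:

    import "os/exec"

    // kubeletActive reports whether the kubelet systemd unit is active.
    // The --quiet flag suppresses output, so only the exit code matters.
    func kubeletActive() bool {
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }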
	I0829 19:20:53.809005   29935 kubeadm.go:582] duration metric: took 24.150901223s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:20:53.809022   29935 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:20:53.982419   29935 request.go:632] Waited for 173.320157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.56:8443/api/v1/nodes
	I0829 19:20:53.982498   29935 round_trippers.go:463] GET https://192.168.39.56:8443/api/v1/nodes
	I0829 19:20:53.982504   29935 round_trippers.go:469] Request Headers:
	I0829 19:20:53.982512   29935 round_trippers.go:473]     Accept: application/json, */*
	I0829 19:20:53.982517   29935 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 19:20:53.986093   29935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 19:20:53.988888   29935 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:20:53.988913   29935 node_conditions.go:123] node cpu capacity is 2
	I0829 19:20:53.988924   29935 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:20:53.988928   29935 node_conditions.go:123] node cpu capacity is 2
	I0829 19:20:53.988931   29935 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:20:53.988934   29935 node_conditions.go:123] node cpu capacity is 2
	I0829 19:20:53.988938   29935 node_conditions.go:105] duration metric: took 179.911843ms to run NodePressure ...
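The NodePressure pass above reads ephemeral-storage and cpu straight out of each node's status capacity. A compact client-go equivalent (hypothetical helper; `cs` is an assumed clientset):

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity prints the same per-node figures logged above.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            // Capacity is a ResourceList: map[ResourceName]resource.Quantity.
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
        }
        return nil
    }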
	I0829 19:20:53.988948   29935 start.go:241] waiting for startup goroutines ...
	I0829 19:20:53.988965   29935 start.go:255] writing updated cluster config ...
	I0829 19:20:53.989220   29935 ssh_runner.go:195] Run: rm -f paused
	I0829 19:20:54.039742   29935 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:20:54.041943   29935 out.go:177] * Done! kubectl is now configured to use "ha-505269" cluster and "default" namespace by default
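A note on the recurring "Waited for ... due to client-side throttling, not priority and fairness" lines throughout the run: the delay comes from client-go's own token-bucket rate limiter, not from server-side API Priority and Fairness. The limiter is configured per rest.Config; a sketch of raising it (values are illustrative and `kubeconfigPath` is assumed):

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFasterClient builds a clientset with a higher client-side rate
    // limit; client-go's defaults are QPS 5 and Burst 10, which is what
    // produced the ~200ms waits in the log above.
    func newFasterClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }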
	
	
	==> CRI-O <==
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.350774674Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-psss7,Uid:69c11597-6cac-437a-9860-fc1a66cdc304,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959255269093474,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:20:54.949336771Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2c5d2aad5519947556e7e1a184c260499dd950c4ebd176a2189e8fc06fa32cfa,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6b7cd00a-94da-4e42-b7ae-289aab759c4f,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1724959111789428855,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-29T19:18:31.473634295Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-qjgfg,Uid:12168097-2d3c-467a-b4b5-c0ca7f85e4eb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959111782335464,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:18:31.471036509Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-bqqq5,Uid:801d9cfa-e1ad-4b31-9803-0030543fdc9e,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1724959111772398657,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801d9cfa-e1ad-4b31-9803-0030543fdc9e,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:18:31.464611802Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&PodSandboxMetadata{Name:kube-proxy-hx822,Uid:e88a504e-122b-4609-a0cc-4ad3115b3e4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959097198560581,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-08-29T19:18:16.877349396Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&PodSandboxMetadata{Name:kindnet-7rp6z,Uid:7c922b32-e666-4b00-ab65-505632346112,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959097191816001,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:18:16.875685703Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0e21f73ac8e2264b4a929fcd33f71b1ffda374ae03b0bd600fb8c7a44c8bef74,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-505269,Uid:d82b78fe84e206c02eb995f9d886b23c,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1724959085483628539,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.56:8443,kubernetes.io/config.hash: d82b78fe84e206c02eb995f9d886b23c,kubernetes.io/config.seen: 2024-08-29T19:18:04.402568147Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&PodSandboxMetadata{Name:etcd-ha-505269,Uid:2658d81c7919220d900309ffd29970c4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959085474793507,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658
d81c7919220d900309ffd29970c4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.56:2379,kubernetes.io/config.hash: 2658d81c7919220d900309ffd29970c4,kubernetes.io/config.seen: 2024-08-29T19:18:04.402563696Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-505269,Uid:ceba94f2170a08ee5a3d92beb3c9ffca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959085464800201,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ceba94f2170a08ee5a3d92beb3c9ffca,kubernetes.io/config.seen: 2024-08-29T19:18:04.402570367Z,kubernetes.io/config.source: file
,},RuntimeHandler:,},&PodSandbox{Id:b5a438045598a2d267d39e01f1e41df597c13d1ad48368887838f09efcda52ac,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-505269,Uid:aa47351bb1c351808ace9dd407df7743,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959085460961550,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa47351bb1c351808ace9dd407df7743,},Annotations:map[string]string{kubernetes.io/config.hash: aa47351bb1c351808ace9dd407df7743,kubernetes.io/config.seen: 2024-08-29T19:18:04.402571045Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ac333ce918ddeec2a3499825104982d2e3230fd22b85b6e99bace076fdf6e1dd,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-505269,Uid:840e9d9d59afee1514ac6551d154c955,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724959085454295081,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.con
tainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 840e9d9d59afee1514ac6551d154c955,kubernetes.io/config.seen: 2024-08-29T19:18:04.402569371Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=47e181f7-7411-40c1-b48a-15af3b15261e name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.351511795Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d6b17ab-2e50-41cf-8d20-0fc8e30a3fd2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.351577654Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d6b17ab-2e50-41cf-8d20-0fc8e30a3fd2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.351850030Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ed600112468d9762d0ad5c0e554ca486c5cfc592114271d25cd84f5c09187db,PodSandboxId:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724959256520731552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75,PodSandboxId:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112076509057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd,PodSandboxId:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112071160839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbac4cefd2d1e052f3d856dea541761ee436725097d33698eed254a59c810fe,PodSandboxId:2c5d2aad5519947556e7e1a184c260499dd950c4ebd176a2189e8fc06fa32cfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724959111958933793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604,PodSandboxId:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724959100077619706,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c,PodSandboxId:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495909
7303115391,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066b1cbdd3861a8cf28333b764086e47ace1987acec778ff2b2d0aa1973af37a,PodSandboxId:b5a438045598a2d267d39e01f1e41df597c13d1ad48368887838f09efcda52ac,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495908828
3659292,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa47351bb1c351808ace9dd407df7743,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc,PodSandboxId:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959085808486991,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960e616b3c0582c8ccb0ec4fea8d02c0e2bfd2d0f81a273ac203cbf5eb6e4d6c,PodSandboxId:ac333ce918ddeec2a3499825104982d2e3230fd22b85b6e99bace076fdf6e1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959085742080078,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0,PodSandboxId:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959085672946345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b2531e5990db96ea59b71af6b9036995e91f182477ba5d381b96877d38e296,PodSandboxId:0e21f73ac8e2264b4a929fcd33f71b1ffda374ae03b0bd600fb8c7a44c8bef74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959085658194261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d6b17ab-2e50-41cf-8d20-0fc8e30a3fd2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.387265669Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6707ebc1-a61e-4c92-b2a8-8b09cab7fec6 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.387356979Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6707ebc1-a61e-4c92-b2a8-8b09cab7fec6 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.388140326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a6c395a-5632-44a7-8b02-6fe9069f0f65 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.388678214Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959530388652858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a6c395a-5632-44a7-8b02-6fe9069f0f65 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.389315727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bdee8b8f-5f86-4099-a71b-75583c93ed40 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.389390433Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bdee8b8f-5f86-4099-a71b-75583c93ed40 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.389614977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ed600112468d9762d0ad5c0e554ca486c5cfc592114271d25cd84f5c09187db,PodSandboxId:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724959256520731552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75,PodSandboxId:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112076509057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd,PodSandboxId:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112071160839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbac4cefd2d1e052f3d856dea541761ee436725097d33698eed254a59c810fe,PodSandboxId:2c5d2aad5519947556e7e1a184c260499dd950c4ebd176a2189e8fc06fa32cfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724959111958933793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604,PodSandboxId:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724959100077619706,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c,PodSandboxId:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495909
7303115391,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066b1cbdd3861a8cf28333b764086e47ace1987acec778ff2b2d0aa1973af37a,PodSandboxId:b5a438045598a2d267d39e01f1e41df597c13d1ad48368887838f09efcda52ac,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495908828
3659292,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa47351bb1c351808ace9dd407df7743,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc,PodSandboxId:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959085808486991,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960e616b3c0582c8ccb0ec4fea8d02c0e2bfd2d0f81a273ac203cbf5eb6e4d6c,PodSandboxId:ac333ce918ddeec2a3499825104982d2e3230fd22b85b6e99bace076fdf6e1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959085742080078,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0,PodSandboxId:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959085672946345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b2531e5990db96ea59b71af6b9036995e91f182477ba5d381b96877d38e296,PodSandboxId:0e21f73ac8e2264b4a929fcd33f71b1ffda374ae03b0bd600fb8c7a44c8bef74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959085658194261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bdee8b8f-5f86-4099-a71b-75583c93ed40 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.432213914Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3543fbf-27cd-453f-a470-c3fec909a4f7 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.432328949Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3543fbf-27cd-453f-a470-c3fec909a4f7 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.433917977Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5aa3a53-a057-4ee4-8a7d-29e8327cca36 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.434666019Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959530434634210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5aa3a53-a057-4ee4-8a7d-29e8327cca36 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.435514432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fba634ef-3847-408a-84a1-9864124c57af name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.435602622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fba634ef-3847-408a-84a1-9864124c57af name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.435946483Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ed600112468d9762d0ad5c0e554ca486c5cfc592114271d25cd84f5c09187db,PodSandboxId:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724959256520731552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75,PodSandboxId:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112076509057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd,PodSandboxId:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112071160839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbac4cefd2d1e052f3d856dea541761ee436725097d33698eed254a59c810fe,PodSandboxId:2c5d2aad5519947556e7e1a184c260499dd950c4ebd176a2189e8fc06fa32cfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724959111958933793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604,PodSandboxId:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724959100077619706,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c,PodSandboxId:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495909
7303115391,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066b1cbdd3861a8cf28333b764086e47ace1987acec778ff2b2d0aa1973af37a,PodSandboxId:b5a438045598a2d267d39e01f1e41df597c13d1ad48368887838f09efcda52ac,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495908828
3659292,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa47351bb1c351808ace9dd407df7743,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc,PodSandboxId:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959085808486991,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960e616b3c0582c8ccb0ec4fea8d02c0e2bfd2d0f81a273ac203cbf5eb6e4d6c,PodSandboxId:ac333ce918ddeec2a3499825104982d2e3230fd22b85b6e99bace076fdf6e1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959085742080078,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0,PodSandboxId:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959085672946345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b2531e5990db96ea59b71af6b9036995e91f182477ba5d381b96877d38e296,PodSandboxId:0e21f73ac8e2264b4a929fcd33f71b1ffda374ae03b0bd600fb8c7a44c8bef74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959085658194261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fba634ef-3847-408a-84a1-9864124c57af name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.473925698Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f64bb8f1-bacf-4c5f-8405-ab19384b37ad name=/runtime.v1.RuntimeService/Version
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.474086154Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f64bb8f1-bacf-4c5f-8405-ab19384b37ad name=/runtime.v1.RuntimeService/Version
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.475272382Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8525538-4eb8-48f7-a905-b4908c020d24 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.475715806Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959530475693241,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8525538-4eb8-48f7-a905-b4908c020d24 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.476599407Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cab9a80-b0b7-4d24-9a07-f18fa2f24d58 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.476652338Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cab9a80-b0b7-4d24-9a07-f18fa2f24d58 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:25:30 ha-505269 crio[665]: time="2024-08-29 19:25:30.476874185Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ed600112468d9762d0ad5c0e554ca486c5cfc592114271d25cd84f5c09187db,PodSandboxId:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724959256520731552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75,PodSandboxId:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112076509057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd,PodSandboxId:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959112071160839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbac4cefd2d1e052f3d856dea541761ee436725097d33698eed254a59c810fe,PodSandboxId:2c5d2aad5519947556e7e1a184c260499dd950c4ebd176a2189e8fc06fa32cfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724959111958933793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604,PodSandboxId:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724959100077619706,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c,PodSandboxId:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495909
7303115391,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066b1cbdd3861a8cf28333b764086e47ace1987acec778ff2b2d0aa1973af37a,PodSandboxId:b5a438045598a2d267d39e01f1e41df597c13d1ad48368887838f09efcda52ac,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495908828
3659292,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa47351bb1c351808ace9dd407df7743,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc,PodSandboxId:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959085808486991,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960e616b3c0582c8ccb0ec4fea8d02c0e2bfd2d0f81a273ac203cbf5eb6e4d6c,PodSandboxId:ac333ce918ddeec2a3499825104982d2e3230fd22b85b6e99bace076fdf6e1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959085742080078,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0,PodSandboxId:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959085672946345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b2531e5990db96ea59b71af6b9036995e91f182477ba5d381b96877d38e296,PodSandboxId:0e21f73ac8e2264b4a929fcd33f71b1ffda374ae03b0bd600fb8c7a44c8bef74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959085658194261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6cab9a80-b0b7-4d24-9a07-f18fa2f24d58 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7ed600112468d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   02692297ba9f1       busybox-7dff88458-psss7
	29d7e6c72fdaa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   f43b8211e2c73       coredns-6f6b679f8f-qjgfg
	1bc1a33f68ce7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   6b5276c7cbe29       coredns-6f6b679f8f-bqqq5
	ccbac4cefd2d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   2c5d2aad55199       storage-provisioner
	f5e9dd792be09       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    7 minutes ago       Running             kindnet-cni               0                   1f6e4f500f959       kindnet-7rp6z
	9b0cc96d9477c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      7 minutes ago       Running             kube-proxy                0                   303aabbca3328       kube-proxy-hx822
	066b1cbdd3861       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   b5a438045598a       kube-vip-ha-505269
	52fd2d668a925       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   9d45b7206f46e       etcd-ha-505269
	960e616b3c058       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   ac333ce918dde       kube-controller-manager-ha-505269
	d1f91ce133bed       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   8cfa0a246feb4       kube-scheduler-ha-505269
	65b2531e5990d       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   0e21f73ac8e22       kube-apiserver-ha-505269
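
The table above is the crictl-style view of the same data returned by the RuntimeService/ListContainers responses that dominate the crio debug log; the CONTAINER column is the first 13 characters of each full container ID. A minimal Go sketch of issuing the same CRI calls against the node's cri-socket (unix:///var/run/crio/crio.sock, as the node annotations below report), assuming the google.golang.org/grpc and k8s.io/cri-api modules are available — an illustrative reconstruction, not part of the test harness:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Same endpoint the kubeadm.alpha.kubernetes.io/cri-socket annotation reports.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Mirrors the &VersionRequest{} entries in the log above.
	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// An empty filter takes the "No filters were applied, returning full
	// container list" path logged by server/container_list.go.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// The CONTAINER column above is the first 13 characters of c.Id.
		fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
	}
}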
	
	
	==> coredns [1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd] <==
	[INFO] 10.244.2.2:51225 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283224s
	[INFO] 10.244.2.2:42081 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.040666885s
	[INFO] 10.244.2.2:56495 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000200893s
	[INFO] 10.244.1.2:37640 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114272s
	[INFO] 10.244.1.2:53661 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151146s
	[INFO] 10.244.1.2:33472 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001240454s
	[INFO] 10.244.1.2:57944 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155411s
	[INFO] 10.244.0.4:39369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096421s
	[INFO] 10.244.0.4:46246 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077614s
	[INFO] 10.244.0.4:49913 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073523s
	[INFO] 10.244.0.4:48970 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00128236s
	[INFO] 10.244.0.4:55431 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000054415s
	[INFO] 10.244.0.4:54011 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100096s
	[INFO] 10.244.0.4:57804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008517s
	[INFO] 10.244.2.2:41131 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117965s
	[INFO] 10.244.1.2:45186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106338s
	[INFO] 10.244.1.2:55754 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090657s
	[INFO] 10.244.0.4:56674 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071901s
	[INFO] 10.244.2.2:38366 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108693s
	[INFO] 10.244.2.2:46323 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210179s
	[INFO] 10.244.1.2:45861 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134603s
	[INFO] 10.244.1.2:56113 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085692s
	[INFO] 10.244.1.2:56364 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124593s
	[INFO] 10.244.1.2:47826 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121887s
	[INFO] 10.244.0.4:45102 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000150401s
	
	
	==> coredns [29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75] <==
	[INFO] 10.244.0.4:54551 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000109037s
	[INFO] 10.244.0.4:40210 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001965246s
	[INFO] 10.244.2.2:39736 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000182353s
	[INFO] 10.244.2.2:34550 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003442197s
	[INFO] 10.244.2.2:57439 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145221s
	[INFO] 10.244.2.2:51088 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167632s
	[INFO] 10.244.2.2:52731 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113512s
	[INFO] 10.244.1.2:53021 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120423s
	[INFO] 10.244.1.2:34110 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001929986s
	[INFO] 10.244.1.2:43142 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092944s
	[INFO] 10.244.1.2:53648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107032s
	[INFO] 10.244.0.4:57451 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001845348s
	[INFO] 10.244.2.2:52124 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171457s
	[INFO] 10.244.2.2:35561 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076513s
	[INFO] 10.244.2.2:43265 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081638s
	[INFO] 10.244.1.2:37225 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147344s
	[INFO] 10.244.1.2:48252 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148007s
	[INFO] 10.244.0.4:60295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013086s
	[INFO] 10.244.0.4:48577 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072897s
	[INFO] 10.244.0.4:48965 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087209s
	[INFO] 10.244.2.2:54597 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016109s
	[INFO] 10.244.2.2:38187 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150915s
	[INFO] 10.244.0.4:36462 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093452s
	[INFO] 10.244.0.4:43748 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071292s
	[INFO] 10.244.0.4:55783 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059972s
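
The query pattern in both coredns logs — NXDOMAIN for kubernetes.default.default.svc.cluster.local, NOERROR for kubernetes.default.svc.cluster.local, then the bare kubernetes.default. forwarded upstream (qr,rd,ra, non-authoritative) and also NXDOMAIN — is ordinary resolv.conf search-path expansion: with ndots:5, a name with fewer than five dots is tried against each search suffix before being sent as an absolute query. A minimal sketch, assuming the default pod search list for the default namespace:

package main

import "fmt"

func main() {
	name := "kubernetes.default" // one dot, well under the ndots:5 threshold
	search := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"}

	// Search suffixes are tried first: the first candidate is the NXDOMAIN
	// entry in the log, the second the NOERROR answer (10.96.0.1, per the
	// 1.0.96.10.in-addr.arpa PTR lookups above).
	for _, suffix := range search {
		fmt.Printf("%s.%s.\n", name, suffix)
	}
	// Only then is the bare name sent upstream as an absolute query, which is
	// the forwarded NXDOMAIN recorded in the log.
	fmt.Printf("%s.\n", name)
}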
	
	
	==> describe nodes <==
	Name:               ha-505269
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-505269
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=ha-505269
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T19_18_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:18:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-505269
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:25:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:21:18 +0000   Thu, 29 Aug 2024 19:18:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:21:18 +0000   Thu, 29 Aug 2024 19:18:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:21:18 +0000   Thu, 29 Aug 2024 19:18:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:21:18 +0000   Thu, 29 Aug 2024 19:18:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.56
	  Hostname:    ha-505269
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fddeecce7ac74aa7bff3cef388a156b1
	  System UUID:                fddeecce-7ac7-4aa7-bff3-cef388a156b1
	  Boot ID:                    1446f3e5-6319-4e2f-82e2-8ba9409f038f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-psss7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 coredns-6f6b679f8f-bqqq5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m14s
	  kube-system                 coredns-6f6b679f8f-qjgfg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m14s
	  kube-system                 etcd-ha-505269                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m16s
	  kube-system                 kindnet-7rp6z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m14s
	  kube-system                 kube-apiserver-ha-505269             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 kube-controller-manager-ha-505269    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 kube-proxy-hx822                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-scheduler-ha-505269             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 kube-vip-ha-505269                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m13s                  kube-proxy       
	  Normal  Starting                 7m26s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m25s (x7 over 7m26s)  kubelet          Node ha-505269 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m25s (x8 over 7m26s)  kubelet          Node ha-505269 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m25s (x8 over 7m26s)  kubelet          Node ha-505269 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m16s                  kubelet          Node ha-505269 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m16s                  kubelet          Node ha-505269 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m16s                  kubelet          Node ha-505269 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m15s                  node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	  Normal  NodeReady                6m59s                  kubelet          Node ha-505269 status is now: NodeReady
	  Normal  RegisteredNode           6m13s                  node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	  Normal  RegisteredNode           4m56s                  node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
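
The percentages in the Allocated resources table are the summed requests from the Non-terminated Pods list divided by the node's Allocatable values, truncated to whole percent. A quick check against the ha-505269 figures above (2 CPUs, 2164184Ki memory):

package main

import "fmt"

func main() {
	// cpu requests: 2x100m (coredns) + 100m (etcd) + 100m (kindnet)
	// + 250m (apiserver) + 200m (controller-manager) + 100m (scheduler) = 950m
	cpuRequestsMilli, cpuAllocatableMilli := 950, 2*1000
	fmt.Printf("cpu: %dm (%d%%)\n",
		cpuRequestsMilli, 100*cpuRequestsMilli/cpuAllocatableMilli) // 950m (47%)

	// memory requests: 2x70Mi (coredns) + 100Mi (etcd) + 50Mi (kindnet) = 290Mi
	memRequestsKi, memAllocatableKi := 290*1024, 2164184
	fmt.Printf("memory: %dMi (%d%%)\n",
		memRequestsKi/1024, 100*memRequestsKi/memAllocatableKi) // 290Mi (13%)
}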
	
	
	Name:               ha-505269-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-505269-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=ha-505269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_19_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:19:09 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-505269-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:22:02 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 29 Aug 2024 19:21:12 +0000   Thu, 29 Aug 2024 19:22:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 29 Aug 2024 19:21:12 +0000   Thu, 29 Aug 2024 19:22:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 29 Aug 2024 19:21:12 +0000   Thu, 29 Aug 2024 19:22:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 29 Aug 2024 19:21:12 +0000   Thu, 29 Aug 2024 19:22:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-505269-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc422cc060b34981a3c71775f3af90fa
	  System UUID:                dc422cc0-60b3-4981-a3c7-1775f3af90fa
	  Boot ID:                    b7d47e7c-23e6-4f3e-94a2-225e21964c8c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hcgzg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 etcd-ha-505269-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-sthc8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m21s
	  kube-system                 kube-apiserver-ha-505269-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-505269-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-proxy-jxbdt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-ha-505269-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-vip-ha-505269-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m21s (x8 over 6m21s)  kubelet          Node ha-505269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s (x8 over 6m21s)  kubelet          Node ha-505269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s (x7 over 6m21s)  kubelet          Node ha-505269-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m20s                  node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  RegisteredNode           6m13s                  node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  RegisteredNode           4m56s                  node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  NodeNotReady             2m46s                  node-controller  Node ha-505269-m02 status is now: NodeNotReady
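
The Unknown conditions and node.kubernetes.io/unreachable taints on ha-505269-m02 are what the node controller applies once the kubelet stops posting status (the lease RenewTime above froze at 19:22:02, and NodeNotReady followed). A minimal client-go sketch that surfaces nodes in this state, assuming a reachable kubeconfig at the default location — illustrative only, not part of the test harness:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, cond := range n.Status.Conditions {
			// Once heartbeats stop, the node controller flips Ready to Unknown
			// and adds the unreachable taints seen on m02 above.
			if cond.Type == corev1.NodeReady && cond.Status != corev1.ConditionTrue {
				fmt.Printf("%s: Ready=%s reason=%s since=%s\n",
					n.Name, cond.Status, cond.Reason, cond.LastTransitionTime)
			}
		}
	}
}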
	
	
	Name:               ha-505269-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-505269-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=ha-505269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_20_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:20:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-505269-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:25:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:21:27 +0000   Thu, 29 Aug 2024 19:20:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:21:27 +0000   Thu, 29 Aug 2024 19:20:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:21:27 +0000   Thu, 29 Aug 2024 19:20:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:21:27 +0000   Thu, 29 Aug 2024 19:20:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    ha-505269-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fc042d3e84d419187ce4fd6ad6a07e3
	  System UUID:                7fc042d3-e84d-4191-87ce-4fd6ad6a07e3
	  Boot ID:                    cd9654bc-a3a1-4a00-b205-32dcc7ba1371
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2fh45                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 etcd-ha-505269-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m3s
	  kube-system                 kindnet-lr2lx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m5s
	  kube-system                 kube-apiserver-ha-505269-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-ha-505269-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-proxy-s6zxk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-ha-505269-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-vip-ha-505269-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m                   kube-proxy       
	  Normal  RegisteredNode           5m5s                 node-controller  Node ha-505269-m03 event: Registered Node ha-505269-m03 in Controller
	  Normal  NodeHasSufficientMemory  5m5s (x8 over 5m5s)  kubelet          Node ha-505269-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s (x8 over 5m5s)  kubelet          Node ha-505269-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s (x7 over 5m5s)  kubelet          Node ha-505269-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m3s                 node-controller  Node ha-505269-m03 event: Registered Node ha-505269-m03 in Controller
	  Normal  RegisteredNode           4m56s                node-controller  Node ha-505269-m03 event: Registered Node ha-505269-m03 in Controller
	
	
	Name:               ha-505269-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-505269-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=ha-505269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_21_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:21:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-505269-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:25:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:22:00 +0000   Thu, 29 Aug 2024 19:21:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:22:00 +0000   Thu, 29 Aug 2024 19:21:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:22:00 +0000   Thu, 29 Aug 2024 19:21:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:22:00 +0000   Thu, 29 Aug 2024 19:21:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-505269-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a48c94cc9aca47538967ceed34ba2fed
	  System UUID:                a48c94cc-9aca-4753-8967-ceed34ba2fed
	  Boot ID:                    94ee343a-8690-48fe-a78a-9f0eed928227
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5lkbf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m
	  kube-system                 kube-proxy-b5p66    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 3m55s            kube-proxy       
	  Normal  NodeHasSufficientMemory  4m (x2 over 4m)  kubelet          Node ha-505269-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m (x2 over 4m)  kubelet          Node ha-505269-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m (x2 over 4m)  kubelet          Node ha-505269-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m59s            node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal  RegisteredNode           3m57s            node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal  RegisteredNode           3m56s            node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal  NodeReady                3m41s            kubelet          Node ha-505269-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug29 19:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050955] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040050] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.788550] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.497610] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.581059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.251780] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.063740] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055892] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.200106] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.121292] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.278954] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.975871] systemd-fstab-generator[753]: Ignoring "noauto" option for root device
	[Aug29 19:18] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.064250] kauditd_printk_skb: 158 callbacks suppressed
	[  +8.795220] kauditd_printk_skb: 79 callbacks suppressed
	[  +1.256542] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +6.194607] kauditd_printk_skb: 54 callbacks suppressed
	[Aug29 19:19] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc] <==
	{"level":"warn","ts":"2024-08-29T19:25:30.751869Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.760426Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.769309Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.775368Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.778456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.782043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.789239Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.795546Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.801675Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.805447Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.809179Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.809323Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.816015Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.822452Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.828087Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.832283Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.835426Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.838576Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.839403Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.845024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.848380Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.849944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.852026Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.856058Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:25:30.909044Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:25:30 up 7 min,  0 users,  load average: 0.61, 0.56, 0.31
	Linux ha-505269 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604] <==
	I0829 19:24:51.259723       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:25:01.263497       1 main.go:295] Handling node with IPs: map[192.168.39.178:{}]
	I0829 19:25:01.263685       1 main.go:322] Node ha-505269-m03 has CIDR [10.244.2.0/24] 
	I0829 19:25:01.264066       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:25:01.264167       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:25:01.264380       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:25:01.264483       1 main.go:299] handling current node
	I0829 19:25:01.264568       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:25:01.264662       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:25:11.267167       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:25:11.267212       1 main.go:299] handling current node
	I0829 19:25:11.267237       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:25:11.267243       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:25:11.267381       1 main.go:295] Handling node with IPs: map[192.168.39.178:{}]
	I0829 19:25:11.267406       1 main.go:322] Node ha-505269-m03 has CIDR [10.244.2.0/24] 
	I0829 19:25:11.267533       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:25:11.267572       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:25:21.258116       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:25:21.258226       1 main.go:299] handling current node
	I0829 19:25:21.258272       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:25:21.258301       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:25:21.258476       1 main.go:295] Handling node with IPs: map[192.168.39.178:{}]
	I0829 19:25:21.258523       1 main.go:322] Node ha-505269-m03 has CIDR [10.244.2.0/24] 
	I0829 19:25:21.258673       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:25:21.258706       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [65b2531e5990db96ea59b71af6b9036995e91f182477ba5d381b96877d38e296] <==
	I0829 19:18:14.422957       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 19:18:14.436643       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0829 19:18:14.444895       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 19:18:16.522373       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0829 19:18:16.728762       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0829 19:20:26.726355       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0829 19:20:26.726657       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 10.582µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0829 19:20:26.727797       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0829 19:20:26.729075       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0829 19:20:26.730327       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.000383ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0829 19:20:57.644511       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55132: use of closed network connection
	E0829 19:20:57.838367       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55144: use of closed network connection
	E0829 19:20:58.019896       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55166: use of closed network connection
	E0829 19:20:58.240285       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55194: use of closed network connection
	E0829 19:20:58.423371       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55222: use of closed network connection
	E0829 19:20:58.606211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55246: use of closed network connection
	E0829 19:20:58.781714       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55270: use of closed network connection
	E0829 19:20:58.954357       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55280: use of closed network connection
	E0829 19:20:59.125211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55290: use of closed network connection
	E0829 19:20:59.418090       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55312: use of closed network connection
	E0829 19:20:59.592142       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55330: use of closed network connection
	E0829 19:20:59.762687       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55350: use of closed network connection
	E0829 19:20:59.935901       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55374: use of closed network connection
	E0829 19:21:00.111552       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55396: use of closed network connection
	E0829 19:21:00.285217       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55412: use of closed network connection
	
	
	==> kube-controller-manager [960e616b3c0582c8ccb0ec4fea8d02c0e2bfd2d0f81a273ac203cbf5eb6e4d6c] <==
	I0829 19:21:30.427794       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-505269-m04" podCIDRs=["10.244.3.0/24"]
	I0829 19:21:30.427933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:30.428057       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:30.443076       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:30.519078       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:30.945173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:31.001063       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-505269-m04"
	I0829 19:21:31.066448       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:33.127656       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:33.238447       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:34.634773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:34.717155       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:40.501644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:49.774119       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-505269-m04"
	I0829 19:21:49.774408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:49.792045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:21:51.020797       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:22:00.645170       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:22:44.664748       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m02"
	I0829 19:22:44.664838       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-505269-m04"
	I0829 19:22:44.688247       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m02"
	I0829 19:22:44.765526       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.581395ms"
	I0829 19:22:44.765899       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="98.463µs"
	I0829 19:22:46.068957       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m02"
	I0829 19:22:49.919964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m02"
	
	
	==> kube-proxy [9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:18:17.512883       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:18:17.526500       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.56"]
	E0829 19:18:17.526650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:18:17.569558       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:18:17.569593       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:18:17.569654       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:18:17.573082       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:18:17.573395       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:18:17.573558       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:18:17.576139       1 config.go:197] "Starting service config controller"
	I0829 19:18:17.576284       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:18:17.576358       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:18:17.576385       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:18:17.579686       1 config.go:326] "Starting node config controller"
	I0829 19:18:17.579742       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:18:17.677501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 19:18:17.677687       1 shared_informer.go:320] Caches are synced for service config
	I0829 19:18:17.680371       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0] <==
	E0829 19:18:10.456254       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 19:18:12.513942       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0829 19:20:54.912335       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hcgzg\": pod busybox-7dff88458-hcgzg is already assigned to node \"ha-505269-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hcgzg" node="ha-505269-m02"
	E0829 19:20:54.912578       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c2ae2636-418f-474d-8cc0-8b35b6a63726(default/busybox-7dff88458-hcgzg) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-hcgzg"
	E0829 19:20:54.912642       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hcgzg\": pod busybox-7dff88458-hcgzg is already assigned to node \"ha-505269-m02\"" pod="default/busybox-7dff88458-hcgzg"
	I0829 19:20:54.912696       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-hcgzg" node="ha-505269-m02"
	E0829 19:20:54.968272       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2fh45\": pod busybox-7dff88458-2fh45 is already assigned to node \"ha-505269-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-2fh45" node="ha-505269-m03"
	E0829 19:20:54.968416       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 0dad74f1-a221-4897-96b7-109169b8c6d0(default/busybox-7dff88458-2fh45) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-2fh45"
	E0829 19:20:54.968500       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2fh45\": pod busybox-7dff88458-2fh45 is already assigned to node \"ha-505269-m03\"" pod="default/busybox-7dff88458-2fh45"
	I0829 19:20:54.968611       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-2fh45" node="ha-505269-m03"
	E0829 19:21:30.486227       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-czplj\": pod kindnet-czplj is already assigned to node \"ha-505269-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-czplj" node="ha-505269-m04"
	E0829 19:21:30.486338       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-czplj\": pod kindnet-czplj is already assigned to node \"ha-505269-m04\"" pod="kube-system/kindnet-czplj"
	I0829 19:21:30.486392       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-czplj" node="ha-505269-m04"
	E0829 19:21:30.486895       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-b5p66\": pod kube-proxy-b5p66 is already assigned to node \"ha-505269-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-b5p66" node="ha-505269-m04"
	E0829 19:21:30.487063       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f908ff83-8bf9-44a0-bda1-98f00b910faa(kube-system/kube-proxy-b5p66) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-b5p66"
	E0829 19:21:30.487084       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-b5p66\": pod kube-proxy-b5p66 is already assigned to node \"ha-505269-m04\"" pod="kube-system/kube-proxy-b5p66"
	I0829 19:21:30.487103       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-b5p66" node="ha-505269-m04"
	E0829 19:21:30.554445       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-shg8j\": pod kube-proxy-shg8j is already assigned to node \"ha-505269-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-shg8j" node="ha-505269-m04"
	E0829 19:21:30.554616       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 05405fa6-d40f-446d-ad32-18b243d7b162(kube-system/kube-proxy-shg8j) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-shg8j"
	E0829 19:21:30.554728       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-shg8j\": pod kube-proxy-shg8j is already assigned to node \"ha-505269-m04\"" pod="kube-system/kube-proxy-shg8j"
	I0829 19:21:30.554863       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-shg8j" node="ha-505269-m04"
	E0829 19:21:30.555526       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5lkbf\": pod kindnet-5lkbf is already assigned to node \"ha-505269-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5lkbf" node="ha-505269-m04"
	E0829 19:21:30.558296       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 112e2462-a26a-4f91-a405-dab3468f9071(kube-system/kindnet-5lkbf) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5lkbf"
	E0829 19:21:30.559049       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5lkbf\": pod kindnet-5lkbf is already assigned to node \"ha-505269-m04\"" pod="kube-system/kindnet-5lkbf"
	I0829 19:21:30.559106       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5lkbf" node="ha-505269-m04"
	
	
	==> kubelet <==
	Aug 29 19:24:14 ha-505269 kubelet[1314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:24:14 ha-505269 kubelet[1314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:24:14 ha-505269 kubelet[1314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:24:14 ha-505269 kubelet[1314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:24:14 ha-505269 kubelet[1314]: E0829 19:24:14.487395    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959454486775043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:24:14 ha-505269 kubelet[1314]: E0829 19:24:14.487456    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959454486775043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:24:24 ha-505269 kubelet[1314]: E0829 19:24:24.489243    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959464488866465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:24:24 ha-505269 kubelet[1314]: E0829 19:24:24.489290    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959464488866465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:24:34 ha-505269 kubelet[1314]: E0829 19:24:34.491186    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959474490853273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:24:34 ha-505269 kubelet[1314]: E0829 19:24:34.491233    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959474490853273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:24:44 ha-505269 kubelet[1314]: E0829 19:24:44.493574    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959484493111606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:24:44 ha-505269 kubelet[1314]: E0829 19:24:44.494101    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959484493111606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:24:54 ha-505269 kubelet[1314]: E0829 19:24:54.495930    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959494495397106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:24:54 ha-505269 kubelet[1314]: E0829 19:24:54.496495    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959494495397106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:25:04 ha-505269 kubelet[1314]: E0829 19:25:04.498783    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959504498437195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:25:04 ha-505269 kubelet[1314]: E0829 19:25:04.498832    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959504498437195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:25:14 ha-505269 kubelet[1314]: E0829 19:25:14.380880    1314 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:25:14 ha-505269 kubelet[1314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:25:14 ha-505269 kubelet[1314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:25:14 ha-505269 kubelet[1314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:25:14 ha-505269 kubelet[1314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:25:14 ha-505269 kubelet[1314]: E0829 19:25:14.501610    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959514500884715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:25:14 ha-505269 kubelet[1314]: E0829 19:25:14.501658    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959514500884715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:25:24 ha-505269 kubelet[1314]: E0829 19:25:24.503874    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959524503458961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:25:24 ha-505269 kubelet[1314]: E0829 19:25:24.503925    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959524503458961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-505269 -n ha-505269
helpers_test.go:261: (dbg) Run:  kubectl --context ha-505269 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (56.58s)
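The repeated "dropped internal Raft message ... remote-peer-active:false" warnings in the etcd log above all come from member be139f16c87a8e87 and target peer f5dc62d3f2837830, which is consistent with that peer (the secondary node being restarted in this test) simply being down, rather than with a genuinely overloaded network. A minimal sketch of how the surviving member's view of the cluster could be inspected from Go follows; the endpoint 192.168.39.56:2379 (192.168.39.56 appears as the current node IP in the kindnet log above, 2379 is etcd's default client port), the cert paths under /var/lib/minikube/certs/etcd/, and the go.etcd.io/etcd/client/v3 dependency are illustrative assumptions, not values confirmed by this report.

	// etcd_peers_sketch.go - a standalone sketch, not part of the minikube test suite.
	package main

	import (
		"context"
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"log"
		"os"
		"time"

		clientv3 "go.etcd.io/etcd/client/v3"
	)

	func main() {
		// Assumption: minikube's usual etcd client-cert layout inside the control-plane VM.
		cert, err := tls.LoadX509KeyPair(
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/etcd/server.key",
		)
		if err != nil {
			log.Fatalf("load client cert: %v", err)
		}
		ca, err := os.ReadFile("/var/lib/minikube/certs/etcd/ca.crt")
		if err != nil {
			log.Fatalf("read CA: %v", err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(ca)

		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"https://192.168.39.56:2379"}, // assumed surviving control plane
			DialTimeout: 5 * time.Second,
			TLS: &tls.Config{
				Certificates: []tls.Certificate{cert},
				RootCAs:      pool,
			},
		})
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer cli.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// MemberList reports every raft peer the cluster still expects to hear from,
		// including stopped ones; a member ID matching the remote-peer-id in the
		// warnings (f5dc62d3f2837830) identifies which node the dropped heartbeats target.
		members, err := cli.MemberList(ctx)
		if err != nil {
			log.Fatalf("member list: %v", err)
		}
		for _, m := range members.Members {
			fmt.Printf("member %x name=%s peerURLs=%v\n", m.ID, m.Name, m.PeerURLs)
		}
	}

Such a sketch would need to run where the cert paths resolve (e.g. inside the VM via minikube ssh); the same information is available interactively with etcdctl member list.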

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (394.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-505269 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-505269 -v=7 --alsologtostderr
E0829 19:26:37.942795   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:27:05.646320   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-505269 -v=7 --alsologtostderr: exit status 82 (2m1.891817942s)

                                                
                                                
-- stdout --
	* Stopping node "ha-505269-m04"  ...
	* Stopping node "ha-505269-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 19:25:32.263084   36164 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:25:32.263313   36164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:25:32.263321   36164 out.go:358] Setting ErrFile to fd 2...
	I0829 19:25:32.263325   36164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:25:32.263500   36164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:25:32.263711   36164 out.go:352] Setting JSON to false
	I0829 19:25:32.263791   36164 mustload.go:65] Loading cluster: ha-505269
	I0829 19:25:32.264136   36164 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:25:32.264222   36164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:25:32.264407   36164 mustload.go:65] Loading cluster: ha-505269
	I0829 19:25:32.264534   36164 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:25:32.264558   36164 stop.go:39] StopHost: ha-505269-m04
	I0829 19:25:32.264974   36164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:32.265025   36164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:32.280021   36164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34821
	I0829 19:25:32.280425   36164 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:32.281020   36164 main.go:141] libmachine: Using API Version  1
	I0829 19:25:32.281040   36164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:32.281391   36164 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:32.283873   36164 out.go:177] * Stopping node "ha-505269-m04"  ...
	I0829 19:25:32.285236   36164 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0829 19:25:32.285272   36164 main.go:141] libmachine: (ha-505269-m04) Calling .DriverName
	I0829 19:25:32.285498   36164 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0829 19:25:32.285521   36164 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHHostname
	I0829 19:25:32.288249   36164 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:32.288554   36164 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:21:15 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:25:32.288593   36164 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:25:32.288766   36164 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHPort
	I0829 19:25:32.288942   36164 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHKeyPath
	I0829 19:25:32.289113   36164 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHUsername
	I0829 19:25:32.289267   36164 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m04/id_rsa Username:docker}
	I0829 19:25:32.375361   36164 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0829 19:25:32.428053   36164 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0829 19:25:32.482609   36164 main.go:141] libmachine: Stopping "ha-505269-m04"...
	I0829 19:25:32.482632   36164 main.go:141] libmachine: (ha-505269-m04) Calling .GetState
	I0829 19:25:32.483999   36164 main.go:141] libmachine: (ha-505269-m04) Calling .Stop
	I0829 19:25:32.487526   36164 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 0/120
	I0829 19:25:33.696917   36164 main.go:141] libmachine: (ha-505269-m04) Calling .GetState
	I0829 19:25:33.698386   36164 main.go:141] libmachine: Machine "ha-505269-m04" was stopped.
	I0829 19:25:33.698401   36164 stop.go:75] duration metric: took 1.41316847s to stop
	I0829 19:25:33.698419   36164 stop.go:39] StopHost: ha-505269-m03
	I0829 19:25:33.698815   36164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:33.698870   36164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:33.713346   36164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41019
	I0829 19:25:33.713780   36164 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:33.714252   36164 main.go:141] libmachine: Using API Version  1
	I0829 19:25:33.714271   36164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:33.714575   36164 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:33.716613   36164 out.go:177] * Stopping node "ha-505269-m03"  ...
	I0829 19:25:33.717780   36164 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0829 19:25:33.717799   36164 main.go:141] libmachine: (ha-505269-m03) Calling .DriverName
	I0829 19:25:33.718006   36164 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0829 19:25:33.718023   36164 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHHostname
	I0829 19:25:33.720612   36164 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:33.721011   36164 main.go:141] libmachine: (ha-505269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:9f:90", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:19:49 +0000 UTC Type:0 Mac:52:54:00:19:9f:90 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-505269-m03 Clientid:01:52:54:00:19:9f:90}
	I0829 19:25:33.721034   36164 main.go:141] libmachine: (ha-505269-m03) DBG | domain ha-505269-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:19:9f:90 in network mk-ha-505269
	I0829 19:25:33.721181   36164 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHPort
	I0829 19:25:33.721352   36164 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHKeyPath
	I0829 19:25:33.721502   36164 main.go:141] libmachine: (ha-505269-m03) Calling .GetSSHUsername
	I0829 19:25:33.721617   36164 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m03/id_rsa Username:docker}
	I0829 19:25:33.807747   36164 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0829 19:25:33.861308   36164 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0829 19:25:33.920978   36164 main.go:141] libmachine: Stopping "ha-505269-m03"...
	I0829 19:25:33.921008   36164 main.go:141] libmachine: (ha-505269-m03) Calling .GetState
	I0829 19:25:33.922592   36164 main.go:141] libmachine: (ha-505269-m03) Calling .Stop
	I0829 19:25:33.925983   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 0/120
	I0829 19:25:34.927674   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 1/120
	I0829 19:25:35.928943   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 2/120
	I0829 19:25:36.930345   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 3/120
	I0829 19:25:37.931750   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 4/120
	I0829 19:25:38.933746   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 5/120
	I0829 19:25:39.935201   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 6/120
	I0829 19:25:40.936692   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 7/120
	I0829 19:25:41.938088   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 8/120
	I0829 19:25:42.939724   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 9/120
	I0829 19:25:43.941808   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 10/120
	I0829 19:25:44.943315   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 11/120
	I0829 19:25:45.944570   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 12/120
	I0829 19:25:46.946007   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 13/120
	I0829 19:25:47.947628   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 14/120
	I0829 19:25:48.949279   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 15/120
	I0829 19:25:49.950468   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 16/120
	I0829 19:25:50.951968   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 17/120
	I0829 19:25:51.953511   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 18/120
	I0829 19:25:52.955112   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 19/120
	I0829 19:25:53.957064   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 20/120
	I0829 19:25:54.958626   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 21/120
	I0829 19:25:55.960256   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 22/120
	I0829 19:25:56.961591   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 23/120
	I0829 19:25:57.963152   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 24/120
	I0829 19:25:58.965605   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 25/120
	I0829 19:25:59.967251   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 26/120
	I0829 19:26:00.968842   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 27/120
	I0829 19:26:01.970418   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 28/120
	I0829 19:26:02.972059   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 29/120
	I0829 19:26:03.973239   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 30/120
	I0829 19:26:04.974945   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 31/120
	I0829 19:26:05.977052   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 32/120
	I0829 19:26:06.978713   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 33/120
	I0829 19:26:07.980363   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 34/120
	I0829 19:26:08.981832   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 35/120
	I0829 19:26:09.983209   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 36/120
	I0829 19:26:10.984399   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 37/120
	I0829 19:26:11.985737   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 38/120
	I0829 19:26:12.986902   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 39/120
	I0829 19:26:13.988600   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 40/120
	I0829 19:26:14.989944   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 41/120
	I0829 19:26:15.991300   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 42/120
	I0829 19:26:16.992784   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 43/120
	I0829 19:26:17.994046   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 44/120
	I0829 19:26:18.995445   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 45/120
	I0829 19:26:19.996632   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 46/120
	I0829 19:26:20.998215   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 47/120
	I0829 19:26:21.999596   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 48/120
	I0829 19:26:23.000993   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 49/120
	I0829 19:26:24.002406   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 50/120
	I0829 19:26:25.003837   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 51/120
	I0829 19:26:26.005450   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 52/120
	I0829 19:26:27.006906   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 53/120
	I0829 19:26:28.008299   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 54/120
	I0829 19:26:29.009819   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 55/120
	I0829 19:26:30.011180   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 56/120
	I0829 19:26:31.013139   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 57/120
	I0829 19:26:32.014443   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 58/120
	I0829 19:26:33.015708   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 59/120
	I0829 19:26:34.017385   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 60/120
	I0829 19:26:35.018705   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 61/120
	I0829 19:26:36.019981   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 62/120
	I0829 19:26:37.021305   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 63/120
	I0829 19:26:38.022677   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 64/120
	I0829 19:26:39.024469   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 65/120
	I0829 19:26:40.025743   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 66/120
	I0829 19:26:41.027291   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 67/120
	I0829 19:26:42.028512   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 68/120
	I0829 19:26:43.029875   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 69/120
	I0829 19:26:44.031663   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 70/120
	I0829 19:26:45.032971   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 71/120
	I0829 19:26:46.034293   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 72/120
	I0829 19:26:47.035573   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 73/120
	I0829 19:26:48.036963   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 74/120
	I0829 19:26:49.038739   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 75/120
	I0829 19:26:50.039977   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 76/120
	I0829 19:26:51.041487   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 77/120
	I0829 19:26:52.043050   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 78/120
	I0829 19:26:53.045023   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 79/120
	I0829 19:26:54.046922   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 80/120
	I0829 19:26:55.048439   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 81/120
	I0829 19:26:56.049787   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 82/120
	I0829 19:26:57.050984   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 83/120
	I0829 19:26:58.052475   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 84/120
	I0829 19:26:59.054003   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 85/120
	I0829 19:27:00.055453   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 86/120
	I0829 19:27:01.057155   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 87/120
	I0829 19:27:02.058755   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 88/120
	I0829 19:27:03.060165   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 89/120
	I0829 19:27:04.061988   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 90/120
	I0829 19:27:05.063454   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 91/120
	I0829 19:27:06.064659   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 92/120
	I0829 19:27:07.066077   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 93/120
	I0829 19:27:08.067503   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 94/120
	I0829 19:27:09.069091   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 95/120
	I0829 19:27:10.070424   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 96/120
	I0829 19:27:11.071703   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 97/120
	I0829 19:27:12.072900   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 98/120
	I0829 19:27:13.074169   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 99/120
	I0829 19:27:14.075942   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 100/120
	I0829 19:27:15.077225   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 101/120
	I0829 19:27:16.078581   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 102/120
	I0829 19:27:17.080490   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 103/120
	I0829 19:27:18.081764   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 104/120
	I0829 19:27:19.083718   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 105/120
	I0829 19:27:20.084944   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 106/120
	I0829 19:27:21.086243   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 107/120
	I0829 19:27:22.087837   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 108/120
	I0829 19:27:23.089133   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 109/120
	I0829 19:27:24.091458   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 110/120
	I0829 19:27:25.092849   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 111/120
	I0829 19:27:26.094342   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 112/120
	I0829 19:27:27.096223   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 113/120
	I0829 19:27:28.097579   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 114/120
	I0829 19:27:29.099215   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 115/120
	I0829 19:27:30.100595   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 116/120
	I0829 19:27:31.101825   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 117/120
	I0829 19:27:32.103084   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 118/120
	I0829 19:27:33.104542   36164 main.go:141] libmachine: (ha-505269-m03) Waiting for machine to stop 119/120
	I0829 19:27:34.105802   36164 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0829 19:27:34.105865   36164 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0829 19:27:34.107600   36164 out.go:201] 
	W0829 19:27:34.108982   36164 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0829 19:27:34.108996   36164 out.go:270] * 
	W0829 19:27:34.111159   36164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 19:27:34.112415   36164 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-505269 -v=7 --alsologtostderr" : exit status 82
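
The stop stanza above shows why the command exits with status 82: after calling Stop, the driver polls the domain state once per second for 120 attempts, the VM never leaves "Running", and minikube reports GUEST_STOP_TIMEOUT. A sketch of that poll-with-budget pattern (the state probe is a stand-in, not the kvm2 driver's API):

    package main

    import (
    	"fmt"
    	"time"
    )

    // waitForStop polls a state probe once per second, mirroring the
    // "Waiting for machine to stop N/120" lines in the log above.
    func waitForStop(state func() string, attempts int) error {
    	for i := 0; i < attempts; i++ {
    		if state() == "Stopped" {
    			return nil
    		}
    		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("unable to stop vm, current state %q", state())
    }

    func main() {
    	// A VM that never stops reproduces the GUEST_STOP_TIMEOUT path.
    	if err := waitForStop(func() string { return "Running" }, 120); err != nil {
    		fmt.Println("stop err:", err) // minikube surfaces this as exit status 82
    	}
    }
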
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-505269 --wait=true -v=7 --alsologtostderr
E0829 19:28:45.975803   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:09.039018   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:37.943600   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-505269 --wait=true -v=7 --alsologtostderr: (4m30.302795936s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-505269
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-505269 -n ha-505269
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-505269 logs -n 25: (1.898197859s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-505269 cp ha-505269-m03:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m02:/home/docker/cp-test_ha-505269-m03_ha-505269-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269-m02 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m03_ha-505269-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m03:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04:/home/docker/cp-test_ha-505269-m03_ha-505269-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269-m04 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m03_ha-505269-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-505269 cp testdata/cp-test.txt                                                | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3454359662/001/cp-test_ha-505269-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269:/home/docker/cp-test_ha-505269-m04_ha-505269.txt                       |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269 sudo cat                                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m04_ha-505269.txt                                 |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m02:/home/docker/cp-test_ha-505269-m04_ha-505269-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269-m02 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m04_ha-505269-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m03:/home/docker/cp-test_ha-505269-m04_ha-505269-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269-m03 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m04_ha-505269-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-505269 node stop m02 -v=7                                                     | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-505269 node start m02 -v=7                                                    | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-505269 -v=7                                                           | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-505269 -v=7                                                                | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-505269 --wait=true -v=7                                                    | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:32 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-505269                                                                | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:32 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:27:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 19:27:34.156723   36600 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:27:34.156849   36600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:27:34.156859   36600 out.go:358] Setting ErrFile to fd 2...
	I0829 19:27:34.156865   36600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:27:34.157046   36600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:27:34.157600   36600 out.go:352] Setting JSON to false
	I0829 19:27:34.158498   36600 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4201,"bootTime":1724955453,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:27:34.158576   36600 start.go:139] virtualization: kvm guest
	I0829 19:27:34.160548   36600 out.go:177] * [ha-505269] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:27:34.162233   36600 notify.go:220] Checking for updates...
	I0829 19:27:34.162248   36600 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 19:27:34.163419   36600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:27:34.164653   36600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:27:34.166027   36600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:27:34.167349   36600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:27:34.168737   36600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:27:34.170463   36600 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:27:34.170596   36600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:27:34.171032   36600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:27:34.171080   36600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:27:34.186487   36600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33113
	I0829 19:27:34.186937   36600 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:27:34.187523   36600 main.go:141] libmachine: Using API Version  1
	I0829 19:27:34.187552   36600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:27:34.187851   36600 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:27:34.188053   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:27:34.224008   36600 out.go:177] * Using the kvm2 driver based on existing profile
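
The "Launching plugin server" and "Plugin server listening at address 127.0.0.1:33113" lines reflect libmachine's design: each machine driver (here kvm2) runs as a separate plugin binary, and the minikube process drives it over a local RPC connection. An illustrative Go client under that model — the service and method names here are hypothetical, not libmachine's real wire protocol:

    package main

    import (
    	"fmt"
    	"log"
    	"net/rpc"
    )

    func main() {
    	// Port taken from the log line above; a real client would read it
    	// from the plugin's startup handshake rather than hard-coding it.
    	client, err := rpc.Dial("tcp", "127.0.0.1:33113")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	var state string
    	// Hypothetical method name; libmachine's actual RPC surface differs.
    	if err := client.Call("Driver.GetState", struct{}{}, &state); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("driver reports state:", state)
    }
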
	I0829 19:27:34.225226   36600 start.go:297] selected driver: kvm2
	I0829 19:27:34.225241   36600 start.go:901] validating driver "kvm2" against &{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:27:34.225434   36600 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:27:34.225965   36600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:27:34.226074   36600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:27:34.241238   36600 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:27:34.241959   36600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:27:34.242031   36600 cni.go:84] Creating CNI manager for ""
	I0829 19:27:34.242046   36600 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0829 19:27:34.242110   36600 start.go:340] cluster config:
	{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:27:34.242258   36600 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:27:34.244817   36600 out.go:177] * Starting "ha-505269" primary control-plane node in "ha-505269" cluster
	I0829 19:27:34.246083   36600 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:27:34.246116   36600 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:27:34.246128   36600 cache.go:56] Caching tarball of preloaded images
	I0829 19:27:34.246248   36600 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:27:34.246260   36600 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:27:34.246412   36600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:27:34.246712   36600 start.go:360] acquireMachinesLock for ha-505269: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:27:34.246764   36600 start.go:364] duration metric: took 32.038µs to acquireMachinesLock for "ha-505269"
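
acquireMachinesLock returned in ~32µs because nothing else held it; the spec logged above ({Delay:500ms Timeout:13m0s}) describes a retry-until-deadline acquisition. A sketch of that pattern with an in-process mutex (TryLock here is a stand-in; the real machines lock coordinates across processes):

    package main

    import (
    	"errors"
    	"fmt"
    	"sync"
    	"time"
    )

    // acquire polls TryLock every delay until the deadline, mirroring the
    // {Delay:500ms Timeout:13m0s} spec in the log line above.
    func acquire(mu *sync.Mutex, delay, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if mu.TryLock() {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out acquiring machines lock")
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	var mu sync.Mutex
    	start := time.Now()
    	if err := acquire(&mu, 500*time.Millisecond, 13*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer mu.Unlock()
    	fmt.Printf("took %s to acquire machines lock\n", time.Since(start))
    }
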
	I0829 19:27:34.246785   36600 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:27:34.246792   36600 fix.go:54] fixHost starting: 
	I0829 19:27:34.247071   36600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:27:34.247103   36600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:27:34.261275   36600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46881
	I0829 19:27:34.261670   36600 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:27:34.262094   36600 main.go:141] libmachine: Using API Version  1
	I0829 19:27:34.262113   36600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:27:34.262415   36600 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:27:34.262584   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:27:34.262728   36600 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:27:34.264285   36600 fix.go:112] recreateIfNeeded on ha-505269: state=Running err=<nil>
	W0829 19:27:34.264306   36600 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:27:34.266085   36600 out.go:177] * Updating the running kvm2 "ha-505269" VM ...
	I0829 19:27:34.267235   36600 machine.go:93] provisionDockerMachine start ...
	I0829 19:27:34.267255   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:27:34.267446   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:27:34.269923   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.270388   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:27:34.270414   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.270481   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:27:34.270644   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.270772   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.270886   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:27:34.271027   36600 main.go:141] libmachine: Using SSH client type: native
	I0829 19:27:34.271246   36600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:27:34.271264   36600 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:27:34.389813   36600 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-505269
	
	I0829 19:27:34.389845   36600 main.go:141] libmachine: (ha-505269) Calling .GetMachineName
	I0829 19:27:34.390128   36600 buildroot.go:166] provisioning hostname "ha-505269"
	I0829 19:27:34.390150   36600 main.go:141] libmachine: (ha-505269) Calling .GetMachineName
	I0829 19:27:34.390333   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:27:34.393198   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.393649   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:27:34.393671   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.393833   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:27:34.394025   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.394277   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.394433   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:27:34.394642   36600 main.go:141] libmachine: Using SSH client type: native
	I0829 19:27:34.394793   36600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:27:34.394804   36600 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-505269 && echo "ha-505269" | sudo tee /etc/hostname
	I0829 19:27:34.529979   36600 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-505269
	
	I0829 19:27:34.530010   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:27:34.532863   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.533198   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:27:34.533231   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.533401   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:27:34.533601   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.533788   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.533951   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:27:34.534151   36600 main.go:141] libmachine: Using SSH client type: native
	I0829 19:27:34.534369   36600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:27:34.534391   36600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-505269' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-505269/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-505269' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:27:34.667723   36600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
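
Provisioning sets the hostname and then makes /etc/hosts agree with it, replacing an existing 127.0.1.1 entry or appending one; the shell sent over SSH is shown verbatim above. A small Go helper that reassembles the two logged SSH commands into one string (illustrative packaging, not minikube's provisioner):

    package main

    import "fmt"

    // hostnameCmd rebuilds the provisioning shell from the log: set the
    // hostname, persist it, and ensure /etc/hosts maps 127.0.1.1 to it.
    func hostnameCmd(name string) string {
    	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname && `+
    		`if ! grep -xq '.*\s%[1]s' /etc/hosts; then `+
    		`if grep -xq '127.0.1.1\s.*' /etc/hosts; then `+
    		`sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts; `+
    		`else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi`, name)
    }

    func main() {
    	fmt.Println(hostnameCmd("ha-505269"))
    }
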
	I0829 19:27:34.667758   36600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 19:27:34.667781   36600 buildroot.go:174] setting up certificates
	I0829 19:27:34.667790   36600 provision.go:84] configureAuth start
	I0829 19:27:34.667814   36600 main.go:141] libmachine: (ha-505269) Calling .GetMachineName
	I0829 19:27:34.668122   36600 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:27:34.670548   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.670933   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:27:34.670961   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.671144   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:27:34.673229   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.673568   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:27:34.673590   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.673717   36600 provision.go:143] copyHostCerts
	I0829 19:27:34.673742   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:27:34.673776   36600 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 19:27:34.673791   36600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:27:34.673873   36600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 19:27:34.673950   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:27:34.673967   36600 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 19:27:34.673973   36600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:27:34.673999   36600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 19:27:34.674037   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:27:34.674052   36600 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 19:27:34.674058   36600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:27:34.674078   36600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 19:27:34.674129   36600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.ha-505269 san=[127.0.0.1 192.168.39.56 ha-505269 localhost minikube]
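
The server certificate generated above covers the SANs listed in the log: IPs 127.0.0.1 and 192.168.39.56 plus the DNS names ha-505269, localhost, and minikube. A minimal sketch of issuing such a certificate with Go's x509 package against a freshly made CA (serial numbers, lifetimes, and key size are assumptions, not minikube's values):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"log"
    	"math/big"
    	"net"
    	"time"
    )

    // issueServerCert signs a server certificate carrying the SANs from the
    // log line above.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-505269"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.56")},
    		DNSNames:     []string{"ha-505269", "localhost", "minikube"},
    	}
    	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    }

    func main() {
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(1, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	der, err := issueServerCert(ca, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("issued server cert, %d DER bytes\n", len(der))
    }
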
	I0829 19:27:34.783941   36600 provision.go:177] copyRemoteCerts
	I0829 19:27:34.783990   36600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:27:34.784009   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:27:34.786636   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.786986   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:27:34.787011   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.787211   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:27:34.787382   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.787519   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:27:34.787629   36600 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:27:34.877523   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 19:27:34.877591   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0829 19:27:34.904867   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 19:27:34.904945   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:27:34.931967   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 19:27:34.932034   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 19:27:34.959886   36600 provision.go:87] duration metric: took 292.081861ms to configureAuth
	I0829 19:27:34.959917   36600 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:27:34.960210   36600 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:27:34.960288   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:27:34.962964   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.963312   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:27:34.963350   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.963530   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:27:34.963712   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.963866   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.963977   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:27:34.964109   36600 main.go:141] libmachine: Using SSH client type: native
	I0829 19:27:34.964307   36600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:27:34.964332   36600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:29:05.699502   36600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:29:05.699530   36600 machine.go:96] duration metric: took 1m31.432282179s to provisionDockerMachine
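
Nearly all of that 1m31s sits inside the single SSH command above: the reply to the CRIO_MINIKUBE_OPTIONS write plus `systemctl restart crio` arrives 91 seconds after the request (19:27:34 to 19:29:05). The "duration metric" lines themselves follow a simple start-time/deferred-log pattern, sketched here with hypothetical names (not minikube's actual helper):

    package main

    import (
    	"log"
    	"time"
    )

    // provisionDockerMachine stands in for the real work; the deferred log
    // reproduces the "duration metric" format seen above.
    func provisionDockerMachine() {
    	start := time.Now()
    	defer func() {
    		log.Printf("duration metric: took %s to provisionDockerMachine", time.Since(start))
    	}()
    	time.Sleep(100 * time.Millisecond) // placeholder for the SSH round-trips
    }

    func main() { provisionDockerMachine() }
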
	I0829 19:29:05.699542   36600 start.go:293] postStartSetup for "ha-505269" (driver="kvm2")
	I0829 19:29:05.699554   36600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:29:05.699579   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:29:05.699956   36600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:29:05.700009   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:29:05.702886   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:05.703330   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:29:05.703357   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:05.703515   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:29:05.703694   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:29:05.703912   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:29:05.704047   36600 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:29:05.794293   36600 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:29:05.798519   36600 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:29:05.798550   36600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 19:29:05.798609   36600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 19:29:05.798709   36600 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 19:29:05.798720   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /etc/ssl/certs/183612.pem
	I0829 19:29:05.798824   36600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:29:05.807793   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:29:05.831937   36600 start.go:296] duration metric: took 132.38193ms for postStartSetup
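The filesync scan above mirrors everything under the local .minikube/files tree into the guest at the same relative path, which is why files/etc/ssl/certs/183612.pem lands at /etc/ssl/certs/183612.pem. A rough sketch of that path mapping (function name is ours, for illustration only):

package main

import (
	"fmt"
	"path/filepath"
)

// remotePath maps a file under the local files root to its destination
// path inside the guest, preserving the relative layout.
func remotePath(filesRoot, local string) (string, error) {
	rel, err := filepath.Rel(filesRoot, local)
	if err != nil {
		return "", err
	}
	return "/" + filepath.ToSlash(rel), nil
}

func main() {
	p, err := remotePath(
		"/home/jenkins/minikube-integration/19530-11185/.minikube/files",
		"/home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem")
	fmt.Println(p, err) // /etc/ssl/certs/183612.pem <nil>
}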
	I0829 19:29:05.831977   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:29:05.832300   36600 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0829 19:29:05.832324   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:29:05.835127   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:05.835516   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:29:05.835542   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:05.835716   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:29:05.835895   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:29:05.836033   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:29:05.836169   36600 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	W0829 19:29:05.929415   36600 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0829 19:29:05.929437   36600 fix.go:56] duration metric: took 1m31.68264588s for fixHost
	I0829 19:29:05.929457   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:29:05.931990   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:05.932335   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:29:05.932372   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:05.932534   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:29:05.932718   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:29:05.932881   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:29:05.933018   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:29:05.933178   36600 main.go:141] libmachine: Using SSH client type: native
	I0829 19:29:05.933375   36600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:29:05.933386   36600 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:29:06.047355   36600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724959746.015956151
	
	I0829 19:29:06.047385   36600 fix.go:216] guest clock: 1724959746.015956151
	I0829 19:29:06.047397   36600 fix.go:229] Guest: 2024-08-29 19:29:06.015956151 +0000 UTC Remote: 2024-08-29 19:29:05.929444262 +0000 UTC m=+91.807712998 (delta=86.511889ms)
	I0829 19:29:06.047423   36600 fix.go:200] guest clock delta is within tolerance: 86.511889ms
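The clock check above runs date +%s.%N on the guest, parses the result, and re-syncs only when the delta to the host-side timestamp exceeds a tolerance. A self-contained sketch that reproduces the 86.511889ms delta logged here (the 2-second tolerance is an assumption for illustration):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output (seconds.nanoseconds) and returns
// how far the guest clock is ahead of (positive) or behind (negative) the
// given reference time.
func clockDelta(guestOut string, ref time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(ref), nil
}

func main() {
	ref := time.Unix(1724959745, 929444262) // the "Remote" timestamp in the log
	d, _ := clockDelta("1724959746.015956151", ref)
	fmt.Printf("delta=%v, within 2s tolerance: %v\n", d, d > -2*time.Second && d < 2*time.Second)
}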
	I0829 19:29:06.047431   36600 start.go:83] releasing machines lock for "ha-505269", held for 1m31.800652132s
	I0829 19:29:06.047459   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:29:06.047699   36600 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:29:06.050421   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:06.050765   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:29:06.050792   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:06.050941   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:29:06.051461   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:29:06.051592   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:29:06.051704   36600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:29:06.051751   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:29:06.051781   36600 ssh_runner.go:195] Run: cat /version.json
	I0829 19:29:06.051804   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:29:06.054178   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:06.054611   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:29:06.054637   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:06.054654   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:06.054758   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:29:06.054923   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:29:06.055056   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:29:06.055091   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:29:06.055110   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:06.055337   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:29:06.055356   36600 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:29:06.055458   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:29:06.055586   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:29:06.055776   36600 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:29:06.135564   36600 ssh_runner.go:195] Run: systemctl --version
	I0829 19:29:06.163034   36600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:29:06.322676   36600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:29:06.332884   36600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:29:06.332941   36600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:29:06.341985   36600 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
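The find/mv step above renames any bridge or podman CNI configs to *.mk_disabled so they cannot conflict with the CNI minikube manages (kindnet is recommended for this multinode cluster further down); on this run nothing matched, so nothing was disabled. An equivalent rename pass sketched in Go (illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI config files in dir to
// <name>.mk_disabled, mirroring the find -exec mv step in the log.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println(moved, err)
}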
	I0829 19:29:06.342007   36600 start.go:495] detecting cgroup driver to use...
	I0829 19:29:06.342065   36600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:29:06.358776   36600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:29:06.372459   36600 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:29:06.372514   36600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:29:06.386054   36600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:29:06.400023   36600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:29:06.554184   36600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:29:06.698310   36600 docker.go:233] disabling docker service ...
	I0829 19:29:06.698400   36600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:29:06.718341   36600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:29:06.732086   36600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:29:06.879309   36600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:29:07.027409   36600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:29:07.044803   36600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:29:07.066647   36600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:29:07.066710   36600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:29:07.079187   36600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:29:07.079243   36600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:29:07.091814   36600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:29:07.103966   36600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:29:07.114788   36600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:29:07.125798   36600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:29:07.137276   36600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:29:07.148057   36600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:29:07.158962   36600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:29:07.168481   36600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:29:07.178254   36600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:29:07.323671   36600 ssh_runner.go:195] Run: sudo systemctl restart crio
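The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10, set cgroup_manager to cgroupfs (with conmon in the pod cgroup), and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls, after which CRI-O is restarted. The core edit is a line-oriented key replacement; a minimal sketch, assuming the key = value layout shown in the commands (not minikube's actual code):

package main

import (
	"fmt"
	"regexp"
)

// setKey replaces any line of the form `<key> = ...` with `<key> = "<val>"`,
// the same effect as the sed substitutions in the log.
func setKey(conf, key, val string) string {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, val))
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}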
	I0829 19:29:07.554373   36600 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:29:07.554441   36600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:29:07.562497   36600 start.go:563] Will wait 60s for crictl version
	I0829 19:29:07.562574   36600 ssh_runner.go:195] Run: which crictl
	I0829 19:29:07.566461   36600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:29:07.611735   36600 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:29:07.611819   36600 ssh_runner.go:195] Run: crio --version
	I0829 19:29:07.647665   36600 ssh_runner.go:195] Run: crio --version
	I0829 19:29:07.682763   36600 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:29:07.684227   36600 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:29:07.687101   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:07.687526   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:29:07.687552   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:07.687760   36600 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:29:07.692546   36600 kubeadm.go:883] updating cluster {Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:29:07.692719   36600 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:29:07.692769   36600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:29:07.740649   36600 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:29:07.740670   36600 crio.go:433] Images already preloaded, skipping extraction
	I0829 19:29:07.740713   36600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:29:07.778175   36600 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:29:07.778204   36600 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:29:07.778219   36600 kubeadm.go:934] updating node { 192.168.39.56 8443 v1.31.0 crio true true} ...
	I0829 19:29:07.778320   36600 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-505269 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:29:07.778382   36600 ssh_runner.go:195] Run: crio config
	I0829 19:29:07.837416   36600 cni.go:84] Creating CNI manager for ""
	I0829 19:29:07.837433   36600 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0829 19:29:07.837443   36600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:29:07.837463   36600 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.56 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-505269 NodeName:ha-505269 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:29:07.837582   36600 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-505269"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
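The generated kubeadm config above is four YAML documents joined by ---: InitConfiguration (node registration and the 192.168.39.56:8443 endpoint), ClusterConfiguration (apiserver/controller-manager/scheduler flags, etcd, and the control-plane.minikube.internal:8443 endpoint), KubeletConfiguration, and KubeProxyConfiguration. A quick illustrative sketch that splits such a multi-document string and lists the kinds, the sort of sanity check one could run before the file is written out as kubeadm.yaml.new:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// kinds splits a multi-document YAML string on "---" separators and
// extracts each document's kind.
func kinds(multiDoc string) []string {
	re := regexp.MustCompile(`(?m)^kind: (.+)$`)
	var out []string
	for _, doc := range strings.Split(multiDoc, "\n---\n") {
		if m := re.FindStringSubmatch(doc); m != nil {
			out = append(out, m[1])
		}
	}
	return out
}

func main() {
	cfg := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
	fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}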
	
	I0829 19:29:07.837605   36600 kube-vip.go:115] generating kube-vip config ...
	I0829 19:29:07.837642   36600 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 19:29:07.851620   36600 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 19:29:07.851701   36600 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
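The kube-vip static pod above is configured entirely through environment variables: vip_arp advertises the HA virtual IP 192.168.39.254 via gratuitous ARP, vip_leaderelection (lease plndr-cp-lock) keeps the VIP on exactly one control-plane node at a time, and lb_enable/lb_port carry the control-plane load balancing that kube-vip.go auto-enabled just above. A hedged sketch of assembling that env list; the helper and struct names are ours:

package main

import "fmt"

type envVar struct{ Name, Value string }

// kubeVIPEnv assembles the HA-relevant subset of the env vars seen in the
// generated manifest above.
func kubeVIPEnv(vip, port string) []envVar {
	return []envVar{
		{"vip_arp", "true"},            // advertise the VIP via gratuitous ARP
		{"port", port},                 // apiserver port behind the VIP
		{"cp_enable", "true"},          // control-plane mode
		{"vip_leaderelection", "true"}, // only the lease holder owns the VIP
		{"vip_leasename", "plndr-cp-lock"},
		{"address", vip},
		{"lb_enable", "true"}, // balance across all apiservers
		{"lb_port", port},
	}
}

func main() {
	for _, e := range kubeVIPEnv("192.168.39.254", "8443") {
		fmt.Printf("%s=%s\n", e.Name, e.Value)
	}
}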
	I0829 19:29:07.851748   36600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:29:07.863698   36600 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:29:07.863758   36600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0829 19:29:07.874878   36600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0829 19:29:07.893661   36600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:29:07.912030   36600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0829 19:29:07.930716   36600 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0829 19:29:07.949391   36600 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 19:29:07.955001   36600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:29:08.115922   36600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:29:08.131455   36600 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269 for IP: 192.168.39.56
	I0829 19:29:08.131477   36600 certs.go:194] generating shared ca certs ...
	I0829 19:29:08.131494   36600 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:29:08.131647   36600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 19:29:08.131719   36600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 19:29:08.131733   36600 certs.go:256] generating profile certs ...
	I0829 19:29:08.131827   36600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key
	I0829 19:29:08.131861   36600 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.94223113
	I0829 19:29:08.131882   36600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.94223113 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.56 192.168.39.68 192.168.39.178 192.168.39.254]
	I0829 19:29:08.191691   36600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.94223113 ...
	I0829 19:29:08.191729   36600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.94223113: {Name:mkabb73b87dfcf8f7a0f84868f46b5d3ff79bef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:29:08.191916   36600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.94223113 ...
	I0829 19:29:08.191933   36600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.94223113: {Name:mk3c718e9f0565ec73d9d975ba744e6c0d2fc82e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:29:08.192027   36600 certs.go:381] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.94223113 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt
	I0829 19:29:08.192210   36600 certs.go:385] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.94223113 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key
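The apiserver serving certificate is regenerated here because its subject alternative names must cover every address a client might use: the service IP 10.96.0.1, loopback, all three control-plane node IPs, and the kube-vip VIP 192.168.39.254. A rough sketch of a certificate template carrying those IP SANs; this is illustrative and not minikube's certificate code:

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// apiserverTemplate builds an x509 template whose IP SANs match the list
// logged above; serial, subject, and lifetime are placeholder values.
func apiserverTemplate(sans []string) *x509.Certificate {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range sans {
		tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(s))
	}
	return tmpl
}

func main() {
	t := apiserverTemplate([]string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.39.56", "192.168.39.68", "192.168.39.178", "192.168.39.254"})
	fmt.Println(len(t.IPAddresses), "IP SANs") // 7 IP SANs
}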
	I0829 19:29:08.192375   36600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key
	I0829 19:29:08.192391   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 19:29:08.192409   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 19:29:08.192425   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 19:29:08.192449   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 19:29:08.192469   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 19:29:08.192488   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 19:29:08.192510   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 19:29:08.192528   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 19:29:08.192635   36600 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 19:29:08.192685   36600 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 19:29:08.192700   36600 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 19:29:08.192737   36600 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 19:29:08.192771   36600 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:29:08.192801   36600 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 19:29:08.192859   36600 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:29:08.192899   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem -> /usr/share/ca-certificates/18361.pem
	I0829 19:29:08.192919   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /usr/share/ca-certificates/183612.pem
	I0829 19:29:08.192938   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:29:08.193475   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:29:08.219502   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 19:29:08.244853   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:29:08.268655   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:29:08.293993   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 19:29:08.318182   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 19:29:08.342186   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:29:08.366997   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:29:08.391509   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 19:29:08.415580   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 19:29:08.440219   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:29:08.464486   36600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:29:08.480957   36600 ssh_runner.go:195] Run: openssl version
	I0829 19:29:08.486815   36600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 19:29:08.497254   36600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 19:29:08.501791   36600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 19:29:08.501850   36600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 19:29:08.507608   36600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 19:29:08.516752   36600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 19:29:08.527128   36600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 19:29:08.531511   36600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 19:29:08.531559   36600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 19:29:08.537309   36600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:29:08.546501   36600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:29:08.558230   36600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:29:08.562860   36600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:29:08.562917   36600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:29:08.568573   36600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
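Each test -L ... || ln -fs pair above creates the <subject-hash>.0 symlink that OpenSSL's lookup-by-hash directory scheme requires in /etc/ssl/certs (b5213941.0 for minikubeCA.pem, for example). A sketch of the same step that shells out to openssl for the hash; the helper name is ours and openssl is assumed to be on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA creates the <subject-hash>.0 symlink for a CA certificate, the
// same effect as the openssl x509 -hash + ln -fs sequence in the log.
func linkCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, mirroring ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}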
	I0829 19:29:08.578091   36600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:29:08.582888   36600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:29:08.588740   36600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:29:08.594682   36600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:29:08.600929   36600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:29:08.606663   36600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:29:08.612797   36600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
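openssl x509 -checkend 86400 exits non-zero when a certificate expires within the next 24 hours, which is how each control-plane cert above is screened before being reused. The same check in plain Go (illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the next d, mirroring openssl's -checkend probe.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}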
	I0829 19:29:08.618384   36600 kubeadm.go:392] StartCluster: {Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:29:08.618492   36600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:29:08.618555   36600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:29:08.655913   36600 cri.go:89] found id: "f00fd6c41ac07df2a221d4331e2b3dccd2e358c7d1676f6f7bdd1ed7b4cf5d4b"
	I0829 19:29:08.655932   36600 cri.go:89] found id: "f85180dc495341fe79bbb2234d464d91f1685b07ea2bee68a830140b5cd771fa"
	I0829 19:29:08.655936   36600 cri.go:89] found id: "32266f173333a67786c0fd1a3c4a4730db0b274c0b431830328de92ff6bd09b8"
	I0829 19:29:08.655938   36600 cri.go:89] found id: "7c1f1381650d141915ee7ad8a0e9d8363da549a9fdbb60a03bda17e169672eb1"
	I0829 19:29:08.655941   36600 cri.go:89] found id: "29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75"
	I0829 19:29:08.655944   36600 cri.go:89] found id: "1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd"
	I0829 19:29:08.655946   36600 cri.go:89] found id: "f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604"
	I0829 19:29:08.655949   36600 cri.go:89] found id: "9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c"
	I0829 19:29:08.655951   36600 cri.go:89] found id: "066b1cbdd3861a8cf28333b764086e47ace1987acec778ff2b2d0aa1973af37a"
	I0829 19:29:08.655956   36600 cri.go:89] found id: "52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc"
	I0829 19:29:08.655959   36600 cri.go:89] found id: "960e616b3c0582c8ccb0ec4fea8d02c0e2bfd2d0f81a273ac203cbf5eb6e4d6c"
	I0829 19:29:08.655975   36600 cri.go:89] found id: "d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0"
	I0829 19:29:08.655980   36600 cri.go:89] found id: "65b2531e5990db96ea59b71af6b9036995e91f182477ba5d381b96877d38e296"
	I0829 19:29:08.655983   36600 cri.go:89] found id: ""
	I0829 19:29:08.656032   36600 ssh_runner.go:195] Run: sudo runc list -f json
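The "found id:" entries above come from crictl ps -a --quiet with a kube-system label filter; --quiet prints one container ID per line, so the parse is a newline split. A minimal sketch (illustrative; the sample IDs are the first two from the log):

package main

import (
	"fmt"
	"strings"
)

// parseIDs splits `crictl ps -a --quiet` output into container IDs,
// dropping blank lines.
func parseIDs(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids
}

func main() {
	out := "f00fd6c41ac07df2a221d4331e2b3dccd2e358c7d1676f6f7bdd1ed7b4cf5d4b\nf85180dc495341fe79bbb2234d464d91f1685b07ea2bee68a830140b5cd771fa\n"
	fmt.Println(parseIDs(out))
}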
	
	
	==> CRI-O <==
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.098913678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959925098889018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e6efca3-0cfd-4661-b734-8127d1ca92d4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.099567791Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5abb8ce-a9d9-4d22-b082-39be6597aa31 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.099666497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5abb8ce-a9d9-4d22-b082-39be6597aa31 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.100261564Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d24eca9daeddb01201f9b34221347d5cb309931c118348968da060d6c1bf376c,PodSandboxId:2ba432bbcf20f2894079e374a21869e4c3b459312f9dfc9aa495e6ee0daa8237,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959819364656134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b090b02a2f6af71741e9e17cccfeb77b790febb24ce1fb5ec191d971424b5378,PodSandboxId:2efeb7211db1abde8ccda092af846f805b61900f51b39ffc9a91cd168b949318,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959798361283228,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01f810a9780487cf942a0a39907de8a94f02a13ddcaa83a795f9632463c9f407,PodSandboxId:4330e2b71d9ee5233683cbf4c7d7de37788b9165bd248254bd022ecd32a0430d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724959785684229787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bb7e864296eb98a7fa1d4cf9729fd6fe8a035d7acdcb8024f1172b92b20424,PodSandboxId:2ba432bbcf20f2894079e374a21869e4c3b459312f9dfc9aa495e6ee0daa8237,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959785014940911,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3c9479f487a0ae9282327db1e1b6fa1a3f31f7ef4354f10083d6f93334aa13,PodSandboxId:3506c46b68a6dd7a917e5603d36b8df5d715969c33b218c84522da041fa4ba35,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724959768134530987,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf717028309b4303be9380ab50d8f89a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de469f77c1414c9aeb375f6fab95f9bf73d297cf39f8e3032d44a27fce129160,PodSandboxId:d2d78cd683b75429a8fb19a8d336313277432fb28b10e5755db7a3f0044414b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724959753005180580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:392a8fdf39b58ef48526e9b0ee737518780d384adbf9a00f6d58e386f06aca86,PodSandboxId:feea7addc66c506d27e991c62c20bda17a854abc868774c641d9459da7c0a1f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959752578046557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"
containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3f6edccac07af4c68e2df70de1901a354b7e466a6480aa00a8c242372a1489,PodSandboxId:01947ff9720b96eec71aef069d1e00630167d535f0d83a9d399723674f8fc2e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959752426178642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ab99da1b1a7e8939462b754993d6616d0a80dc4262fe6f7d4d69c339c4a78c,PodSandboxId:d20310623d52ab5052b16cdf33028f557b5a6c2baf8a0a27a7a866b18ff8cace,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959752399739719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d575c067854c198c0503704ca102d4bb9e9823a228df479b19a5ef172e162fc8,PodSandboxId:d42fe0859024616c82f93e163ef976e1501b5a23cca570ea6584dd321bdb15a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724959752418561136,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b7c110f2f7341eb54db4c08f1a782899e9205e069ca96dfbf295e38d3fc601,PodSandboxId:1cd9911d50578a09d9516f4ec4c519c25999741911a9df06637349a98a8b5dac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724959752194158517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ea370e88590ef75080636cd247905edf96c5c382c3fee2a7cf22b8c428407c,PodSandboxId:2efeb7211db1abde8ccda092af846f805b61900f51b39ffc9a91cd168b949318,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959752332592108,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78
fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8547993a84026cfd6c27f9f068228b1ea619a9dee19e99e3b8d07b22b23584,PodSandboxId:60334f5771834dbdd2d31990ab7c7f4f3d2b23cb4846dd94949e33ab8f9dd2d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959752074925751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c
9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed600112468d9762d0ad5c0e554ca486c5cfc592114271d25cd84f5c09187db,PodSandboxId:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724959256520864398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc
304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75,PodSandboxId:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959112076742735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd,PodSandboxId:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959112071270130,Labels:map[string]string{io.kubernetes.containe
r.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604,PodSandboxId:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724959100077712568,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c,PodSandboxId:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959097303124595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc,PodSandboxId:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913f
c06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959085808573312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0,PodSandboxId:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f72
9c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724959085673137503,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a5abb8ce-a9d9-4d22-b082-39be6597aa31 name=/runtime.v1.RuntimeService/ListContainers
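The ListContainersResponse dumps in this log repeat the same container inventory on every polling tick; only the request id and wall-clock timestamp change. As a rough sketch of what produces these records (not part of minikube or this test suite), the same RPC can be issued directly against CRI-O with the Kubernetes CRI client stubs. The socket path /var/run/crio/crio.sock and the k8s.io/cri-api import are assumptions inferred from the cri-o 1.29.1 / CRI v1 runtime logged here:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O socket (path assumed; it is the daemon logging as crio[3719] above).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty ListContainersRequest reproduces the "No filters were applied,
	// returning full container list" path seen in the debug log above.
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Prints e.g. "d24eca9daedd kube-controller-manager attempt=3 CONTAINER_RUNNING".
		fmt.Printf("%s %s attempt=%d %s\n",
			c.Id[:12], c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}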
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.146603750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7abfa0b-0890-4820-8426-e5e62b12460e name=/runtime.v1.RuntimeService/Version
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.146703677Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7abfa0b-0890-4820-8426-e5e62b12460e name=/runtime.v1.RuntimeService/Version
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.147811179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e365ef81-d7e7-4e84-a438-bb8d9e619f85 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.148374638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959925148351765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e365ef81-d7e7-4e84-a438-bb8d9e619f85 name=/runtime.v1.ImageService/ImageFsInfo
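Each polling cycle pairs ListContainers with the two RPCs above: Version, confirming cri-o 1.29.1 speaking CRI v1, and ImageFsInfo, reporting image-store usage on /var/lib/containers/storage/overlay-images (155608 bytes across 72 inodes at this point). A fragment continuing the sketch above, reusing its conn and ctx:

// Fragment: assumes conn, ctx, and imports from the ListContainers sketch above.
rt := runtimeapi.NewRuntimeServiceClient(conn)
img := runtimeapi.NewImageServiceClient(conn)

ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
if err != nil {
	log.Fatal(err)
}
// Matches the VersionResponse above: "cri-o 1.29.1 (CRI v1)".
fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
if err != nil {
	log.Fatal(err)
}
for _, u := range fs.ImageFilesystems {
	// Matches the FilesystemUsage entry above (UsedBytes 155608, InodesUsed 72).
	fmt.Printf("%s: %d bytes, %d inodes\n",
		u.FsId.Mountpoint, u.UsedBytes.Value, u.InodesUsed.Value)
}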
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.149180359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3eb81e68-0a8d-4104-bb93-6bcd3609c42c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.149251977Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3eb81e68-0a8d-4104-bb93-6bcd3609c42c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.149632894Z" level=debug msg="Response: &ListContainersResponse{...}" file="otel-collector/interceptors.go:74" id=3eb81e68-0a8d-4104-bb93-6bcd3609c42c name=/runtime.v1.RuntimeService/ListContainers [payload identical to the ListContainersResponse above; duplicate elided]
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.193035078Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02b603c7-4b6c-4804-8d53-1ce6876949ec name=/runtime.v1.RuntimeService/Version
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.193134076Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02b603c7-4b6c-4804-8d53-1ce6876949ec name=/runtime.v1.RuntimeService/Version
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.195904444Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f2bd235-e046-444b-935d-b8f3de3f5523 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.196432109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959925196403503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f2bd235-e046-444b-935d-b8f3de3f5523 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.197088706Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=307d2739-214c-459d-823a-29b42b7f422e name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.197159369Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=307d2739-214c-459d-823a-29b42b7f422e name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.197586280Z" level=debug msg="Response: &ListContainersResponse{...}" file="otel-collector/interceptors.go:74" id=307d2739-214c-459d-823a-29b42b7f422e name=/runtime.v1.RuntimeService/ListContainers [payload identical to the ListContainersResponse above; duplicate elided]
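For triaging this failure, the signal buried in these dumps is the restart churn rather than the inventory itself: kube-apiserver and kube-controller-manager each show an exited attempt 2 beside a running attempt 3, and storage-provisioner is on restartCount 5. A small helper over the same response type (a hypothetical reading aid, not part of minikube) can reduce a dump to that summary:

// summarizeRestarts keeps only the highest-numbered attempt per container name,
// yielding e.g. "kube-apiserver" -> "attempt=3 CONTAINER_RUNNING" for the dump above.
// Assumes the fmt and runtimeapi imports from the ListContainers sketch.
func summarizeRestarts(resp *runtimeapi.ListContainersResponse) map[string]string {
	latest := map[string]*runtimeapi.Container{}
	for _, c := range resp.Containers {
		cur, ok := latest[c.Metadata.Name]
		if !ok || c.Metadata.Attempt > cur.Metadata.Attempt {
			latest[c.Metadata.Name] = c
		}
	}
	out := make(map[string]string, len(latest))
	for name, c := range latest {
		out[name] = fmt.Sprintf("attempt=%d %s", c.Metadata.Attempt, c.State)
	}
	return out
}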
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.242233122Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5a83a76-b2d5-493a-bbbe-f625320651bd name=/runtime.v1.RuntimeService/Version
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.242317493Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5a83a76-b2d5-493a-bbbe-f625320651bd name=/runtime.v1.RuntimeService/Version
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.243517594Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ce5e841-6f97-4677-bc02-0953eb626153 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.244045482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959925244021860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ce5e841-6f97-4677-bc02-0953eb626153 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.245293609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93e69a3c-d74f-4483-9abc-1562723ae400 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.245373758Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93e69a3c-d74f-4483-9abc-1562723ae400 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:32:05 ha-505269 crio[3719]: time="2024-08-29 19:32:05.246242888Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d24eca9daeddb01201f9b34221347d5cb309931c118348968da060d6c1bf376c,PodSandboxId:2ba432bbcf20f2894079e374a21869e4c3b459312f9dfc9aa495e6ee0daa8237,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959819364656134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b090b02a2f6af71741e9e17cccfeb77b790febb24ce1fb5ec191d971424b5378,PodSandboxId:2efeb7211db1abde8ccda092af846f805b61900f51b39ffc9a91cd168b949318,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959798361283228,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01f810a9780487cf942a0a39907de8a94f02a13ddcaa83a795f9632463c9f407,PodSandboxId:4330e2b71d9ee5233683cbf4c7d7de37788b9165bd248254bd022ecd32a0430d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724959785684229787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bb7e864296eb98a7fa1d4cf9729fd6fe8a035d7acdcb8024f1172b92b20424,PodSandboxId:2ba432bbcf20f2894079e374a21869e4c3b459312f9dfc9aa495e6ee0daa8237,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959785014940911,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3c9479f487a0ae9282327db1e1b6fa1a3f31f7ef4354f10083d6f93334aa13,PodSandboxId:3506c46b68a6dd7a917e5603d36b8df5d715969c33b218c84522da041fa4ba35,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724959768134530987,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf717028309b4303be9380ab50d8f89a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de469f77c1414c9aeb375f6fab95f9bf73d297cf39f8e3032d44a27fce129160,PodSandboxId:d2d78cd683b75429a8fb19a8d336313277432fb28b10e5755db7a3f0044414b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724959753005180580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:392a8fdf39b58ef48526e9b0ee737518780d384adbf9a00f6d58e386f06aca86,PodSandboxId:feea7addc66c506d27e991c62c20bda17a854abc868774c641d9459da7c0a1f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959752578046557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"
containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3f6edccac07af4c68e2df70de1901a354b7e466a6480aa00a8c242372a1489,PodSandboxId:01947ff9720b96eec71aef069d1e00630167d535f0d83a9d399723674f8fc2e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959752426178642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ab99da1b1a7e8939462b754993d6616d0a80dc4262fe6f7d4d69c339c4a78c,PodSandboxId:d20310623d52ab5052b16cdf33028f557b5a6c2baf8a0a27a7a866b18ff8cace,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959752399739719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d575c067854c198c0503704ca102d4bb9e9823a228df479b19a5ef172e162fc8,PodSandboxId:d42fe0859024616c82f93e163ef976e1501b5a23cca570ea6584dd321bdb15a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724959752418561136,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b7c110f2f7341eb54db4c08f1a782899e9205e069ca96dfbf295e38d3fc601,PodSandboxId:1cd9911d50578a09d9516f4ec4c519c25999741911a9df06637349a98a8b5dac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724959752194158517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ea370e88590ef75080636cd247905edf96c5c382c3fee2a7cf22b8c428407c,PodSandboxId:2efeb7211db1abde8ccda092af846f805b61900f51b39ffc9a91cd168b949318,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959752332592108,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78
fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8547993a84026cfd6c27f9f068228b1ea619a9dee19e99e3b8d07b22b23584,PodSandboxId:60334f5771834dbdd2d31990ab7c7f4f3d2b23cb4846dd94949e33ab8f9dd2d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959752074925751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c
9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed600112468d9762d0ad5c0e554ca486c5cfc592114271d25cd84f5c09187db,PodSandboxId:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724959256520864398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc
304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75,PodSandboxId:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959112076742735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd,PodSandboxId:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959112071270130,Labels:map[string]string{io.kubernetes.containe
r.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604,PodSandboxId:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724959100077712568,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c,PodSandboxId:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959097303124595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc,PodSandboxId:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913f
c06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959085808573312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0,PodSandboxId:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f72
9c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724959085673137503,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93e69a3c-d74f-4483-9abc-1562723ae400 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d24eca9daeddb       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   3                   2ba432bbcf20f       kube-controller-manager-ha-505269
	b090b02a2f6af       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Running             kube-apiserver            3                   2efeb7211db1a       kube-apiserver-ha-505269
	01f810a978048       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   4330e2b71d9ee       busybox-7dff88458-psss7
	39bb7e864296e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Exited              kube-controller-manager   2                   2ba432bbcf20f       kube-controller-manager-ha-505269
	0c3c9479f487a       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   3506c46b68a6d       kube-vip-ha-505269
	de469f77c1414       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       5                   d2d78cd683b75       storage-provisioner
	392a8fdf39b58       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   feea7addc66c5       coredns-6f6b679f8f-bqqq5
	ed3f6edccac07       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   01947ff9720b9       etcd-ha-505269
	d575c067854c1       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   d42fe08590246       kindnet-7rp6z
	14ab99da1b1a7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   d20310623d52a       coredns-6f6b679f8f-qjgfg
	48ea370e88590       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Exited              kube-apiserver            2                   2efeb7211db1a       kube-apiserver-ha-505269
	46b7c110f2f73       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      2 minutes ago        Running             kube-proxy                1                   1cd9911d50578       kube-proxy-hx822
	ee8547993a840       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      2 minutes ago        Running             kube-scheduler            1                   60334f5771834       kube-scheduler-ha-505269
	7ed600112468d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   02692297ba9f1       busybox-7dff88458-psss7
	29d7e6c72fdaa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   f43b8211e2c73       coredns-6f6b679f8f-qjgfg
	1bc1a33f68ce7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   6b5276c7cbe29       coredns-6f6b679f8f-bqqq5
	f5e9dd792be09       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    13 minutes ago       Exited              kindnet-cni               0                   1f6e4f500f959       kindnet-7rp6z
	9b0cc96d9477c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago       Exited              kube-proxy                0                   303aabbca3328       kube-proxy-hx822
	52fd2d668a925       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   9d45b7206f46e       etcd-ha-505269
	d1f91ce133bed       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago       Exited              kube-scheduler            0                   8cfa0a246feb4       kube-scheduler-ha-505269
	
	
	==> coredns [14ab99da1b1a7e8939462b754993d6616d0a80dc4262fe6f7d4d69c339c4a78c] <==
	Trace[1366078645]: [10.001089474s] [10.001089474s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
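
The alternating "no route to host" and "connection refused" failures above are two different errno outcomes of the same probe against the kubernetes Service VIP 10.96.0.1:443: EHOSTUNREACH (an ICMP unreachable, typically from firewall/iptables state while the node's networking is being rebuilt) versus ECONNREFUSED (a TCP reset — the forwarding path is back but nothing is accepting yet). A minimal Go sketch, not part of the test suite, that reproduces the distinction when run from a pod on the cluster network:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the in-cluster kubernetes Service VIP seen in the
	// coredns errors above. DialTimeout surfaces the same errno strings:
	// "connect: no route to host" vs "connect: connection refused".
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP probe failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver VIP reachable")
}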
	
	
	==> coredns [1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd] <==
	[INFO] 10.244.0.4:49913 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073523s
	[INFO] 10.244.0.4:48970 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00128236s
	[INFO] 10.244.0.4:55431 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000054415s
	[INFO] 10.244.0.4:54011 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100096s
	[INFO] 10.244.0.4:57804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008517s
	[INFO] 10.244.2.2:41131 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117965s
	[INFO] 10.244.1.2:45186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106338s
	[INFO] 10.244.1.2:55754 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090657s
	[INFO] 10.244.0.4:56674 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071901s
	[INFO] 10.244.2.2:38366 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108693s
	[INFO] 10.244.2.2:46323 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210179s
	[INFO] 10.244.1.2:45861 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134603s
	[INFO] 10.244.1.2:56113 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085692s
	[INFO] 10.244.1.2:56364 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124593s
	[INFO] 10.244.1.2:47826 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121887s
	[INFO] 10.244.0.4:45102 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000150401s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1902&timeout=5m33s&timeoutSeconds=333&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1902": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1902": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1901&timeout=8m42s&timeoutSeconds=522&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1902": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1902": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1902&timeout=7m53s&timeoutSeconds=473&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75] <==
	[INFO] 10.244.1.2:43142 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092944s
	[INFO] 10.244.1.2:53648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107032s
	[INFO] 10.244.0.4:57451 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001845348s
	[INFO] 10.244.2.2:52124 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171457s
	[INFO] 10.244.2.2:35561 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076513s
	[INFO] 10.244.2.2:43265 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081638s
	[INFO] 10.244.1.2:37225 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147344s
	[INFO] 10.244.1.2:48252 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148007s
	[INFO] 10.244.0.4:60295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013086s
	[INFO] 10.244.0.4:48577 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072897s
	[INFO] 10.244.0.4:48965 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087209s
	[INFO] 10.244.2.2:54597 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016109s
	[INFO] 10.244.2.2:38187 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150915s
	[INFO] 10.244.0.4:36462 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093452s
	[INFO] 10.244.0.4:43748 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071292s
	[INFO] 10.244.0.4:55783 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059972s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1889&timeout=9m4s&timeoutSeconds=544&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1901&timeout=9m58s&timeoutSeconds=598&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
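
Unlike the connectivity failures elsewhere in this log, the final errors in this container are 401s ("Unauthorized", "the server has asked for the client to provide credentials"): the apiserver answered but rejected the pod's credentials. One plausible reading is that the reflector reconnected to the restarted apiserver with a service-account token it no longer accepts. A sketch of the in-cluster auth path client-go uses (standard token mount path; a hypothetical diagnostic, not taken from the report):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/rest"
)

func main() {
	// client-go's in-cluster config authenticates with the projected
	// service-account token at this well-known path; a 401 from the
	// apiserver means this credential was rejected, not that the
	// network path was down.
	const tokenPath = "/var/run/secrets/kubernetes.io/serviceaccount/token"
	if _, err := os.Stat(tokenPath); err != nil {
		fmt.Println("token not mounted:", err)
		return
	}
	cfg, err := rest.InClusterConfig()
	if err != nil {
		fmt.Println("in-cluster config failed:", err)
		return
	}
	fmt.Println("would authenticate to:", cfg.Host)
}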
	
	
	==> coredns [392a8fdf39b58ef48526e9b0ee737518780d384adbf9a00f6d58e386f06aca86] <==
	Trace[883420597]: [10.545522632s] [10.545522632s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:33790->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:51966->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
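
The repeated `plugin/ready: Still waiting on: "kubernetes"` lines mean this coredns instance is running but reporting not-ready until its kubernetes plugin completes its initial sync against the (still flapping) apiserver. coredns's ready plugin serves HTTP on port 8181 at /ready and returns 200 only once all tracked plugins signal readiness. A probe sketch, assuming the default ready port; the pod IP below is the 10.244.0.6 address visible in this container's own error lines:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// podIP is the coredns pod address (here taken from the log above;
	// in general, from `kubectl get pods -n kube-system -o wide`). 8181
	// is the default port of the ready plugin, which serves /ready with
	// 200 only after plugins like kubernetes finish their initial sync.
	const podIP = "10.244.0.6"
	client := &http.Client{Timeout: 3 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:8181/ready", podIP))
	if err != nil {
		fmt.Println("ready probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("ready endpoint status:", resp.Status)
}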
	
	
	==> describe nodes <==
	Name:               ha-505269
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-505269
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=ha-505269
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T19_18_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:18:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-505269
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:32:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:30:05 +0000   Thu, 29 Aug 2024 19:18:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:30:05 +0000   Thu, 29 Aug 2024 19:18:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:30:05 +0000   Thu, 29 Aug 2024 19:18:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:30:05 +0000   Thu, 29 Aug 2024 19:18:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.56
	  Hostname:    ha-505269
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fddeecce7ac74aa7bff3cef388a156b1
	  System UUID:                fddeecce-7ac7-4aa7-bff3-cef388a156b1
	  Boot ID:                    1446f3e5-6319-4e2f-82e2-8ba9409f038f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-psss7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-bqqq5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-qjgfg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-505269                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-7rp6z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-505269             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-505269    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-hx822                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-505269             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-505269                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m9s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-505269 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-505269 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-505269 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-505269 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-505269 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-505269 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           13m                    node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-505269 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	  Warning  ContainerGCFailed        3m51s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             3m13s (x2 over 3m38s)  kubelet          Node ha-505269 status is now: NodeNotReady
	  Normal   RegisteredNode           2m8s                   node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	  Normal   RegisteredNode           104s                   node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	  Normal   RegisteredNode           39s                    node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
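
The event stream above (ContainerGCFailed on the missing crio.sock, NodeNotReady, then a run of RegisteredNode events as each control-plane member rejoins) is the restart sequence the failing HA tests are asserting against. A minimal client-go sketch, assuming a reachable kubeconfig at the default path, that dumps the same Ready conditions this `describe nodes` output shows:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); in the minikube test
	// environment the active context would differ, so treat this as a sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			// Mirrors the Conditions table above: Type, Status, Reason.
			fmt.Printf("%-16s %-16s %-6s %s\n", n.Name, c.Type, c.Status, c.Reason)
		}
	}
}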
	
	
	Name:               ha-505269-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-505269-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=ha-505269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_19_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:19:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-505269-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:32:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:30:31 +0000   Thu, 29 Aug 2024 19:30:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:30:31 +0000   Thu, 29 Aug 2024 19:30:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:30:31 +0000   Thu, 29 Aug 2024 19:30:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:30:31 +0000   Thu, 29 Aug 2024 19:30:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-505269-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc422cc060b34981a3c71775f3af90fa
	  System UUID:                dc422cc0-60b3-4981-a3c7-1775f3af90fa
	  Boot ID:                    ed67c8ee-a9f9-4390-bfbf-4216db72f9b4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hcgzg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-505269-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-sthc8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-505269-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-505269-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-jxbdt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-505269-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-505269-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 104s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-505269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-505269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-505269-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  NodeNotReady             9m21s                  node-controller  Node ha-505269-m02 status is now: NodeNotReady
	  Normal  Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node ha-505269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node ha-505269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m35s (x7 over 2m35s)  kubelet          Node ha-505269-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m8s                   node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  RegisteredNode           104s                   node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  RegisteredNode           39s                    node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	
	
	Name:               ha-505269-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-505269-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=ha-505269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_20_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:20:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-505269-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:31:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:31:37 +0000   Thu, 29 Aug 2024 19:31:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:31:37 +0000   Thu, 29 Aug 2024 19:31:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:31:37 +0000   Thu, 29 Aug 2024 19:31:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:31:37 +0000   Thu, 29 Aug 2024 19:31:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    ha-505269-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fc042d3e84d419187ce4fd6ad6a07e3
	  System UUID:                7fc042d3-e84d-4191-87ce-4fd6ad6a07e3
	  Boot ID:                    720bbe91-c020-4782-a99b-3bd4fdcf13b4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2fh45                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-505269-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-lr2lx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-505269-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-505269-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-s6zxk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-505269-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-505269-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 42s                kube-proxy       
	  Normal   RegisteredNode           11m                node-controller  Node ha-505269-m03 event: Registered Node ha-505269-m03 in Controller
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-505269-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-505269-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-505269-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-505269-m03 event: Registered Node ha-505269-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-505269-m03 event: Registered Node ha-505269-m03 in Controller
	  Normal   RegisteredNode           2m8s               node-controller  Node ha-505269-m03 event: Registered Node ha-505269-m03 in Controller
	  Normal   RegisteredNode           104s               node-controller  Node ha-505269-m03 event: Registered Node ha-505269-m03 in Controller
	  Normal   NodeNotReady             88s                node-controller  Node ha-505269-m03 status is now: NodeNotReady
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 59s (x2 over 59s)  kubelet          Node ha-505269-m03 has been rebooted, boot id: 720bbe91-c020-4782-a99b-3bd4fdcf13b4
	  Normal   NodeHasSufficientMemory  59s (x3 over 59s)  kubelet          Node ha-505269-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x3 over 59s)  kubelet          Node ha-505269-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x3 over 59s)  kubelet          Node ha-505269-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             59s                kubelet          Node ha-505269-m03 status is now: NodeNotReady
	  Normal   NodeReady                59s                kubelet          Node ha-505269-m03 status is now: NodeReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-505269-m03 event: Registered Node ha-505269-m03 in Controller
	
	
	Name:               ha-505269-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-505269-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=ha-505269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_21_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:21:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-505269-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:31:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:31:56 +0000   Thu, 29 Aug 2024 19:31:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:31:56 +0000   Thu, 29 Aug 2024 19:31:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:31:56 +0000   Thu, 29 Aug 2024 19:31:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:31:56 +0000   Thu, 29 Aug 2024 19:31:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-505269-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a48c94cc9aca47538967ceed34ba2fed
	  System UUID:                a48c94cc-9aca-4753-8967-ceed34ba2fed
	  Boot ID:                    79a74b2c-30a1-4e3c-a4d1-a0d36fcb9738
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5lkbf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-b5p66    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-505269-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-505269-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-505269-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-505269-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m8s               node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal   RegisteredNode           104s               node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal   NodeNotReady             88s                node-controller  Node ha-505269-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s                 kubelet          Node ha-505269-m04 has been rebooted, boot id: 79a74b2c-30a1-4e3c-a4d1-a0d36fcb9738
	  Normal   NodeReady                9s                 kubelet          Node ha-505269-m04 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  8s (x2 over 9s)    kubelet          Node ha-505269-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 9s)    kubelet          Node ha-505269-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 9s)    kubelet          Node ha-505269-m04 status is now: NodeHasSufficientPID
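The three node descriptions above were captured while ha-505269-m02, -m03, and -m04 were rejoining after the restart; the repeated RegisteredNode / NodeNotReady / Rebooted events track each controller-manager instance re-registering the nodes as control planes came back. To regenerate this view against a live profile (assuming the ha-505269 profile is still running and, as with the other kubectl invocations in this report, that the kubeconfig context carries the profile name):

    kubectl --context ha-505269 describe nodes ha-505269-m02 ha-505269-m03 ha-505269-m04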
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.251780] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.063740] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055892] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.200106] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.121292] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.278954] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.975871] systemd-fstab-generator[753]: Ignoring "noauto" option for root device
	[Aug29 19:18] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.064250] kauditd_printk_skb: 158 callbacks suppressed
	[  +8.795220] kauditd_printk_skb: 79 callbacks suppressed
	[  +1.256542] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +6.194607] kauditd_printk_skb: 54 callbacks suppressed
	[Aug29 19:19] kauditd_printk_skb: 24 callbacks suppressed
	[Aug29 19:26] kauditd_printk_skb: 1 callbacks suppressed
	[Aug29 19:29] systemd-fstab-generator[3644]: Ignoring "noauto" option for root device
	[  +0.154243] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.177111] systemd-fstab-generator[3670]: Ignoring "noauto" option for root device
	[  +0.150845] systemd-fstab-generator[3682]: Ignoring "noauto" option for root device
	[  +0.294875] systemd-fstab-generator[3710]: Ignoring "noauto" option for root device
	[  +0.793246] systemd-fstab-generator[3803]: Ignoring "noauto" option for root device
	[  +3.742669] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.234337] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.059149] kauditd_printk_skb: 1 callbacks suppressed
	[ +22.864317] kauditd_printk_skb: 5 callbacks suppressed
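The dmesg excerpt is mostly systemd-fstab-generator noise from guest boot plus suppressed audit messages; nothing kernel-level points at the failure. If a fuller view is needed, the kernel ring buffer can be read from inside the VM (assuming the profile's VM is reachable over minikube ssh):

    minikube ssh -p ha-505269 -- sudo dmesg | tail -n 100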
	
	
	==> etcd [52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc] <==
	2024/08/29 19:27:35 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/29 19:27:35 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/29 19:27:35 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/29 19:27:35 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-29T19:27:35.173724Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.56:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T19:27:35.173823Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.56:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-29T19:27:35.173899Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"be139f16c87a8e87","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-29T19:27:35.175242Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:27:35.175482Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:27:35.175527Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:27:35.175587Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:27:35.175668Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:27:35.175727Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:27:35.175755Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:27:35.175780Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f5dc62d3f2837830"}
	{"level":"info","ts":"2024-08-29T19:27:35.176129Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f5dc62d3f2837830"}
	{"level":"info","ts":"2024-08-29T19:27:35.176242Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f5dc62d3f2837830"}
	{"level":"info","ts":"2024-08-29T19:27:35.176353Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830"}
	{"level":"info","ts":"2024-08-29T19:27:35.176410Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830"}
	{"level":"info","ts":"2024-08-29T19:27:35.176469Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830"}
	{"level":"info","ts":"2024-08-29T19:27:35.176498Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f5dc62d3f2837830"}
	{"level":"info","ts":"2024-08-29T19:27:35.182723Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.56:2380"}
	{"level":"info","ts":"2024-08-29T19:27:35.182846Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.56:2380"}
	{"level":"info","ts":"2024-08-29T19:27:35.182871Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-505269","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.56:2380"],"advertise-client-urls":["https://192.168.39.56:2379"]}
	{"level":"warn","ts":"2024-08-29T19:27:35.182902Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.575390709s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	
	
	==> etcd [ed3f6edccac07af4c68e2df70de1901a354b7e466a6480aa00a8c242372a1489] <==
	{"level":"warn","ts":"2024-08-29T19:31:01.695464Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"be139f16c87a8e87","from":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T19:31:03.412623Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"324dbe8b03e4639e","rtt":"0s","error":"dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T19:31:03.412768Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"324dbe8b03e4639e","rtt":"0s","error":"dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T19:31:04.356798Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.178:2380/version","remote-member-id":"324dbe8b03e4639e","error":"Get \"https://192.168.39.178:2380/version\": dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T19:31:04.356872Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"324dbe8b03e4639e","error":"Get \"https://192.168.39.178:2380/version\": dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T19:31:08.359340Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.178:2380/version","remote-member-id":"324dbe8b03e4639e","error":"Get \"https://192.168.39.178:2380/version\": dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T19:31:08.359476Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"324dbe8b03e4639e","error":"Get \"https://192.168.39.178:2380/version\": dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T19:31:08.413652Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"324dbe8b03e4639e","rtt":"0s","error":"dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T19:31:08.413766Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"324dbe8b03e4639e","rtt":"0s","error":"dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T19:31:12.361690Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.178:2380/version","remote-member-id":"324dbe8b03e4639e","error":"Get \"https://192.168.39.178:2380/version\": dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T19:31:12.361791Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"324dbe8b03e4639e","error":"Get \"https://192.168.39.178:2380/version\": dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T19:31:13.413793Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"324dbe8b03e4639e","rtt":"0s","error":"dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T19:31:13.413852Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"324dbe8b03e4639e","rtt":"0s","error":"dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-29T19:31:13.794872Z","caller":"traceutil/trace.go:171","msg":"trace[1352708963] transaction","detail":"{read_only:false; response_revision:2449; number_of_response:1; }","duration":"166.27728ms","start":"2024-08-29T19:31:13.628558Z","end":"2024-08-29T19:31:13.794835Z","steps":["trace[1352708963] 'process raft request'  (duration: 166.034524ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T19:31:16.364656Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.178:2380/version","remote-member-id":"324dbe8b03e4639e","error":"Get \"https://192.168.39.178:2380/version\": dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T19:31:16.364790Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"324dbe8b03e4639e","error":"Get \"https://192.168.39.178:2380/version\": dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-29T19:31:17.999218Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:31:17.999308Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:31:17.999809Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:31:18.004823Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"be139f16c87a8e87","to":"324dbe8b03e4639e","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-29T19:31:18.004914Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:31:18.016370Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"be139f16c87a8e87","to":"324dbe8b03e4639e","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-29T19:31:18.016419Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"warn","ts":"2024-08-29T19:31:18.414087Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"324dbe8b03e4639e","rtt":"0s","error":"dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T19:31:18.414152Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"324dbe8b03e4639e","rtt":"0s","error":"dial tcp 192.168.39.178:2380: connect: connection refused"}
	
	
	==> kernel <==
	 19:32:05 up 14 min,  0 users,  load average: 1.07, 0.99, 0.60
	Linux ha-505269 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d575c067854c198c0503704ca102d4bb9e9823a228df479b19a5ef172e162fc8] <==
	I0829 19:31:33.586103       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:31:43.579078       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:31:43.579352       1 main.go:299] handling current node
	I0829 19:31:43.579393       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:31:43.579404       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:31:43.579576       1 main.go:295] Handling node with IPs: map[192.168.39.178:{}]
	I0829 19:31:43.579609       1 main.go:322] Node ha-505269-m03 has CIDR [10.244.2.0/24] 
	I0829 19:31:43.579687       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:31:43.579715       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:31:53.585834       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:31:53.586019       1 main.go:299] handling current node
	I0829 19:31:53.586061       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:31:53.586071       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:31:53.586270       1 main.go:295] Handling node with IPs: map[192.168.39.178:{}]
	I0829 19:31:53.586299       1 main.go:322] Node ha-505269-m03 has CIDR [10.244.2.0/24] 
	I0829 19:31:53.586366       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:31:53.586371       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:32:03.581461       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:32:03.581520       1 main.go:299] handling current node
	I0829 19:32:03.581538       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:32:03.581546       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:32:03.581690       1 main.go:295] Handling node with IPs: map[192.168.39.178:{}]
	I0829 19:32:03.581724       1 main.go:322] Node ha-505269-m03 has CIDR [10.244.2.0/24] 
	I0829 19:32:03.581812       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:32:03.581840       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604] <==
	I0829 19:27:01.258595       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:27:11.258120       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:27:11.258232       1 main.go:299] handling current node
	I0829 19:27:11.258273       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:27:11.258292       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:27:11.258467       1 main.go:295] Handling node with IPs: map[192.168.39.178:{}]
	I0829 19:27:11.258534       1 main.go:322] Node ha-505269-m03 has CIDR [10.244.2.0/24] 
	I0829 19:27:11.258630       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:27:11.258652       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:27:21.258540       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:27:21.258576       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:27:21.258721       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:27:21.258727       1 main.go:299] handling current node
	I0829 19:27:21.258743       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:27:21.258747       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:27:21.258791       1 main.go:295] Handling node with IPs: map[192.168.39.178:{}]
	I0829 19:27:21.258795       1 main.go:322] Node ha-505269-m03 has CIDR [10.244.2.0/24] 
	I0829 19:27:31.267130       1 main.go:295] Handling node with IPs: map[192.168.39.178:{}]
	I0829 19:27:31.267398       1 main.go:322] Node ha-505269-m03 has CIDR [10.244.2.0/24] 
	I0829 19:27:31.267644       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:27:31.267692       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:27:31.267786       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:27:31.267807       1 main.go:299] handling current node
	I0829 19:27:31.267844       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:27:31.267860       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
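Both kindnet instances (pre- and post-restart) report the same steady state: the current node plus the three remote nodes with CIDRs 10.244.1.0/24 through 10.244.3.0/24, so pod-network routing was re-synced on both sides of the restart. The routes kindnet programs can be spot-checked from the host (again assuming minikube ssh access to the profile):

    minikube ssh -p ha-505269 -- ip route show | grep 10.244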
	
	
	==> kube-apiserver [48ea370e88590ef75080636cd247905edf96c5c382c3fee2a7cf22b8c428407c] <==
	I0829 19:29:12.739081       1 options.go:228] external host was not specified, using 192.168.39.56
	I0829 19:29:12.750599       1 server.go:142] Version: v1.31.0
	I0829 19:29:12.750630       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:29:13.316201       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0829 19:29:13.342872       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0829 19:29:13.351920       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0829 19:29:13.351958       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0829 19:29:13.352197       1 instance.go:232] Using reconciler: lease
	W0829 19:29:33.315496       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0829 19:29:33.315829       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0829 19:29:33.353148       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
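This first apiserver instance died during bring-up: it waited roughly 20 seconds for etcd on 127.0.0.1:2379 (19:29:13 to 19:29:33), hit the storage-factory deadline, and exited fatally, which is consistent with etcd still restarting at that point; the second instance below comes up once etcd is serving. After recovery, apiserver readiness can be confirmed end-to-end (assuming the same context naming as above):

    kubectl --context ha-505269 get --raw='/readyz?verbose'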
	
	
	==> kube-apiserver [b090b02a2f6af71741e9e17cccfeb77b790febb24ce1fb5ec191d971424b5378] <==
	I0829 19:30:00.442199       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0829 19:30:00.536598       1 shared_informer.go:320] Caches are synced for configmaps
	I0829 19:30:00.536750       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0829 19:30:00.536778       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0829 19:30:00.542462       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0829 19:30:00.544197       1 aggregator.go:171] initial CRD sync complete...
	I0829 19:30:00.544240       1 autoregister_controller.go:144] Starting autoregister controller
	I0829 19:30:00.544247       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0829 19:30:00.544257       1 cache.go:39] Caches are synced for autoregister controller
	I0829 19:30:00.545618       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0829 19:30:00.552119       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0829 19:30:00.552225       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	W0829 19:30:00.562428       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.178 192.168.39.68]
	I0829 19:30:00.575914       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0829 19:30:00.576087       1 policy_source.go:224] refreshing policies
	I0829 19:30:00.576303       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0829 19:30:00.590887       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0829 19:30:00.634740       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0829 19:30:00.637459       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0829 19:30:00.665600       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 19:30:00.677192       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0829 19:30:00.680383       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0829 19:30:01.443128       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0829 19:30:01.794021       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.178 192.168.39.56 192.168.39.68]
	W0829 19:30:11.793500       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.56 192.168.39.68]
	
	
	==> kube-controller-manager [39bb7e864296eb98a7fa1d4cf9729fd6fe8a035d7acdcb8024f1172b92b20424] <==
	I0829 19:29:45.889693       1 serving.go:386] Generated self-signed cert in-memory
	I0829 19:29:46.599752       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0829 19:29:46.599855       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:29:46.601414       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0829 19:29:46.601582       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0829 19:29:46.601648       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0829 19:29:46.601725       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0829 19:29:56.605285       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.56:8443/healthz\": dial tcp 192.168.39.56:8443: connect: connection refused"
	
	
	==> kube-controller-manager [d24eca9daeddb01201f9b34221347d5cb309931c118348968da060d6c1bf376c] <==
	I0829 19:30:37.723725       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m03"
	I0829 19:30:37.724424       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:30:37.754594       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:30:37.758313       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m03"
	I0829 19:30:37.858778       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.374686ms"
	I0829 19:30:37.859030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="83.25µs"
	I0829 19:30:41.995252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m03"
	I0829 19:30:43.072886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m03"
	I0829 19:30:51.820709       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="21.296943ms"
	I0829 19:30:51.821121       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="84.79µs"
	I0829 19:30:52.079250       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:30:53.158728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:31:06.605954       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m03"
	I0829 19:31:06.632392       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m03"
	I0829 19:31:06.970465       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m03"
	I0829 19:31:07.360236       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="93.506µs"
	I0829 19:31:24.074069       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.791147ms"
	I0829 19:31:24.074831       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="210.582µs"
	I0829 19:31:26.356682       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:31:26.413493       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:31:37.212491       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m03"
	I0829 19:31:56.937416       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:31:56.938122       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-505269-m04"
	I0829 19:31:56.954265       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:31:56.993279       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	
	
	==> kube-proxy [46b7c110f2f7341eb54db4c08f1a782899e9205e069ca96dfbf295e38d3fc601] <==
	I0829 19:29:55.712702       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:29:55.713329       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:29:55.713379       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:29:55.715361       1 config.go:197] "Starting service config controller"
	I0829 19:29:55.715438       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:29:55.715478       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:29:55.715494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:29:55.716372       1 config.go:326] "Starting node config controller"
	I0829 19:29:55.716411       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0829 19:29:58.746205       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0829 19:29:58.746369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:29:58.746511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:29:58.746612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:29:58.746671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:29:58.746755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:29:58.746792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:30:01.817856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:30:01.817945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:30:01.818116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:30:01.818601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:30:01.818688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:30:01.818724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0829 19:30:04.216060       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 19:30:04.617083       1 shared_informer.go:320] Caches are synced for node config
	I0829 19:30:04.815764       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c] <==
	E0829 19:26:29.851423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:29.851672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:29.851845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:32.923030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:32.923136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:32.923047       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:32.923285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:35.994951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:35.995163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1865\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:39.066808       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:39.067021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:39.067112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:39.067162       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:48.282373       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:48.282601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1865\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:51.355204       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:51.355653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:51.355840       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:51.355900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:27:09.787676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:27:09.787778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:27:09.787889       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:27:09.787926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1865\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:27:19.002377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:27:19.002559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0] <==
	E0829 19:21:30.554445       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-shg8j\": pod kube-proxy-shg8j is already assigned to node \"ha-505269-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-shg8j" node="ha-505269-m04"
	E0829 19:21:30.554616       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 05405fa6-d40f-446d-ad32-18b243d7b162(kube-system/kube-proxy-shg8j) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-shg8j"
	E0829 19:21:30.554728       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-shg8j\": pod kube-proxy-shg8j is already assigned to node \"ha-505269-m04\"" pod="kube-system/kube-proxy-shg8j"
	I0829 19:21:30.554863       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-shg8j" node="ha-505269-m04"
	E0829 19:21:30.555526       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5lkbf\": pod kindnet-5lkbf is already assigned to node \"ha-505269-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5lkbf" node="ha-505269-m04"
	E0829 19:21:30.558296       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 112e2462-a26a-4f91-a405-dab3468f9071(kube-system/kindnet-5lkbf) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5lkbf"
	E0829 19:21:30.559049       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5lkbf\": pod kindnet-5lkbf is already assigned to node \"ha-505269-m04\"" pod="kube-system/kindnet-5lkbf"
	I0829 19:21:30.559106       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5lkbf" node="ha-505269-m04"
	E0829 19:27:25.046614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0829 19:27:25.874772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0829 19:27:27.555185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0829 19:27:28.682860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0829 19:27:28.861646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0829 19:27:29.068173       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0829 19:27:29.565552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0829 19:27:29.606070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0829 19:27:30.829134       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0829 19:27:31.977335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0829 19:27:32.317188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0829 19:27:32.374845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0829 19:27:32.908740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0829 19:27:33.024045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	I0829 19:27:35.120341       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0829 19:27:35.121683       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0829 19:27:35.124295       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ee8547993a84026cfd6c27f9f068228b1ea619a9dee19e99e3b8d07b22b23584] <==
	W0829 19:29:51.973513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.56:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:51.973590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.56:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:52.090201       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.56:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:52.090267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.56:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:52.679688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.56:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:52.679748       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.56:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:53.108961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.56:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:53.109234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.56:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:53.669469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.56:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:53.669560       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.56:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:53.763460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.56:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:53.763538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.56:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:53.787958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.56:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:53.788140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.56:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:53.839265       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.56:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:53.839381       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.56:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:54.211253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.56:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:54.211336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.56:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:54.860090       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.56:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:54.860151       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.56:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:55.624620       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.56:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:55.624746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.56:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:56.826084       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.56:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:56.826135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.56:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	I0829 19:30:15.367741       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 19:31:12 ha-505269 kubelet[1314]: E0829 19:31:12.351056    1314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6b7cd00a-94da-4e42-b7ae-289aab759c4f)\"" pod="kube-system/storage-provisioner" podUID="6b7cd00a-94da-4e42-b7ae-289aab759c4f"
	Aug 29 19:31:14 ha-505269 kubelet[1314]: E0829 19:31:14.381287    1314 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:31:14 ha-505269 kubelet[1314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:31:14 ha-505269 kubelet[1314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:31:14 ha-505269 kubelet[1314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:31:14 ha-505269 kubelet[1314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:31:14 ha-505269 kubelet[1314]: E0829 19:31:14.575708    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959874575100501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:31:14 ha-505269 kubelet[1314]: E0829 19:31:14.575745    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959874575100501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:31:24 ha-505269 kubelet[1314]: E0829 19:31:24.578691    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959884578113740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:31:24 ha-505269 kubelet[1314]: E0829 19:31:24.579125    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959884578113740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:31:26 ha-505269 kubelet[1314]: I0829 19:31:26.349059    1314 scope.go:117] "RemoveContainer" containerID="de469f77c1414c9aeb375f6fab95f9bf73d297cf39f8e3032d44a27fce129160"
	Aug 29 19:31:26 ha-505269 kubelet[1314]: E0829 19:31:26.349704    1314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6b7cd00a-94da-4e42-b7ae-289aab759c4f)\"" pod="kube-system/storage-provisioner" podUID="6b7cd00a-94da-4e42-b7ae-289aab759c4f"
	Aug 29 19:31:34 ha-505269 kubelet[1314]: E0829 19:31:34.581629    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959894581224331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:31:34 ha-505269 kubelet[1314]: E0829 19:31:34.581657    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959894581224331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:31:40 ha-505269 kubelet[1314]: I0829 19:31:40.349076    1314 scope.go:117] "RemoveContainer" containerID="de469f77c1414c9aeb375f6fab95f9bf73d297cf39f8e3032d44a27fce129160"
	Aug 29 19:31:40 ha-505269 kubelet[1314]: E0829 19:31:40.349552    1314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6b7cd00a-94da-4e42-b7ae-289aab759c4f)\"" pod="kube-system/storage-provisioner" podUID="6b7cd00a-94da-4e42-b7ae-289aab759c4f"
	Aug 29 19:31:44 ha-505269 kubelet[1314]: E0829 19:31:44.583599    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959904583266298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:31:44 ha-505269 kubelet[1314]: E0829 19:31:44.583648    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959904583266298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:31:52 ha-505269 kubelet[1314]: I0829 19:31:52.349395    1314 scope.go:117] "RemoveContainer" containerID="de469f77c1414c9aeb375f6fab95f9bf73d297cf39f8e3032d44a27fce129160"
	Aug 29 19:31:52 ha-505269 kubelet[1314]: E0829 19:31:52.351887    1314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6b7cd00a-94da-4e42-b7ae-289aab759c4f)\"" pod="kube-system/storage-provisioner" podUID="6b7cd00a-94da-4e42-b7ae-289aab759c4f"
	Aug 29 19:31:54 ha-505269 kubelet[1314]: E0829 19:31:54.585686    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959914585330182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:31:54 ha-505269 kubelet[1314]: E0829 19:31:54.586178    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959914585330182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:32:04 ha-505269 kubelet[1314]: E0829 19:32:04.587817    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959924587544296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:32:04 ha-505269 kubelet[1314]: E0829 19:32:04.587867    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959924587544296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:32:06 ha-505269 kubelet[1314]: I0829 19:32:06.349767    1314 scope.go:117] "RemoveContainer" containerID="de469f77c1414c9aeb375f6fab95f9bf73d297cf39f8e3032d44a27fce129160"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:32:04.793167   38002 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19530-11185/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-505269 -n ha-505269
helpers_test.go:261: (dbg) Run:  kubectl --context ha-505269 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (394.85s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 stop -v=7 --alsologtostderr
E0829 19:33:45.975330   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-505269 stop -v=7 --alsologtostderr: exit status 82 (2m0.46125885s)

                                                
                                                
-- stdout --
	* Stopping node "ha-505269-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 19:32:23.949141   38412 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:32:23.949236   38412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:32:23.949243   38412 out.go:358] Setting ErrFile to fd 2...
	I0829 19:32:23.949248   38412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:32:23.949429   38412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:32:23.949627   38412 out.go:352] Setting JSON to false
	I0829 19:32:23.949694   38412 mustload.go:65] Loading cluster: ha-505269
	I0829 19:32:23.950081   38412 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:32:23.950164   38412 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:32:23.950331   38412 mustload.go:65] Loading cluster: ha-505269
	I0829 19:32:23.950452   38412 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:32:23.950471   38412 stop.go:39] StopHost: ha-505269-m04
	I0829 19:32:23.950867   38412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:32:23.950914   38412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:32:23.966011   38412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0829 19:32:23.966474   38412 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:32:23.967140   38412 main.go:141] libmachine: Using API Version  1
	I0829 19:32:23.967171   38412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:32:23.967542   38412 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:32:23.970123   38412 out.go:177] * Stopping node "ha-505269-m04"  ...
	I0829 19:32:23.971382   38412 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0829 19:32:23.971419   38412 main.go:141] libmachine: (ha-505269-m04) Calling .DriverName
	I0829 19:32:23.971703   38412 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0829 19:32:23.971745   38412 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHHostname
	I0829 19:32:23.974953   38412 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:32:23.975368   38412 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:31:51 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:32:23.975412   38412 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:32:23.975517   38412 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHPort
	I0829 19:32:23.975702   38412 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHKeyPath
	I0829 19:32:23.975883   38412 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHUsername
	I0829 19:32:23.976067   38412 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m04/id_rsa Username:docker}
	I0829 19:32:24.060845   38412 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0829 19:32:24.114043   38412 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0829 19:32:24.166757   38412 main.go:141] libmachine: Stopping "ha-505269-m04"...
	I0829 19:32:24.166781   38412 main.go:141] libmachine: (ha-505269-m04) Calling .GetState
	I0829 19:32:24.168447   38412 main.go:141] libmachine: (ha-505269-m04) Calling .Stop
	I0829 19:32:24.172038   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 0/120
	I0829 19:32:25.173419   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 1/120
	I0829 19:32:26.174856   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 2/120
	I0829 19:32:27.177106   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 3/120
	I0829 19:32:28.178725   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 4/120
	I0829 19:32:29.180433   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 5/120
	I0829 19:32:30.182122   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 6/120
	I0829 19:32:31.183270   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 7/120
	I0829 19:32:32.184954   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 8/120
	I0829 19:32:33.186406   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 9/120
	I0829 19:32:34.188694   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 10/120
	I0829 19:32:35.190091   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 11/120
	I0829 19:32:36.191517   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 12/120
	I0829 19:32:37.193020   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 13/120
	I0829 19:32:38.194263   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 14/120
	I0829 19:32:39.196152   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 15/120
	I0829 19:32:40.197654   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 16/120
	I0829 19:32:41.198923   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 17/120
	I0829 19:32:42.200247   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 18/120
	I0829 19:32:43.202085   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 19/120
	I0829 19:32:44.204292   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 20/120
	I0829 19:32:45.205676   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 21/120
	I0829 19:32:46.206931   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 22/120
	I0829 19:32:47.209126   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 23/120
	I0829 19:32:48.210575   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 24/120
	I0829 19:32:49.212344   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 25/120
	I0829 19:32:50.213779   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 26/120
	I0829 19:32:51.215112   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 27/120
	I0829 19:32:52.216506   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 28/120
	I0829 19:32:53.217781   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 29/120
	I0829 19:32:54.219282   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 30/120
	I0829 19:32:55.220981   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 31/120
	I0829 19:32:56.222302   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 32/120
	I0829 19:32:57.224203   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 33/120
	I0829 19:32:58.225463   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 34/120
	I0829 19:32:59.227393   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 35/120
	I0829 19:33:00.229064   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 36/120
	I0829 19:33:01.230676   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 37/120
	I0829 19:33:02.231958   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 38/120
	I0829 19:33:03.233370   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 39/120
	I0829 19:33:04.235519   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 40/120
	I0829 19:33:05.236931   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 41/120
	I0829 19:33:06.239203   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 42/120
	I0829 19:33:07.241040   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 43/120
	I0829 19:33:08.242396   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 44/120
	I0829 19:33:09.244368   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 45/120
	I0829 19:33:10.245520   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 46/120
	I0829 19:33:11.246962   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 47/120
	I0829 19:33:12.248385   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 48/120
	I0829 19:33:13.249552   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 49/120
	I0829 19:33:14.251563   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 50/120
	I0829 19:33:15.252834   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 51/120
	I0829 19:33:16.254039   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 52/120
	I0829 19:33:17.255449   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 53/120
	I0829 19:33:18.256626   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 54/120
	I0829 19:33:19.258723   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 55/120
	I0829 19:33:20.261023   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 56/120
	I0829 19:33:21.262765   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 57/120
	I0829 19:33:22.264030   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 58/120
	I0829 19:33:23.265367   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 59/120
	I0829 19:33:24.267398   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 60/120
	I0829 19:33:25.268687   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 61/120
	I0829 19:33:26.269994   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 62/120
	I0829 19:33:27.271554   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 63/120
	I0829 19:33:28.273091   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 64/120
	I0829 19:33:29.274883   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 65/120
	I0829 19:33:30.276344   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 66/120
	I0829 19:33:31.277694   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 67/120
	I0829 19:33:32.280210   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 68/120
	I0829 19:33:33.281619   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 69/120
	I0829 19:33:34.283778   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 70/120
	I0829 19:33:35.286287   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 71/120
	I0829 19:33:36.287786   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 72/120
	I0829 19:33:37.289157   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 73/120
	I0829 19:33:38.290617   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 74/120
	I0829 19:33:39.292491   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 75/120
	I0829 19:33:40.294240   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 76/120
	I0829 19:33:41.295814   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 77/120
	I0829 19:33:42.297144   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 78/120
	I0829 19:33:43.298328   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 79/120
	I0829 19:33:44.300346   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 80/120
	I0829 19:33:45.302167   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 81/120
	I0829 19:33:46.303362   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 82/120
	I0829 19:33:47.304494   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 83/120
	I0829 19:33:48.306657   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 84/120
	I0829 19:33:49.308207   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 85/120
	I0829 19:33:50.309620   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 86/120
	I0829 19:33:51.310838   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 87/120
	I0829 19:33:52.312304   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 88/120
	I0829 19:33:53.313729   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 89/120
	I0829 19:33:54.315653   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 90/120
	I0829 19:33:55.317007   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 91/120
	I0829 19:33:56.318479   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 92/120
	I0829 19:33:57.319833   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 93/120
	I0829 19:33:58.321145   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 94/120
	I0829 19:33:59.323005   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 95/120
	I0829 19:34:00.324888   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 96/120
	I0829 19:34:01.326181   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 97/120
	I0829 19:34:02.327523   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 98/120
	I0829 19:34:03.329043   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 99/120
	I0829 19:34:04.331126   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 100/120
	I0829 19:34:05.332914   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 101/120
	I0829 19:34:06.334002   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 102/120
	I0829 19:34:07.335244   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 103/120
	I0829 19:34:08.336916   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 104/120
	I0829 19:34:09.338585   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 105/120
	I0829 19:34:10.339765   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 106/120
	I0829 19:34:11.341116   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 107/120
	I0829 19:34:12.342450   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 108/120
	I0829 19:34:13.344082   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 109/120
	I0829 19:34:14.346050   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 110/120
	I0829 19:34:15.347256   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 111/120
	I0829 19:34:16.348990   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 112/120
	I0829 19:34:17.350367   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 113/120
	I0829 19:34:18.351678   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 114/120
	I0829 19:34:19.353849   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 115/120
	I0829 19:34:20.355224   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 116/120
	I0829 19:34:21.356451   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 117/120
	I0829 19:34:22.357746   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 118/120
	I0829 19:34:23.359055   38412 main.go:141] libmachine: (ha-505269-m04) Waiting for machine to stop 119/120
	I0829 19:34:24.360194   38412 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0829 19:34:24.360240   38412 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0829 19:34:24.362329   38412 out.go:201] 
	W0829 19:34:24.363683   38412 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0829 19:34:24.363695   38412 out.go:270] * 
	* 
	W0829 19:34:24.366045   38412 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 19:34:24.367235   38412 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-505269 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr: exit status 3 (18.929200276s)

                                                
                                                
-- stdout --
	ha-505269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505269-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 19:34:24.414393   38858 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:34:24.414671   38858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:34:24.414684   38858 out.go:358] Setting ErrFile to fd 2...
	I0829 19:34:24.414688   38858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:34:24.414891   38858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:34:24.415110   38858 out.go:352] Setting JSON to false
	I0829 19:34:24.415132   38858 mustload.go:65] Loading cluster: ha-505269
	I0829 19:34:24.415182   38858 notify.go:220] Checking for updates...
	I0829 19:34:24.415635   38858 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:34:24.415658   38858 status.go:255] checking status of ha-505269 ...
	I0829 19:34:24.416092   38858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:24.416131   38858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:24.441937   38858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45653
	I0829 19:34:24.442443   38858 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:24.443023   38858 main.go:141] libmachine: Using API Version  1
	I0829 19:34:24.443046   38858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:24.443521   38858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:24.443749   38858 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:34:24.445352   38858 status.go:330] ha-505269 host status = "Running" (err=<nil>)
	I0829 19:34:24.445368   38858 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:34:24.445740   38858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:24.445810   38858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:24.460823   38858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45549
	I0829 19:34:24.461215   38858 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:24.461692   38858 main.go:141] libmachine: Using API Version  1
	I0829 19:34:24.461711   38858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:24.462010   38858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:24.462194   38858 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:34:24.464860   38858 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:34:24.465294   38858 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:34:24.465331   38858 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:34:24.465487   38858 host.go:66] Checking if "ha-505269" exists ...
	I0829 19:34:24.465806   38858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:24.465847   38858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:24.481492   38858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37439
	I0829 19:34:24.481958   38858 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:24.482435   38858 main.go:141] libmachine: Using API Version  1
	I0829 19:34:24.482455   38858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:24.482809   38858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:24.482975   38858 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:34:24.483187   38858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:34:24.483215   38858 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:34:24.486098   38858 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:34:24.486486   38858 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:34:24.486515   38858 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:34:24.486624   38858 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:34:24.486785   38858 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:34:24.486920   38858 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:34:24.487049   38858 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:34:24.576637   38858 ssh_runner.go:195] Run: systemctl --version
	I0829 19:34:24.583313   38858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:34:24.599356   38858 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:34:24.599388   38858 api_server.go:166] Checking apiserver status ...
	I0829 19:34:24.599417   38858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:24.615636   38858 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5044/cgroup
	W0829 19:34:24.625703   38858 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5044/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:34:24.625772   38858 ssh_runner.go:195] Run: ls
	I0829 19:34:24.630355   38858 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:34:24.636363   38858 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:34:24.636398   38858 status.go:422] ha-505269 apiserver status = Running (err=<nil>)
	I0829 19:34:24.636407   38858 status.go:257] ha-505269 status: &{Name:ha-505269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:34:24.636422   38858 status.go:255] checking status of ha-505269-m02 ...
	I0829 19:34:24.636824   38858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:24.636870   38858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:24.652101   38858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32933
	I0829 19:34:24.652513   38858 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:24.653055   38858 main.go:141] libmachine: Using API Version  1
	I0829 19:34:24.653072   38858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:24.653374   38858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:24.653585   38858 main.go:141] libmachine: (ha-505269-m02) Calling .GetState
	I0829 19:34:24.655342   38858 status.go:330] ha-505269-m02 host status = "Running" (err=<nil>)
	I0829 19:34:24.655358   38858 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:34:24.655624   38858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:24.655654   38858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:24.670195   38858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33963
	I0829 19:34:24.670600   38858 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:24.671117   38858 main.go:141] libmachine: Using API Version  1
	I0829 19:34:24.671135   38858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:24.671478   38858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:24.671672   38858 main.go:141] libmachine: (ha-505269-m02) Calling .GetIP
	I0829 19:34:24.674498   38858 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:34:24.675006   38858 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:29:19 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:34:24.675026   38858 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:34:24.675171   38858 host.go:66] Checking if "ha-505269-m02" exists ...
	I0829 19:34:24.675462   38858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:24.675495   38858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:24.689992   38858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36407
	I0829 19:34:24.690393   38858 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:24.690940   38858 main.go:141] libmachine: Using API Version  1
	I0829 19:34:24.690960   38858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:24.691282   38858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:24.691467   38858 main.go:141] libmachine: (ha-505269-m02) Calling .DriverName
	I0829 19:34:24.691680   38858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:34:24.691703   38858 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHHostname
	I0829 19:34:24.694180   38858 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:34:24.694633   38858 main.go:141] libmachine: (ha-505269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:ef:8c", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:29:19 +0000 UTC Type:0 Mac:52:54:00:8f:ef:8c Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-505269-m02 Clientid:01:52:54:00:8f:ef:8c}
	I0829 19:34:24.694677   38858 main.go:141] libmachine: (ha-505269-m02) DBG | domain ha-505269-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:8f:ef:8c in network mk-ha-505269
	I0829 19:34:24.694773   38858 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHPort
	I0829 19:34:24.694953   38858 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHKeyPath
	I0829 19:34:24.695109   38858 main.go:141] libmachine: (ha-505269-m02) Calling .GetSSHUsername
	I0829 19:34:24.695262   38858 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m02/id_rsa Username:docker}
	I0829 19:34:24.779365   38858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:34:24.798471   38858 kubeconfig.go:125] found "ha-505269" server: "https://192.168.39.254:8443"
	I0829 19:34:24.798499   38858 api_server.go:166] Checking apiserver status ...
	I0829 19:34:24.798558   38858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:24.812936   38858 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1441/cgroup
	W0829 19:34:24.823907   38858 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1441/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:34:24.823961   38858 ssh_runner.go:195] Run: ls
	I0829 19:34:24.828491   38858 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 19:34:24.833528   38858 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 19:34:24.833547   38858 status.go:422] ha-505269-m02 apiserver status = Running (err=<nil>)
	I0829 19:34:24.833555   38858 status.go:257] ha-505269-m02 status: &{Name:ha-505269-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:34:24.833568   38858 status.go:255] checking status of ha-505269-m04 ...
	I0829 19:34:24.833872   38858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:24.833907   38858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:24.848591   38858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44759
	I0829 19:34:24.849047   38858 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:24.849556   38858 main.go:141] libmachine: Using API Version  1
	I0829 19:34:24.849584   38858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:24.849915   38858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:24.850083   38858 main.go:141] libmachine: (ha-505269-m04) Calling .GetState
	I0829 19:34:24.851564   38858 status.go:330] ha-505269-m04 host status = "Running" (err=<nil>)
	I0829 19:34:24.851579   38858 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:34:24.851864   38858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:24.851905   38858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:24.868406   38858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37219
	I0829 19:34:24.868819   38858 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:24.869309   38858 main.go:141] libmachine: Using API Version  1
	I0829 19:34:24.869326   38858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:24.869644   38858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:24.869865   38858 main.go:141] libmachine: (ha-505269-m04) Calling .GetIP
	I0829 19:34:24.873147   38858 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:34:24.873557   38858 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:31:51 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:34:24.873588   38858 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:34:24.873790   38858 host.go:66] Checking if "ha-505269-m04" exists ...
	I0829 19:34:24.874098   38858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:24.874139   38858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:24.890264   38858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43957
	I0829 19:34:24.890746   38858 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:24.891178   38858 main.go:141] libmachine: Using API Version  1
	I0829 19:34:24.891192   38858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:24.891512   38858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:24.891696   38858 main.go:141] libmachine: (ha-505269-m04) Calling .DriverName
	I0829 19:34:24.891900   38858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:34:24.891929   38858 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHHostname
	I0829 19:34:24.894945   38858 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:34:24.895420   38858 main.go:141] libmachine: (ha-505269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:46:e7", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:31:51 +0000 UTC Type:0 Mac:52:54:00:44:46:e7 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-505269-m04 Clientid:01:52:54:00:44:46:e7}
	I0829 19:34:24.895448   38858 main.go:141] libmachine: (ha-505269-m04) DBG | domain ha-505269-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:44:46:e7 in network mk-ha-505269
	I0829 19:34:24.895627   38858 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHPort
	I0829 19:34:24.895796   38858 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHKeyPath
	I0829 19:34:24.895933   38858 main.go:141] libmachine: (ha-505269-m04) Calling .GetSSHUsername
	I0829 19:34:24.896059   38858 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269-m04/id_rsa Username:docker}
	W0829 19:34:43.298710   38858 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	W0829 19:34:43.298778   38858 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0829 19:34:43.298794   38858 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0829 19:34:43.298804   38858 status.go:257] ha-505269-m04 status: &{Name:ha-505269-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0829 19:34:43.298827   38858 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr" : exit status 3
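The exit status 3 above traces to the last node checked: ha-505269-m04 is already down (the `ha-505269 stop -v=7` entry in the audit log below has no recorded end time), so the SSH dial to 192.168.39.101:22 fails with `connect: no route to host` and status reports Host:Error. A minimal Go sketch of that kind of reachability probe, for illustration only (this is not minikube's sshutil code; the function name and the 5-second timeout are assumptions):

```go
// Illustrative sketch, not minikube's sshutil: probe TCP port 22 before
// attempting an SSH session, so an unreachable node can be reported as
// Host:Error without opening a full SSH connection.
package main

import (
	"fmt"
	"net"
	"time"
)

// reachable is a hypothetical helper; addr and the error text below match
// the values seen in the log ("dial tcp 192.168.39.101:22: ... no route to host").
func reachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("node unreachable: %w", err)
	}
	conn.Close()
	return nil
}

func main() {
	if err := reachable("192.168.39.101:22", 5*time.Second); err != nil {
		fmt.Println(err) // matches the dial failure reported for ha-505269-m04
	}
}
```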
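One incidental detail in the status path: the `unable to find freezer cgroup` warnings for both control-plane nodes are benign. The lookup `egrep ^[0-9]+:freezer: /proc/<pid>/cgroup` only matches cgroup v1 entries; on a cgroup v2 guest, /proc/<pid>/cgroup typically contains a single `0::<path>` line, so the grep exits 1 and, as the log shows, the check falls back to probing the apiserver's /healthz endpoint, which returned 200 here. A rough sketch of such a health probe, under stated assumptions (this is not minikube's api_server.go; the function name, timeout, and TLS handling are simplified for brevity):

```go
// Illustrative sketch of an apiserver /healthz probe. Real code should
// verify the cluster CA rather than skipping TLS verification.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverHealthy is a hypothetical helper; the URL below is the HA VIP
// endpoint checked in the log above.
func apiserverHealthy(url string) bool {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK // the log shows "returned 200: ok"
}

func main() {
	fmt.Println(apiserverHealthy("https://192.168.39.254:8443/healthz"))
}
```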
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-505269 -n ha-505269
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-505269 logs -n 25: (1.74098419s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-505269 ssh -n ha-505269-m02 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m03_ha-505269-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m03:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04:/home/docker/cp-test_ha-505269-m03_ha-505269-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269-m04 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m03_ha-505269-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-505269 cp testdata/cp-test.txt                                                | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3454359662/001/cp-test_ha-505269-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269:/home/docker/cp-test_ha-505269-m04_ha-505269.txt                       |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269 sudo cat                                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m04_ha-505269.txt                                 |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m02:/home/docker/cp-test_ha-505269-m04_ha-505269-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269-m02 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m04_ha-505269-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m03:/home/docker/cp-test_ha-505269-m04_ha-505269-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n                                                                 | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | ha-505269-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-505269 ssh -n ha-505269-m03 sudo cat                                          | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | /home/docker/cp-test_ha-505269-m04_ha-505269-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-505269 node stop m02 -v=7                                                     | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-505269 node start m02 -v=7                                                    | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-505269 -v=7                                                           | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-505269 -v=7                                                                | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-505269 --wait=true -v=7                                                    | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:32 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-505269                                                                | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:32 UTC |                     |
	| node    | ha-505269 node delete m03 -v=7                                                   | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:32 UTC | 29 Aug 24 19:32 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-505269 stop -v=7                                                              | ha-505269 | jenkins | v1.33.1 | 29 Aug 24 19:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:27:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 19:27:34.156723   36600 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:27:34.156849   36600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:27:34.156859   36600 out.go:358] Setting ErrFile to fd 2...
	I0829 19:27:34.156865   36600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:27:34.157046   36600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:27:34.157600   36600 out.go:352] Setting JSON to false
	I0829 19:27:34.158498   36600 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4201,"bootTime":1724955453,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:27:34.158576   36600 start.go:139] virtualization: kvm guest
	I0829 19:27:34.160548   36600 out.go:177] * [ha-505269] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:27:34.162233   36600 notify.go:220] Checking for updates...
	I0829 19:27:34.162248   36600 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 19:27:34.163419   36600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:27:34.164653   36600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:27:34.166027   36600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:27:34.167349   36600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:27:34.168737   36600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:27:34.170463   36600 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:27:34.170596   36600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:27:34.171032   36600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:27:34.171080   36600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:27:34.186487   36600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33113
	I0829 19:27:34.186937   36600 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:27:34.187523   36600 main.go:141] libmachine: Using API Version  1
	I0829 19:27:34.187552   36600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:27:34.187851   36600 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:27:34.188053   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:27:34.224008   36600 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:27:34.225226   36600 start.go:297] selected driver: kvm2
	I0829 19:27:34.225241   36600 start.go:901] validating driver "kvm2" against &{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:27:34.225434   36600 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:27:34.225965   36600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:27:34.226074   36600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:27:34.241238   36600 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:27:34.241959   36600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:27:34.242031   36600 cni.go:84] Creating CNI manager for ""
	I0829 19:27:34.242046   36600 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0829 19:27:34.242110   36600 start.go:340] cluster config:
	{Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:27:34.242258   36600 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:27:34.244817   36600 out.go:177] * Starting "ha-505269" primary control-plane node in "ha-505269" cluster
	I0829 19:27:34.246083   36600 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:27:34.246116   36600 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:27:34.246128   36600 cache.go:56] Caching tarball of preloaded images
	I0829 19:27:34.246248   36600 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:27:34.246260   36600 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:27:34.246412   36600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/config.json ...
	I0829 19:27:34.246712   36600 start.go:360] acquireMachinesLock for ha-505269: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:27:34.246764   36600 start.go:364] duration metric: took 32.038µs to acquireMachinesLock for "ha-505269"
	I0829 19:27:34.246785   36600 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:27:34.246792   36600 fix.go:54] fixHost starting: 
	I0829 19:27:34.247071   36600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:27:34.247103   36600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:27:34.261275   36600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46881
	I0829 19:27:34.261670   36600 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:27:34.262094   36600 main.go:141] libmachine: Using API Version  1
	I0829 19:27:34.262113   36600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:27:34.262415   36600 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:27:34.262584   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:27:34.262728   36600 main.go:141] libmachine: (ha-505269) Calling .GetState
	I0829 19:27:34.264285   36600 fix.go:112] recreateIfNeeded on ha-505269: state=Running err=<nil>
	W0829 19:27:34.264306   36600 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:27:34.266085   36600 out.go:177] * Updating the running kvm2 "ha-505269" VM ...
	I0829 19:27:34.267235   36600 machine.go:93] provisionDockerMachine start ...
	I0829 19:27:34.267255   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:27:34.267446   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:27:34.269923   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.270388   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:27:34.270414   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.270481   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:27:34.270644   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.270772   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.270886   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:27:34.271027   36600 main.go:141] libmachine: Using SSH client type: native
	I0829 19:27:34.271246   36600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:27:34.271264   36600 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:27:34.389813   36600 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-505269
	
	I0829 19:27:34.389845   36600 main.go:141] libmachine: (ha-505269) Calling .GetMachineName
	I0829 19:27:34.390128   36600 buildroot.go:166] provisioning hostname "ha-505269"
	I0829 19:27:34.390150   36600 main.go:141] libmachine: (ha-505269) Calling .GetMachineName
	I0829 19:27:34.390333   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:27:34.393198   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.393649   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:27:34.393671   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.393833   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:27:34.394025   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.394277   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.394433   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:27:34.394642   36600 main.go:141] libmachine: Using SSH client type: native
	I0829 19:27:34.394793   36600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:27:34.394804   36600 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-505269 && echo "ha-505269" | sudo tee /etc/hostname
	I0829 19:27:34.529979   36600 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-505269
	
	I0829 19:27:34.530010   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:27:34.532863   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.533198   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:27:34.533231   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.533401   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:27:34.533601   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.533788   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.533951   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:27:34.534151   36600 main.go:141] libmachine: Using SSH client type: native
	I0829 19:27:34.534369   36600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:27:34.534391   36600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-505269' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-505269/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-505269' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:27:34.667723   36600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:27:34.667758   36600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 19:27:34.667781   36600 buildroot.go:174] setting up certificates
	I0829 19:27:34.667790   36600 provision.go:84] configureAuth start
	I0829 19:27:34.667814   36600 main.go:141] libmachine: (ha-505269) Calling .GetMachineName
	I0829 19:27:34.668122   36600 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:27:34.670548   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.670933   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:27:34.670961   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.671144   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:27:34.673229   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.673568   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:27:34.673590   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.673717   36600 provision.go:143] copyHostCerts
	I0829 19:27:34.673742   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:27:34.673776   36600 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 19:27:34.673791   36600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:27:34.673873   36600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 19:27:34.673950   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:27:34.673967   36600 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 19:27:34.673973   36600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:27:34.673999   36600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 19:27:34.674037   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:27:34.674052   36600 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 19:27:34.674058   36600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:27:34.674078   36600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 19:27:34.674129   36600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.ha-505269 san=[127.0.0.1 192.168.39.56 ha-505269 localhost minikube]
	I0829 19:27:34.783941   36600 provision.go:177] copyRemoteCerts
	I0829 19:27:34.783990   36600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:27:34.784009   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:27:34.786636   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.786986   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:27:34.787011   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.787211   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:27:34.787382   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.787519   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:27:34.787629   36600 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:27:34.877523   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 19:27:34.877591   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0829 19:27:34.904867   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 19:27:34.904945   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:27:34.931967   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 19:27:34.932034   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 19:27:34.959886   36600 provision.go:87] duration metric: took 292.081861ms to configureAuth
	I0829 19:27:34.959917   36600 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:27:34.960210   36600 config.go:182] Loaded profile config "ha-505269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:27:34.960288   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:27:34.962964   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.963312   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:27:34.963350   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:27:34.963530   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:27:34.963712   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.963866   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:27:34.963977   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:27:34.964109   36600 main.go:141] libmachine: Using SSH client type: native
	I0829 19:27:34.964307   36600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:27:34.964332   36600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:29:05.699502   36600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:29:05.699530   36600 machine.go:96] duration metric: took 1m31.432282179s to provisionDockerMachine
	I0829 19:29:05.699542   36600 start.go:293] postStartSetup for "ha-505269" (driver="kvm2")
	I0829 19:29:05.699554   36600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:29:05.699579   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:29:05.699956   36600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:29:05.700009   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:29:05.702886   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:05.703330   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:29:05.703357   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:05.703515   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:29:05.703694   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:29:05.703912   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:29:05.704047   36600 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:29:05.794293   36600 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:29:05.798519   36600 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:29:05.798550   36600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 19:29:05.798609   36600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 19:29:05.798709   36600 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 19:29:05.798720   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /etc/ssl/certs/183612.pem
	I0829 19:29:05.798824   36600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:29:05.807793   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:29:05.831937   36600 start.go:296] duration metric: took 132.38193ms for postStartSetup
	I0829 19:29:05.831977   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:29:05.832300   36600 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0829 19:29:05.832324   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:29:05.835127   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:05.835516   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:29:05.835542   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:05.835716   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:29:05.835895   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:29:05.836033   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:29:05.836169   36600 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	W0829 19:29:05.929415   36600 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0829 19:29:05.929437   36600 fix.go:56] duration metric: took 1m31.68264588s for fixHost
	I0829 19:29:05.929457   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:29:05.931990   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:05.932335   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:29:05.932372   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:05.932534   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:29:05.932718   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:29:05.932881   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:29:05.933018   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:29:05.933178   36600 main.go:141] libmachine: Using SSH client type: native
	I0829 19:29:05.933375   36600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0829 19:29:05.933386   36600 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:29:06.047355   36600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724959746.015956151
	
	I0829 19:29:06.047385   36600 fix.go:216] guest clock: 1724959746.015956151
	I0829 19:29:06.047397   36600 fix.go:229] Guest: 2024-08-29 19:29:06.015956151 +0000 UTC Remote: 2024-08-29 19:29:05.929444262 +0000 UTC m=+91.807712998 (delta=86.511889ms)
	I0829 19:29:06.047423   36600 fix.go:200] guest clock delta is within tolerance: 86.511889ms
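
The fix.go lines above implement the guest clock check: `date +%s.%N` is run over SSH and the result is compared against the host clock (here the delta was 86.5ms). A minimal sketch of the comparison, assuming the epoch string has already been captured; the 2s tolerance is illustrative, not minikube's actual constant, and the float parse loses sub-microsecond precision:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // parseGuestClock turns `date +%s.%N` output such as
    // "1724959746.015956151" into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
        f, err := strconv.ParseFloat(s, 64)
        if err != nil {
            return time.Time{}, err
        }
        sec := int64(f)
        nsec := int64((f - float64(sec)) * 1e9)
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1724959746.015956151")
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        // illustrative tolerance; the run above accepted an 86.5ms delta
        if delta < 2*time.Second {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        }
    }
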
	I0829 19:29:06.047431   36600 start.go:83] releasing machines lock for "ha-505269", held for 1m31.800652132s
	I0829 19:29:06.047459   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:29:06.047699   36600 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:29:06.050421   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:06.050765   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:29:06.050792   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:06.050941   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:29:06.051461   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:29:06.051592   36600 main.go:141] libmachine: (ha-505269) Calling .DriverName
	I0829 19:29:06.051704   36600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:29:06.051751   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:29:06.051781   36600 ssh_runner.go:195] Run: cat /version.json
	I0829 19:29:06.051804   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHHostname
	I0829 19:29:06.054178   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:06.054611   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:29:06.054637   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:06.054654   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:06.054758   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:29:06.054923   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:29:06.055056   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:29:06.055091   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:29:06.055110   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:06.055337   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHPort
	I0829 19:29:06.055356   36600 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:29:06.055458   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHKeyPath
	I0829 19:29:06.055586   36600 main.go:141] libmachine: (ha-505269) Calling .GetSSHUsername
	I0829 19:29:06.055776   36600 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/ha-505269/id_rsa Username:docker}
	I0829 19:29:06.135564   36600 ssh_runner.go:195] Run: systemctl --version
	I0829 19:29:06.163034   36600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:29:06.322676   36600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:29:06.332884   36600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:29:06.332941   36600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:29:06.341985   36600 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0829 19:29:06.342007   36600 start.go:495] detecting cgroup driver to use...
	I0829 19:29:06.342065   36600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:29:06.358776   36600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:29:06.372459   36600 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:29:06.372514   36600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:29:06.386054   36600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:29:06.400023   36600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:29:06.554184   36600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:29:06.698310   36600 docker.go:233] disabling docker service ...
	I0829 19:29:06.698400   36600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:29:06.718341   36600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:29:06.732086   36600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:29:06.879309   36600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:29:07.027409   36600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
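
The sequence above swaps container runtimes: cri-docker and docker are stopped, disabled, and masked so that only CRI-O answers the CRI socket. A minimal local sketch of the same systemctl sequence (the log runs it over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // stop, disable, and mask the Docker units, as in docker.go:233
        for _, args := range [][]string{
            {"systemctl", "stop", "-f", "docker.socket"},
            {"systemctl", "stop", "-f", "docker.service"},
            {"systemctl", "disable", "docker.socket"},
            {"systemctl", "mask", "docker.service"},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("%v: %v: %s\n", args, err, out)
            }
        }
    }
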
	I0829 19:29:07.044803   36600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:29:07.066647   36600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:29:07.066710   36600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:29:07.079187   36600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:29:07.079243   36600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:29:07.091814   36600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:29:07.103966   36600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:29:07.114788   36600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:29:07.125798   36600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:29:07.137276   36600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:29:07.148057   36600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:29:07.158962   36600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:29:07.168481   36600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:29:07.178254   36600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:29:07.323671   36600 ssh_runner.go:195] Run: sudo systemctl restart crio
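
Lines crio.go:59-70 above show how minikube rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: the pause image is pinned and the cgroup manager is switched to cgroupfs to match the kubelet, after which systemd is reloaded and CRI-O restarted. A minimal sketch, assuming local execution and showing only two of the sed edits:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        edits := []string{
            // pin the pause image, as in the crio.go:59 line above
            `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`,
            // match the kubelet's cgroupfs driver, as in crio.go:70
            `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
        }
        for _, e := range edits {
            if out, err := exec.Command("sudo", "sed", "-i", e, conf).CombinedOutput(); err != nil {
                fmt.Printf("sed failed: %v: %s\n", err, out)
            }
        }
        // reload units and restart the runtime, mirroring the systemctl runs above
        _ = exec.Command("sudo", "systemctl", "daemon-reload").Run()
        _ = exec.Command("sudo", "systemctl", "restart", "crio").Run()
    }
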
	I0829 19:29:07.554373   36600 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:29:07.554441   36600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:29:07.562497   36600 start.go:563] Will wait 60s for crictl version
	I0829 19:29:07.562574   36600 ssh_runner.go:195] Run: which crictl
	I0829 19:29:07.566461   36600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:29:07.611735   36600 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:29:07.611819   36600 ssh_runner.go:195] Run: crio --version
	I0829 19:29:07.647665   36600 ssh_runner.go:195] Run: crio --version
	I0829 19:29:07.682763   36600 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:29:07.684227   36600 main.go:141] libmachine: (ha-505269) Calling .GetIP
	I0829 19:29:07.687101   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:07.687526   36600 main.go:141] libmachine: (ha-505269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:63:25", ip: ""} in network mk-ha-505269: {Iface:virbr1 ExpiryTime:2024-08-29 20:17:42 +0000 UTC Type:0 Mac:52:54:00:5e:63:25 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-505269 Clientid:01:52:54:00:5e:63:25}
	I0829 19:29:07.687552   36600 main.go:141] libmachine: (ha-505269) DBG | domain ha-505269 has defined IP address 192.168.39.56 and MAC address 52:54:00:5e:63:25 in network mk-ha-505269
	I0829 19:29:07.687760   36600 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:29:07.692546   36600 kubeadm.go:883] updating cluster {Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:29:07.692719   36600 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:29:07.692769   36600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:29:07.740649   36600 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:29:07.740670   36600 crio.go:433] Images already preloaded, skipping extraction
	I0829 19:29:07.740713   36600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:29:07.778175   36600 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:29:07.778204   36600 cache_images.go:84] Images are preloaded, skipping loading
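
The preload check above shells out to `crictl images --output json` and concludes that every required image is already present, so no tarball extraction or image loading is needed. A minimal sketch of that check; the JSON field names and the two image tags are assumptions based on crictl's usual output shape, not taken from this log:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList assumes crictl's JSON shape: {"images":[{"repoTags":[...]}]};
    // verify against your crictl version.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Println(err)
            return
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // illustrative tags a v1.31.0 preload would be expected to carry
        for _, want := range []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/etcd:3.5.15-0"} {
            fmt.Println(want, "preloaded:", have[want])
        }
    }
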
	I0829 19:29:07.778219   36600 kubeadm.go:934] updating node { 192.168.39.56 8443 v1.31.0 crio true true} ...
	I0829 19:29:07.778320   36600 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-505269 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:29:07.778382   36600 ssh_runner.go:195] Run: crio config
	I0829 19:29:07.837416   36600 cni.go:84] Creating CNI manager for ""
	I0829 19:29:07.837433   36600 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0829 19:29:07.837443   36600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:29:07.837463   36600 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.56 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-505269 NodeName:ha-505269 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:29:07.837582   36600 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-505269"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
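
The kubeadm config printed above is rendered from the option struct logged at kubeadm.go:181. A minimal sketch of how such a document can be produced with text/template; the struct and field names here are hypothetical stand-ins, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // opts is a hypothetical subset of the kubeadm options in the log.
    type opts struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // values taken from the run above
        _ = t.Execute(os.Stdout, opts{"192.168.39.56", 8443, "ha-505269"})
    }
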
	
	I0829 19:29:07.837605   36600 kube-vip.go:115] generating kube-vip config ...
	I0829 19:29:07.837642   36600 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 19:29:07.851620   36600 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 19:29:07.851701   36600 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
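
The kube-vip manifest above is a static pod: minikube only has to drop it into the kubelet's staticPodPath (/etc/kubernetes/manifests per the KubeletConfiguration earlier), and the kubelet starts it with no API server involved, which is what makes it usable to bootstrap the control-plane VIP 192.168.39.254. A minimal sketch of that final write, assuming the rendered manifest bytes are in hand:

    package main

    import (
        "fmt"
        "os"
    )

    // writeStaticPod drops the rendered manifest where the kubelet's
    // staticPodPath will pick it up; static pods need no API server.
    func writeStaticPod(manifest []byte) error {
        return os.WriteFile("/etc/kubernetes/manifests/kube-vip.yaml", manifest, 0o644)
    }

    func main() {
        if err := writeStaticPod([]byte("# rendered kube-vip pod spec elided\n")); err != nil {
            fmt.Println(err)
        }
    }
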
	I0829 19:29:07.851748   36600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:29:07.863698   36600 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:29:07.863758   36600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0829 19:29:07.874878   36600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0829 19:29:07.893661   36600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:29:07.912030   36600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0829 19:29:07.930716   36600 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0829 19:29:07.949391   36600 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 19:29:07.955001   36600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:29:08.115922   36600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:29:08.131455   36600 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269 for IP: 192.168.39.56
	I0829 19:29:08.131477   36600 certs.go:194] generating shared ca certs ...
	I0829 19:29:08.131494   36600 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:29:08.131647   36600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 19:29:08.131719   36600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 19:29:08.131733   36600 certs.go:256] generating profile certs ...
	I0829 19:29:08.131827   36600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/client.key
	I0829 19:29:08.131861   36600 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.94223113
	I0829 19:29:08.131882   36600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.94223113 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.56 192.168.39.68 192.168.39.178 192.168.39.254]
	I0829 19:29:08.191691   36600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.94223113 ...
	I0829 19:29:08.191729   36600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.94223113: {Name:mkabb73b87dfcf8f7a0f84868f46b5d3ff79bef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:29:08.191916   36600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.94223113 ...
	I0829 19:29:08.191933   36600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.94223113: {Name:mk3c718e9f0565ec73d9d975ba744e6c0d2fc82e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:29:08.192027   36600 certs.go:381] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt.94223113 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt
	I0829 19:29:08.192210   36600 certs.go:385] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key.94223113 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key
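
The crypto.go lines above issue the apiserver serving cert with IP SANs covering every control-plane node plus the HA VIP 192.168.39.254, so clients can validate the cert no matter which endpoint answers. A minimal self-signed sketch with the same SAN list (abbreviated); minikube signs with its minikubeCA instead:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // SANs from the log, abbreviated: service IP, localhost, node IP, HA VIP
        ips := []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
            net.ParseIP("192.168.39.56"), net.ParseIP("192.168.39.254"),
        }
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            IPAddresses:  ips,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // self-signed for brevity; minikube signs with its cluster CA
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Println("issued cert,", len(der), "bytes of DER")
    }
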
	I0829 19:29:08.192375   36600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key
	I0829 19:29:08.192391   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 19:29:08.192409   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 19:29:08.192425   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 19:29:08.192449   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 19:29:08.192469   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 19:29:08.192488   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 19:29:08.192510   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 19:29:08.192528   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 19:29:08.192635   36600 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 19:29:08.192685   36600 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 19:29:08.192700   36600 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 19:29:08.192737   36600 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 19:29:08.192771   36600 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:29:08.192801   36600 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 19:29:08.192859   36600 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:29:08.192899   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem -> /usr/share/ca-certificates/18361.pem
	I0829 19:29:08.192919   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /usr/share/ca-certificates/183612.pem
	I0829 19:29:08.192938   36600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:29:08.193475   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:29:08.219502   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 19:29:08.244853   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:29:08.268655   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:29:08.293993   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 19:29:08.318182   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 19:29:08.342186   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:29:08.366997   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/ha-505269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:29:08.391509   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 19:29:08.415580   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 19:29:08.440219   36600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:29:08.464486   36600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:29:08.480957   36600 ssh_runner.go:195] Run: openssl version
	I0829 19:29:08.486815   36600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 19:29:08.497254   36600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 19:29:08.501791   36600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 19:29:08.501850   36600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 19:29:08.507608   36600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 19:29:08.516752   36600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 19:29:08.527128   36600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 19:29:08.531511   36600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 19:29:08.531559   36600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 19:29:08.537309   36600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:29:08.546501   36600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:29:08.558230   36600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:29:08.562860   36600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:29:08.562917   36600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:29:08.568573   36600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
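
The openssl/ln pairs above install each CA into the system trust store: OpenSSL looks certificates up by <subject-hash>.0 symlinks under /etc/ssl/certs, so each PEM is hashed and then linked (51391683.0, 3ec20f2e.0, b5213941.0 in this run). A minimal sketch of one hash-and-link step, assuming openssl is on PATH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func linkCert(pem string) error {
        // `openssl x509 -hash -noout` prints the subject hash OpenSSL uses
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // refresh a stale link, mirroring `ln -fs`
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
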
	I0829 19:29:08.578091   36600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:29:08.582888   36600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:29:08.588740   36600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:29:08.594682   36600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:29:08.600929   36600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:29:08.606663   36600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:29:08.612797   36600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
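
The six `-checkend 86400` probes above verify that none of the existing control-plane certs expires within the next 24 hours before they are reused. The same check in pure Go, as a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the cert at path is still valid d from now,
    // matching the semantics of `openssl x509 -checkend`.
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }
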
	I0829 19:29:08.618384   36600 kubeadm.go:392] StartCluster: {Name:ha-505269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-505269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.178 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:29:08.618492   36600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:29:08.618555   36600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:29:08.655913   36600 cri.go:89] found id: "f00fd6c41ac07df2a221d4331e2b3dccd2e358c7d1676f6f7bdd1ed7b4cf5d4b"
	I0829 19:29:08.655932   36600 cri.go:89] found id: "f85180dc495341fe79bbb2234d464d91f1685b07ea2bee68a830140b5cd771fa"
	I0829 19:29:08.655936   36600 cri.go:89] found id: "32266f173333a67786c0fd1a3c4a4730db0b274c0b431830328de92ff6bd09b8"
	I0829 19:29:08.655938   36600 cri.go:89] found id: "7c1f1381650d141915ee7ad8a0e9d8363da549a9fdbb60a03bda17e169672eb1"
	I0829 19:29:08.655941   36600 cri.go:89] found id: "29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75"
	I0829 19:29:08.655944   36600 cri.go:89] found id: "1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd"
	I0829 19:29:08.655946   36600 cri.go:89] found id: "f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604"
	I0829 19:29:08.655949   36600 cri.go:89] found id: "9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c"
	I0829 19:29:08.655951   36600 cri.go:89] found id: "066b1cbdd3861a8cf28333b764086e47ace1987acec778ff2b2d0aa1973af37a"
	I0829 19:29:08.655956   36600 cri.go:89] found id: "52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc"
	I0829 19:29:08.655959   36600 cri.go:89] found id: "960e616b3c0582c8ccb0ec4fea8d02c0e2bfd2d0f81a273ac203cbf5eb6e4d6c"
	I0829 19:29:08.655975   36600 cri.go:89] found id: "d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0"
	I0829 19:29:08.655980   36600 cri.go:89] found id: "65b2531e5990db96ea59b71af6b9036995e91f182477ba5d381b96877d38e296"
	I0829 19:29:08.655983   36600 cri.go:89] found id: ""
	I0829 19:29:08.656032   36600 ssh_runner.go:195] Run: sudo runc list -f json
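
The cri.go lines above enumerate the paused kube-system containers by ID via crictl with a pod-namespace label filter, then cross-check against `runc list`. A minimal sketch of the crictl half:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // list all container IDs whose pod lives in kube-system, as in cri.go:54
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, id := range strings.Fields(string(out)) {
            fmt.Println("found id:", id)
        }
    }
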
	
	
	==> CRI-O <==
	Aug 29 19:34:43 ha-505269 crio[3719]: time="2024-08-29 19:34:43.918038971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960083918012384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6f0f098-b16a-4b90-8b6c-a0e680e40906 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:34:43 ha-505269 crio[3719]: time="2024-08-29 19:34:43.918506778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22139910-adbc-4333-830e-8ec29e246937 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:34:43 ha-505269 crio[3719]: time="2024-08-29 19:34:43.918576041Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22139910-adbc-4333-830e-8ec29e246937 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:34:43 ha-505269 crio[3719]: time="2024-08-29 19:34:43.919254666Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6a1f0f0043f79c5370dc7e5b665727737742deca0f382bca2810647b1c9c6443,PodSandboxId:d2d78cd683b75429a8fb19a8d336313277432fb28b10e5755db7a3f0044414b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724959926378655727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24eca9daeddb01201f9b34221347d5cb309931c118348968da060d6c1bf376c,PodSandboxId:2ba432bbcf20f2894079e374a21869e4c3b459312f9dfc9aa495e6ee0daa8237,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959819364656134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b090b02a2f6af71741e9e17cccfeb77b790febb24ce1fb5ec191d971424b5378,PodSandboxId:2efeb7211db1abde8ccda092af846f805b61900f51b39ffc9a91cd168b949318,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959798361283228,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01f810a9780487cf942a0a39907de8a94f02a13ddcaa83a795f9632463c9f407,PodSandboxId:4330e2b71d9ee5233683cbf4c7d7de37788b9165bd248254bd022ecd32a0430d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724959785684229787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bb7e864296eb98a7fa1d4cf9729fd6fe8a035d7acdcb8024f1172b92b20424,PodSandboxId:2ba432bbcf20f2894079e374a21869e4c3b459312f9dfc9aa495e6ee0daa8237,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959785014940911,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3c9479f487a0ae9282327db1e1b6fa1a3f31f7ef4354f10083d6f93334aa13,PodSandboxId:3506c46b68a6dd7a917e5603d36b8df5d715969c33b218c84522da041fa4ba35,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724959768134530987,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf717028309b4303be9380ab50d8f89a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de469f77c1414c9aeb375f6fab95f9bf73d297cf39f8e3032d44a27fce129160,PodSandboxId:d2d78cd683b75429a8fb19a8d336313277432fb28b10e5755db7a3f0044414b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724959753005180580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:392a8fdf39b58ef48526e9b0ee737518780d384adbf9a00f6d58e386f06aca86,PodSandboxId:feea7addc66c506d27e991c62c20bda17a854abc868774c641d9459da7c0a1f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959752578046557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3f6edccac07af4c68e2df70de1901a354b7e466a6480aa00a8c242372a1489,PodSandboxId:01947ff9720b96eec71aef069d1e00630167d535f0d83a9d399723674f8fc2e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959752426178642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.conta
iner.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ab99da1b1a7e8939462b754993d6616d0a80dc4262fe6f7d4d69c339c4a78c,PodSandboxId:d20310623d52ab5052b16cdf33028f557b5a6c2baf8a0a27a7a866b18ff8cace,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959752399739719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",
\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d575c067854c198c0503704ca102d4bb9e9823a228df479b19a5ef172e162fc8,PodSandboxId:d42fe0859024616c82f93e163ef976e1501b5a23cca570ea6584dd321bdb15a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724959752418561136,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b7c110f2f7341eb54db4c08f1a782899e9205e069ca96dfbf295e38d3fc601,PodSandboxId:1cd9911d50578a09d9516f4ec4c519c25999741911a9df06637349a98a8b5dac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724959752194158517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ea370e88590ef75080636cd247905edf96c5c382c3fee2a7cf22b8c428407c,PodSandboxId:2efeb7211db1abde8ccda092af846f805b61900f51b39ffc9a91cd168b949318,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959752332592108,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02e
b995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8547993a84026cfd6c27f9f068228b1ea619a9dee19e99e3b8d07b22b23584,PodSandboxId:60334f5771834dbdd2d31990ab7c7f4f3d2b23cb4846dd94949e33ab8f9dd2d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959752074925751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Anno
tations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed600112468d9762d0ad5c0e554ca486c5cfc592114271d25cd84f5c09187db,PodSandboxId:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724959256520864398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75,PodSandboxId:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959112076742735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd,PodSandboxId:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959112071270130,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604,PodSandboxId:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724959100077712568,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c,PodSandboxId:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959097303124595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc,PodSandboxId:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959085808573312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0,PodSandboxId:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724959085673137503,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22139910-adbc-4333-830e-8ec29e246937 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:34:43 ha-505269 crio[3719]: time="2024-08-29 19:34:43.962779961Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afeee558-24cb-45f3-b447-77073f1117d2 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:34:43 ha-505269 crio[3719]: time="2024-08-29 19:34:43.962945429Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afeee558-24cb-45f3-b447-77073f1117d2 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:34:43 ha-505269 crio[3719]: time="2024-08-29 19:34:43.964199277Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6aa30ff-d722-4e98-aaac-5e146e989ff8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:34:43 ha-505269 crio[3719]: time="2024-08-29 19:34:43.964644743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960083964621416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6aa30ff-d722-4e98-aaac-5e146e989ff8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:34:43 ha-505269 crio[3719]: time="2024-08-29 19:34:43.965192731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1e4b112-19f3-4f05-bbfc-bc7381e67c90 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:34:43 ha-505269 crio[3719]: time="2024-08-29 19:34:43.965264881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1e4b112-19f3-4f05-bbfc-bc7381e67c90 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:34:43 ha-505269 crio[3719]: time="2024-08-29 19:34:43.965699951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6a1f0f0043f79c5370dc7e5b665727737742deca0f382bca2810647b1c9c6443,PodSandboxId:d2d78cd683b75429a8fb19a8d336313277432fb28b10e5755db7a3f0044414b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724959926378655727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24eca9daeddb01201f9b34221347d5cb309931c118348968da060d6c1bf376c,PodSandboxId:2ba432bbcf20f2894079e374a21869e4c3b459312f9dfc9aa495e6ee0daa8237,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959819364656134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b090b02a2f6af71741e9e17cccfeb77b790febb24ce1fb5ec191d971424b5378,PodSandboxId:2efeb7211db1abde8ccda092af846f805b61900f51b39ffc9a91cd168b949318,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959798361283228,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01f810a9780487cf942a0a39907de8a94f02a13ddcaa83a795f9632463c9f407,PodSandboxId:4330e2b71d9ee5233683cbf4c7d7de37788b9165bd248254bd022ecd32a0430d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724959785684229787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bb7e864296eb98a7fa1d4cf9729fd6fe8a035d7acdcb8024f1172b92b20424,PodSandboxId:2ba432bbcf20f2894079e374a21869e4c3b459312f9dfc9aa495e6ee0daa8237,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959785014940911,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3c9479f487a0ae9282327db1e1b6fa1a3f31f7ef4354f10083d6f93334aa13,PodSandboxId:3506c46b68a6dd7a917e5603d36b8df5d715969c33b218c84522da041fa4ba35,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724959768134530987,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf717028309b4303be9380ab50d8f89a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de469f77c1414c9aeb375f6fab95f9bf73d297cf39f8e3032d44a27fce129160,PodSandboxId:d2d78cd683b75429a8fb19a8d336313277432fb28b10e5755db7a3f0044414b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724959753005180580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:392a8fdf39b58ef48526e9b0ee737518780d384adbf9a00f6d58e386f06aca86,PodSandboxId:feea7addc66c506d27e991c62c20bda17a854abc868774c641d9459da7c0a1f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959752578046557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3f6edccac07af4c68e2df70de1901a354b7e466a6480aa00a8c242372a1489,PodSandboxId:01947ff9720b96eec71aef069d1e00630167d535f0d83a9d399723674f8fc2e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959752426178642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.conta
iner.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ab99da1b1a7e8939462b754993d6616d0a80dc4262fe6f7d4d69c339c4a78c,PodSandboxId:d20310623d52ab5052b16cdf33028f557b5a6c2baf8a0a27a7a866b18ff8cace,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959752399739719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",
\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d575c067854c198c0503704ca102d4bb9e9823a228df479b19a5ef172e162fc8,PodSandboxId:d42fe0859024616c82f93e163ef976e1501b5a23cca570ea6584dd321bdb15a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724959752418561136,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b7c110f2f7341eb54db4c08f1a782899e9205e069ca96dfbf295e38d3fc601,PodSandboxId:1cd9911d50578a09d9516f4ec4c519c25999741911a9df06637349a98a8b5dac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724959752194158517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ea370e88590ef75080636cd247905edf96c5c382c3fee2a7cf22b8c428407c,PodSandboxId:2efeb7211db1abde8ccda092af846f805b61900f51b39ffc9a91cd168b949318,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959752332592108,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02e
b995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8547993a84026cfd6c27f9f068228b1ea619a9dee19e99e3b8d07b22b23584,PodSandboxId:60334f5771834dbdd2d31990ab7c7f4f3d2b23cb4846dd94949e33ab8f9dd2d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959752074925751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Anno
tations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed600112468d9762d0ad5c0e554ca486c5cfc592114271d25cd84f5c09187db,PodSandboxId:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724959256520864398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75,PodSandboxId:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959112076742735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd,PodSandboxId:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959112071270130,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604,PodSandboxId:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724959100077712568,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c,PodSandboxId:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959097303124595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc,PodSandboxId:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959085808573312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0,PodSandboxId:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724959085673137503,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1e4b112-19f3-4f05-bbfc-bc7381e67c90 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:34:44 ha-505269 crio[3719]: time="2024-08-29 19:34:44.009676061Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38ed9133-1dfd-47ef-9154-cea3c422f1c2 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:34:44 ha-505269 crio[3719]: time="2024-08-29 19:34:44.009869238Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38ed9133-1dfd-47ef-9154-cea3c422f1c2 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:34:44 ha-505269 crio[3719]: time="2024-08-29 19:34:44.011341678Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af8e6e97-5de3-4264-aff7-f7618cdd250c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:34:44 ha-505269 crio[3719]: time="2024-08-29 19:34:44.011795190Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960084011773640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af8e6e97-5de3-4264-aff7-f7618cdd250c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:34:44 ha-505269 crio[3719]: time="2024-08-29 19:34:44.012367786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f63939f-0975-48a7-9802-3a8500d365e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:34:44 ha-505269 crio[3719]: time="2024-08-29 19:34:44.012421466Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f63939f-0975-48a7-9802-3a8500d365e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:34:44 ha-505269 crio[3719]: time="2024-08-29 19:34:44.012835838Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6a1f0f0043f79c5370dc7e5b665727737742deca0f382bca2810647b1c9c6443,PodSandboxId:d2d78cd683b75429a8fb19a8d336313277432fb28b10e5755db7a3f0044414b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724959926378655727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24eca9daeddb01201f9b34221347d5cb309931c118348968da060d6c1bf376c,PodSandboxId:2ba432bbcf20f2894079e374a21869e4c3b459312f9dfc9aa495e6ee0daa8237,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959819364656134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b090b02a2f6af71741e9e17cccfeb77b790febb24ce1fb5ec191d971424b5378,PodSandboxId:2efeb7211db1abde8ccda092af846f805b61900f51b39ffc9a91cd168b949318,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959798361283228,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01f810a9780487cf942a0a39907de8a94f02a13ddcaa83a795f9632463c9f407,PodSandboxId:4330e2b71d9ee5233683cbf4c7d7de37788b9165bd248254bd022ecd32a0430d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724959785684229787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bb7e864296eb98a7fa1d4cf9729fd6fe8a035d7acdcb8024f1172b92b20424,PodSandboxId:2ba432bbcf20f2894079e374a21869e4c3b459312f9dfc9aa495e6ee0daa8237,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959785014940911,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3c9479f487a0ae9282327db1e1b6fa1a3f31f7ef4354f10083d6f93334aa13,PodSandboxId:3506c46b68a6dd7a917e5603d36b8df5d715969c33b218c84522da041fa4ba35,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724959768134530987,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf717028309b4303be9380ab50d8f89a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de469f77c1414c9aeb375f6fab95f9bf73d297cf39f8e3032d44a27fce129160,PodSandboxId:d2d78cd683b75429a8fb19a8d336313277432fb28b10e5755db7a3f0044414b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724959753005180580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:392a8fdf39b58ef48526e9b0ee737518780d384adbf9a00f6d58e386f06aca86,PodSandboxId:feea7addc66c506d27e991c62c20bda17a854abc868774c641d9459da7c0a1f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959752578046557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3f6edccac07af4c68e2df70de1901a354b7e466a6480aa00a8c242372a1489,PodSandboxId:01947ff9720b96eec71aef069d1e00630167d535f0d83a9d399723674f8fc2e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959752426178642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.conta
iner.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ab99da1b1a7e8939462b754993d6616d0a80dc4262fe6f7d4d69c339c4a78c,PodSandboxId:d20310623d52ab5052b16cdf33028f557b5a6c2baf8a0a27a7a866b18ff8cace,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959752399739719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",
\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d575c067854c198c0503704ca102d4bb9e9823a228df479b19a5ef172e162fc8,PodSandboxId:d42fe0859024616c82f93e163ef976e1501b5a23cca570ea6584dd321bdb15a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724959752418561136,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b7c110f2f7341eb54db4c08f1a782899e9205e069ca96dfbf295e38d3fc601,PodSandboxId:1cd9911d50578a09d9516f4ec4c519c25999741911a9df06637349a98a8b5dac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724959752194158517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ea370e88590ef75080636cd247905edf96c5c382c3fee2a7cf22b8c428407c,PodSandboxId:2efeb7211db1abde8ccda092af846f805b61900f51b39ffc9a91cd168b949318,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959752332592108,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02e
b995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8547993a84026cfd6c27f9f068228b1ea619a9dee19e99e3b8d07b22b23584,PodSandboxId:60334f5771834dbdd2d31990ab7c7f4f3d2b23cb4846dd94949e33ab8f9dd2d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959752074925751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Anno
tations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed600112468d9762d0ad5c0e554ca486c5cfc592114271d25cd84f5c09187db,PodSandboxId:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724959256520864398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75,PodSandboxId:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959112076742735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd,PodSandboxId:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959112071270130,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604,PodSandboxId:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724959100077712568,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c,PodSandboxId:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959097303124595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc,PodSandboxId:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959085808573312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0,PodSandboxId:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724959085673137503,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f63939f-0975-48a7-9802-3a8500d365e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:34:44 ha-505269 crio[3719]: time="2024-08-29 19:34:44.052907793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5e90374-8f67-49ed-9f5e-89fbae1c0f2c name=/runtime.v1.RuntimeService/Version
	Aug 29 19:34:44 ha-505269 crio[3719]: time="2024-08-29 19:34:44.053065709Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5e90374-8f67-49ed-9f5e-89fbae1c0f2c name=/runtime.v1.RuntimeService/Version
	Aug 29 19:34:44 ha-505269 crio[3719]: time="2024-08-29 19:34:44.054883554Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3c0b881-5432-4b75-b1c7-a0830be04cdf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:34:44 ha-505269 crio[3719]: time="2024-08-29 19:34:44.055423784Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960084055398322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3c0b881-5432-4b75-b1c7-a0830be04cdf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:34:44 ha-505269 crio[3719]: time="2024-08-29 19:34:44.056293786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9fab04d-33d8-4434-974e-373b5dbdccbd name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:34:44 ha-505269 crio[3719]: time="2024-08-29 19:34:44.056365148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9fab04d-33d8-4434-974e-373b5dbdccbd name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:34:44 ha-505269 crio[3719]: time="2024-08-29 19:34:44.056894957Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6a1f0f0043f79c5370dc7e5b665727737742deca0f382bca2810647b1c9c6443,PodSandboxId:d2d78cd683b75429a8fb19a8d336313277432fb28b10e5755db7a3f0044414b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724959926378655727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24eca9daeddb01201f9b34221347d5cb309931c118348968da060d6c1bf376c,PodSandboxId:2ba432bbcf20f2894079e374a21869e4c3b459312f9dfc9aa495e6ee0daa8237,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959819364656134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b090b02a2f6af71741e9e17cccfeb77b790febb24ce1fb5ec191d971424b5378,PodSandboxId:2efeb7211db1abde8ccda092af846f805b61900f51b39ffc9a91cd168b949318,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959798361283228,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02eb995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01f810a9780487cf942a0a39907de8a94f02a13ddcaa83a795f9632463c9f407,PodSandboxId:4330e2b71d9ee5233683cbf4c7d7de37788b9165bd248254bd022ecd32a0430d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724959785684229787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bb7e864296eb98a7fa1d4cf9729fd6fe8a035d7acdcb8024f1172b92b20424,PodSandboxId:2ba432bbcf20f2894079e374a21869e4c3b459312f9dfc9aa495e6ee0daa8237,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959785014940911,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840e9d9d59afee1514ac6551d154c955,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c3c9479f487a0ae9282327db1e1b6fa1a3f31f7ef4354f10083d6f93334aa13,PodSandboxId:3506c46b68a6dd7a917e5603d36b8df5d715969c33b218c84522da041fa4ba35,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724959768134530987,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf717028309b4303be9380ab50d8f89a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de469f77c1414c9aeb375f6fab95f9bf73d297cf39f8e3032d44a27fce129160,PodSandboxId:d2d78cd683b75429a8fb19a8d336313277432fb28b10e5755db7a3f0044414b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724959753005180580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7cd00a-94da-4e42-b7ae-289aab759c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:392a8fdf39b58ef48526e9b0ee737518780d384adbf9a00f6d58e386f06aca86,PodSandboxId:feea7addc66c506d27e991c62c20bda17a854abc868774c641d9459da7c0a1f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959752578046557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3f6edccac07af4c68e2df70de1901a354b7e466a6480aa00a8c242372a1489,PodSandboxId:01947ff9720b96eec71aef069d1e00630167d535f0d83a9d399723674f8fc2e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959752426178642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.conta
iner.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ab99da1b1a7e8939462b754993d6616d0a80dc4262fe6f7d4d69c339c4a78c,PodSandboxId:d20310623d52ab5052b16cdf33028f557b5a6c2baf8a0a27a7a866b18ff8cace,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959752399739719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",
\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d575c067854c198c0503704ca102d4bb9e9823a228df479b19a5ef172e162fc8,PodSandboxId:d42fe0859024616c82f93e163ef976e1501b5a23cca570ea6584dd321bdb15a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724959752418561136,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b7c110f2f7341eb54db4c08f1a782899e9205e069ca96dfbf295e38d3fc601,PodSandboxId:1cd9911d50578a09d9516f4ec4c519c25999741911a9df06637349a98a8b5dac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724959752194158517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ea370e88590ef75080636cd247905edf96c5c382c3fee2a7cf22b8c428407c,PodSandboxId:2efeb7211db1abde8ccda092af846f805b61900f51b39ffc9a91cd168b949318,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959752332592108,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82b78fe84e206c02e
b995f9d886b23c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8547993a84026cfd6c27f9f068228b1ea619a9dee19e99e3b8d07b22b23584,PodSandboxId:60334f5771834dbdd2d31990ab7c7f4f3d2b23cb4846dd94949e33ab8f9dd2d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959752074925751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Anno
tations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed600112468d9762d0ad5c0e554ca486c5cfc592114271d25cd84f5c09187db,PodSandboxId:02692297ba9f1af58d950a27f62a82af78a2a057750241400caef23a8bf2b2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724959256520864398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-psss7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69c11597-6cac-437a-9860-fc1a66cdc304,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75,PodSandboxId:f43b8211e2c7357fd7c4a4db43182c72f928a273075ca6fe8b80aedc84c67fac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959112076742735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qjgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12168097-2d3c-467a-b4b5-c0ca7f85e4eb,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd,PodSandboxId:6b5276c7cbe294143915bdc30d76458310de02fd916991c7782efc2e33f3190b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959112071270130,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-bqqq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801d9cfa-e1ad-4b31-9803-0030543fdc9e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604,PodSandboxId:1f6e4f500f959ede186c4a9f85bff38a45872b813a20d08db40395e528d840a1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724959100077712568,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7rp6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c922b32-e666-4b00-ab65-505632346112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c,PodSandboxId:303aabbca3328ef64356c8322b3e76f6f3d1d35af5f892a4bec46277b7c9dd3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959097303124595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88a504e-122b-4609-a0cc-4ad3115b3e4e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc,PodSandboxId:9d45b7206f46e64c0d5913c20a97148e0ac19d155b7ce7d3e371c593e763c4d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959085808573312,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2658d81c7919220d900309ffd29970c4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0,PodSandboxId:8cfa0a246feb47d168af1872d1544f0b93affe0730c14516b4367b835d78d328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724959085673137503,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-505269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceba94f2170a08ee5a3d92beb3c9ffca,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9fab04d-33d8-4434-974e-373b5dbdccbd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6a1f0f0043f79       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       6                   d2d78cd683b75       storage-provisioner
	d24eca9daeddb       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   3                   2ba432bbcf20f       kube-controller-manager-ha-505269
	b090b02a2f6af       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            3                   2efeb7211db1a       kube-apiserver-ha-505269
	01f810a978048       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   4330e2b71d9ee       busybox-7dff88458-psss7
	39bb7e864296e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Exited              kube-controller-manager   2                   2ba432bbcf20f       kube-controller-manager-ha-505269
	0c3c9479f487a       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   3506c46b68a6d       kube-vip-ha-505269
	de469f77c1414       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       5                   d2d78cd683b75       storage-provisioner
	392a8fdf39b58       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   feea7addc66c5       coredns-6f6b679f8f-bqqq5
	ed3f6edccac07       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   01947ff9720b9       etcd-ha-505269
	d575c067854c1       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   d42fe08590246       kindnet-7rp6z
	14ab99da1b1a7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   d20310623d52a       coredns-6f6b679f8f-qjgfg
	48ea370e88590       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Exited              kube-apiserver            2                   2efeb7211db1a       kube-apiserver-ha-505269
	46b7c110f2f73       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                1                   1cd9911d50578       kube-proxy-hx822
	ee8547993a840       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      5 minutes ago       Running             kube-scheduler            1                   60334f5771834       kube-scheduler-ha-505269
	7ed600112468d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   02692297ba9f1       busybox-7dff88458-psss7
	29d7e6c72fdaa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   f43b8211e2c73       coredns-6f6b679f8f-qjgfg
	1bc1a33f68ce7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   6b5276c7cbe29       coredns-6f6b679f8f-bqqq5
	f5e9dd792be09       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    16 minutes ago      Exited              kindnet-cni               0                   1f6e4f500f959       kindnet-7rp6z
	9b0cc96d9477c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      16 minutes ago      Exited              kube-proxy                0                   303aabbca3328       kube-proxy-hx822
	52fd2d668a925       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   9d45b7206f46e       etcd-ha-505269
	d1f91ce133bed       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      16 minutes ago      Exited              kube-scheduler            0                   8cfa0a246feb4       kube-scheduler-ha-505269
	
	
	==> coredns [14ab99da1b1a7e8939462b754993d6616d0a80dc4262fe6f7d4d69c339c4a78c] <==
	Trace[1366078645]: [10.001089474s] [10.001089474s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [1bc1a33f68ce7f972b7d1dd4dc36ce496d6339b1dc6a351d35cbb255ed61e8bd] <==
	[INFO] 10.244.0.4:49913 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073523s
	[INFO] 10.244.0.4:48970 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00128236s
	[INFO] 10.244.0.4:55431 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000054415s
	[INFO] 10.244.0.4:54011 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100096s
	[INFO] 10.244.0.4:57804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008517s
	[INFO] 10.244.2.2:41131 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117965s
	[INFO] 10.244.1.2:45186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106338s
	[INFO] 10.244.1.2:55754 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090657s
	[INFO] 10.244.0.4:56674 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071901s
	[INFO] 10.244.2.2:38366 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108693s
	[INFO] 10.244.2.2:46323 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210179s
	[INFO] 10.244.1.2:45861 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134603s
	[INFO] 10.244.1.2:56113 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085692s
	[INFO] 10.244.1.2:56364 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124593s
	[INFO] 10.244.1.2:47826 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121887s
	[INFO] 10.244.0.4:45102 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000150401s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1902&timeout=5m33s&timeoutSeconds=333&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1902": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1902": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1901&timeout=8m42s&timeoutSeconds=522&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1902": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1902": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1902&timeout=7m53s&timeoutSeconds=473&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [29d7e6c72fdaaad5da65548ec44c18b60db3921c4ad2b63c9d767f1cc2c3fa75] <==
	[INFO] 10.244.1.2:43142 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092944s
	[INFO] 10.244.1.2:53648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107032s
	[INFO] 10.244.0.4:57451 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001845348s
	[INFO] 10.244.2.2:52124 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171457s
	[INFO] 10.244.2.2:35561 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076513s
	[INFO] 10.244.2.2:43265 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081638s
	[INFO] 10.244.1.2:37225 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147344s
	[INFO] 10.244.1.2:48252 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148007s
	[INFO] 10.244.0.4:60295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013086s
	[INFO] 10.244.0.4:48577 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072897s
	[INFO] 10.244.0.4:48965 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087209s
	[INFO] 10.244.2.2:54597 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016109s
	[INFO] 10.244.2.2:38187 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150915s
	[INFO] 10.244.0.4:36462 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093452s
	[INFO] 10.244.0.4:43748 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071292s
	[INFO] 10.244.0.4:55783 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059972s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1889&timeout=9m4s&timeoutSeconds=544&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1901&timeout=9m58s&timeoutSeconds=598&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [392a8fdf39b58ef48526e9b0ee737518780d384adbf9a00f6d58e386f06aca86] <==
	Trace[883420597]: [10.545522632s] [10.545522632s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:33790->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:51966->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-505269
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-505269
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=ha-505269
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T19_18_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:18:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-505269
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:34:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:33:20 +0000   Thu, 29 Aug 2024 19:33:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:33:20 +0000   Thu, 29 Aug 2024 19:33:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:33:20 +0000   Thu, 29 Aug 2024 19:33:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:33:20 +0000   Thu, 29 Aug 2024 19:33:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.56
	  Hostname:    ha-505269
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fddeecce7ac74aa7bff3cef388a156b1
	  System UUID:                fddeecce-7ac7-4aa7-bff3-cef388a156b1
	  Boot ID:                    1446f3e5-6319-4e2f-82e2-8ba9409f038f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-psss7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-6f6b679f8f-bqqq5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-qjgfg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-505269                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-7rp6z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-505269             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-505269    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-hx822                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-505269             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-505269                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 4m48s                  kube-proxy       
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-505269 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-505269 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-505269 status is now: NodeHasSufficientMemory
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16m                    node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	  Warning  ContainerGCFailed        6m30s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m52s (x2 over 6m17s)  kubelet          Node ha-505269 status is now: NodeNotReady
	  Normal   RegisteredNode           4m47s                  node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	  Normal   RegisteredNode           4m23s                  node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-505269 event: Registered Node ha-505269 in Controller
	  Normal   NodeNotReady             107s                   node-controller  Node ha-505269 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     84s (x2 over 16m)      kubelet          Node ha-505269 status is now: NodeHasSufficientPID
	  Normal   NodeReady                84s (x2 over 16m)      kubelet          Node ha-505269 status is now: NodeReady
	  Normal   NodeHasNoDiskPressure    84s (x2 over 16m)      kubelet          Node ha-505269 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  84s (x2 over 16m)      kubelet          Node ha-505269 status is now: NodeHasSufficientMemory
	
	
	Name:               ha-505269-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-505269-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=ha-505269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_19_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:19:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-505269-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:34:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:30:31 +0000   Thu, 29 Aug 2024 19:30:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:30:31 +0000   Thu, 29 Aug 2024 19:30:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:30:31 +0000   Thu, 29 Aug 2024 19:30:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:30:31 +0000   Thu, 29 Aug 2024 19:30:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-505269-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc422cc060b34981a3c71775f3af90fa
	  System UUID:                dc422cc0-60b3-4981-a3c7-1775f3af90fa
	  Boot ID:                    ed67c8ee-a9f9-4390-bfbf-4216db72f9b4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hcgzg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-505269-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-sthc8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-505269-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-505269-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-jxbdt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-505269-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-505269-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m23s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-505269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-505269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-505269-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-505269-m02 status is now: NodeNotReady
	  Normal  Starting                 5m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m14s (x8 over 5m14s)  kubelet          Node ha-505269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s (x8 over 5m14s)  kubelet          Node ha-505269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s (x7 over 5m14s)  kubelet          Node ha-505269-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m47s                  node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-505269-m02 event: Registered Node ha-505269-m02 in Controller
	
	
	Name:               ha-505269-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-505269-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=ha-505269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_21_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:21:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-505269-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:32:17 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 29 Aug 2024 19:31:56 +0000   Thu, 29 Aug 2024 19:32:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 29 Aug 2024 19:31:56 +0000   Thu, 29 Aug 2024 19:32:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 29 Aug 2024 19:31:56 +0000   Thu, 29 Aug 2024 19:32:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 29 Aug 2024 19:31:56 +0000   Thu, 29 Aug 2024 19:32:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-505269-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a48c94cc9aca47538967ceed34ba2fed
	  System UUID:                a48c94cc-9aca-4753-8967-ceed34ba2fed
	  Boot ID:                    79a74b2c-30a1-4e3c-a4d1-a0d36fcb9738
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-99h9b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-5lkbf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-b5p66           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-505269-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-505269-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-505269-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-505269-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m47s                  node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal   RegisteredNode           4m23s                  node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-505269-m04 event: Registered Node ha-505269-m04 in Controller
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-505269-m04 has been rebooted, boot id: 79a74b2c-30a1-4e3c-a4d1-a0d36fcb9738
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeReady                2m48s                  kubelet          Node ha-505269-m04 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m48s)  kubelet          Node ha-505269-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m48s)  kubelet          Node ha-505269-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m48s)  kubelet          Node ha-505269-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             106s (x2 over 4m7s)    node-controller  Node ha-505269-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.251780] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.063740] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055892] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.200106] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.121292] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.278954] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.975871] systemd-fstab-generator[753]: Ignoring "noauto" option for root device
	[Aug29 19:18] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.064250] kauditd_printk_skb: 158 callbacks suppressed
	[  +8.795220] kauditd_printk_skb: 79 callbacks suppressed
	[  +1.256542] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +6.194607] kauditd_printk_skb: 54 callbacks suppressed
	[Aug29 19:19] kauditd_printk_skb: 24 callbacks suppressed
	[Aug29 19:26] kauditd_printk_skb: 1 callbacks suppressed
	[Aug29 19:29] systemd-fstab-generator[3644]: Ignoring "noauto" option for root device
	[  +0.154243] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.177111] systemd-fstab-generator[3670]: Ignoring "noauto" option for root device
	[  +0.150845] systemd-fstab-generator[3682]: Ignoring "noauto" option for root device
	[  +0.294875] systemd-fstab-generator[3710]: Ignoring "noauto" option for root device
	[  +0.793246] systemd-fstab-generator[3803]: Ignoring "noauto" option for root device
	[  +3.742669] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.234337] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.059149] kauditd_printk_skb: 1 callbacks suppressed
	[ +22.864317] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [52fd2d668a925e55c334400b5d61dde55232d771983a9073e97f0ff37a6062fc] <==
	2024/08/29 19:27:35 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/29 19:27:35 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/29 19:27:35 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/29 19:27:35 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-29T19:27:35.173724Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.56:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T19:27:35.173823Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.56:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-29T19:27:35.173899Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"be139f16c87a8e87","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-29T19:27:35.175242Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:27:35.175482Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:27:35.175527Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:27:35.175587Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:27:35.175668Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:27:35.175727Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:27:35.175755Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:27:35.175780Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f5dc62d3f2837830"}
	{"level":"info","ts":"2024-08-29T19:27:35.176129Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f5dc62d3f2837830"}
	{"level":"info","ts":"2024-08-29T19:27:35.176242Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f5dc62d3f2837830"}
	{"level":"info","ts":"2024-08-29T19:27:35.176353Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830"}
	{"level":"info","ts":"2024-08-29T19:27:35.176410Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830"}
	{"level":"info","ts":"2024-08-29T19:27:35.176469Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"be139f16c87a8e87","remote-peer-id":"f5dc62d3f2837830"}
	{"level":"info","ts":"2024-08-29T19:27:35.176498Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f5dc62d3f2837830"}
	{"level":"info","ts":"2024-08-29T19:27:35.182723Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.56:2380"}
	{"level":"info","ts":"2024-08-29T19:27:35.182846Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.56:2380"}
	{"level":"info","ts":"2024-08-29T19:27:35.182871Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-505269","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.56:2380"],"advertise-client-urls":["https://192.168.39.56:2379"]}
	{"level":"warn","ts":"2024-08-29T19:27:35.182902Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.575390709s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	
	
	==> etcd [ed3f6edccac07af4c68e2df70de1901a354b7e466a6480aa00a8c242372a1489] <==
	{"level":"info","ts":"2024-08-29T19:31:17.999308Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:31:17.999809Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:31:18.004823Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"be139f16c87a8e87","to":"324dbe8b03e4639e","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-29T19:31:18.004914Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:31:18.016370Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"be139f16c87a8e87","to":"324dbe8b03e4639e","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-29T19:31:18.016419Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"warn","ts":"2024-08-29T19:31:18.414087Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"324dbe8b03e4639e","rtt":"0s","error":"dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T19:31:18.414152Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"324dbe8b03e4639e","rtt":"0s","error":"dial tcp 192.168.39.178:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-29T19:32:11.016420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be139f16c87a8e87 switched to configuration voters=(13696465811965382279 17716143696615012400)"}
	{"level":"info","ts":"2024-08-29T19:32:11.018629Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"7fd3c3974c415d44","local-member-id":"be139f16c87a8e87","removed-remote-peer-id":"324dbe8b03e4639e","removed-remote-peer-urls":["https://192.168.39.178:2380"]}
	{"level":"info","ts":"2024-08-29T19:32:11.018742Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"warn","ts":"2024-08-29T19:32:11.019099Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:32:11.019167Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"warn","ts":"2024-08-29T19:32:11.019431Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:32:11.019474Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:32:11.019752Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"warn","ts":"2024-08-29T19:32:11.020616Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e","error":"context canceled"}
	{"level":"warn","ts":"2024-08-29T19:32:11.020942Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"324dbe8b03e4639e","error":"failed to read 324dbe8b03e4639e on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-29T19:32:11.021175Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"warn","ts":"2024-08-29T19:32:11.021405Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e","error":"context canceled"}
	{"level":"info","ts":"2024-08-29T19:32:11.021496Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"be139f16c87a8e87","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:32:11.021761Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"324dbe8b03e4639e"}
	{"level":"info","ts":"2024-08-29T19:32:11.021811Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"be139f16c87a8e87","removed-remote-peer-id":"324dbe8b03e4639e"}
	{"level":"warn","ts":"2024-08-29T19:32:11.043527Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"be139f16c87a8e87","remote-peer-id-stream-handler":"be139f16c87a8e87","remote-peer-id-from":"324dbe8b03e4639e"}
	{"level":"warn","ts":"2024-08-29T19:32:11.046547Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"be139f16c87a8e87","remote-peer-id-stream-handler":"be139f16c87a8e87","remote-peer-id-from":"324dbe8b03e4639e"}
	
	
	==> kernel <==
	 19:34:44 up 17 min,  0 users,  load average: 0.82, 0.90, 0.62
	Linux ha-505269 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d575c067854c198c0503704ca102d4bb9e9823a228df479b19a5ef172e162fc8] <==
	I0829 19:34:03.585464       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:34:13.579606       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:34:13.579710       1 main.go:299] handling current node
	I0829 19:34:13.579740       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:34:13.579758       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:34:13.579887       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:34:13.579937       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:34:23.579100       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:34:23.579157       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:34:23.579361       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:34:23.579388       1 main.go:299] handling current node
	I0829 19:34:23.579399       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:34:23.579404       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:34:33.587223       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:34:33.587390       1 main.go:299] handling current node
	I0829 19:34:33.587423       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:34:33.587446       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:34:33.587607       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:34:33.587629       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:34:43.579083       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:34:43.579181       1 main.go:299] handling current node
	I0829 19:34:43.579197       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:34:43.579205       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:34:43.579393       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:34:43.579425       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f5e9dd792be09a6e1909afd5dca42b303bac8476ef04f3254480b0f21ac53604] <==
	I0829 19:27:01.258595       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:27:11.258120       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:27:11.258232       1 main.go:299] handling current node
	I0829 19:27:11.258273       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:27:11.258292       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:27:11.258467       1 main.go:295] Handling node with IPs: map[192.168.39.178:{}]
	I0829 19:27:11.258534       1 main.go:322] Node ha-505269-m03 has CIDR [10.244.2.0/24] 
	I0829 19:27:11.258630       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:27:11.258652       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:27:21.258540       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:27:21.258576       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:27:21.258721       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:27:21.258727       1 main.go:299] handling current node
	I0829 19:27:21.258743       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:27:21.258747       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	I0829 19:27:21.258791       1 main.go:295] Handling node with IPs: map[192.168.39.178:{}]
	I0829 19:27:21.258795       1 main.go:322] Node ha-505269-m03 has CIDR [10.244.2.0/24] 
	I0829 19:27:31.267130       1 main.go:295] Handling node with IPs: map[192.168.39.178:{}]
	I0829 19:27:31.267398       1 main.go:322] Node ha-505269-m03 has CIDR [10.244.2.0/24] 
	I0829 19:27:31.267644       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0829 19:27:31.267692       1 main.go:322] Node ha-505269-m04 has CIDR [10.244.3.0/24] 
	I0829 19:27:31.267786       1 main.go:295] Handling node with IPs: map[192.168.39.56:{}]
	I0829 19:27:31.267807       1 main.go:299] handling current node
	I0829 19:27:31.267844       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0829 19:27:31.267860       1 main.go:322] Node ha-505269-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [48ea370e88590ef75080636cd247905edf96c5c382c3fee2a7cf22b8c428407c] <==
	I0829 19:29:12.739081       1 options.go:228] external host was not specified, using 192.168.39.56
	I0829 19:29:12.750599       1 server.go:142] Version: v1.31.0
	I0829 19:29:12.750630       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:29:13.316201       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0829 19:29:13.342872       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0829 19:29:13.351920       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0829 19:29:13.351958       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0829 19:29:13.352197       1 instance.go:232] Using reconciler: lease
	W0829 19:29:33.315496       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0829 19:29:33.315829       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0829 19:29:33.353148       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [b090b02a2f6af71741e9e17cccfeb77b790febb24ce1fb5ec191d971424b5378] <==
	I0829 19:30:00.536598       1 shared_informer.go:320] Caches are synced for configmaps
	I0829 19:30:00.536750       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0829 19:30:00.536778       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0829 19:30:00.542462       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0829 19:30:00.544197       1 aggregator.go:171] initial CRD sync complete...
	I0829 19:30:00.544240       1 autoregister_controller.go:144] Starting autoregister controller
	I0829 19:30:00.544247       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0829 19:30:00.544257       1 cache.go:39] Caches are synced for autoregister controller
	I0829 19:30:00.545618       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0829 19:30:00.552119       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0829 19:30:00.552225       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	W0829 19:30:00.562428       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.178 192.168.39.68]
	I0829 19:30:00.575914       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0829 19:30:00.576087       1 policy_source.go:224] refreshing policies
	I0829 19:30:00.576303       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0829 19:30:00.590887       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0829 19:30:00.634740       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0829 19:30:00.637459       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0829 19:30:00.665600       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 19:30:00.677192       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0829 19:30:00.680383       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0829 19:30:01.443128       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0829 19:30:01.794021       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.178 192.168.39.56 192.168.39.68]
	W0829 19:30:11.793500       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.56 192.168.39.68]
	W0829 19:32:21.804465       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.56 192.168.39.68]
	
	
	==> kube-controller-manager [39bb7e864296eb98a7fa1d4cf9729fd6fe8a035d7acdcb8024f1172b92b20424] <==
	I0829 19:29:45.889693       1 serving.go:386] Generated self-signed cert in-memory
	I0829 19:29:46.599752       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0829 19:29:46.599855       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:29:46.601414       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0829 19:29:46.601582       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0829 19:29:46.601648       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0829 19:29:46.601725       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0829 19:29:56.605285       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.56:8443/healthz\": dial tcp 192.168.39.56:8443: connect: connection refused"
	
	
	==> kube-controller-manager [d24eca9daeddb01201f9b34221347d5cb309931c118348968da060d6c1bf376c] <==
	I0829 19:32:57.175835       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="97.162µs"
	I0829 19:32:58.021762       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:32:58.046676       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:32:58.073059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.141726ms"
	I0829 19:32:58.073669       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.352µs"
	I0829 19:32:58.105720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269"
	E0829 19:33:01.947675       1 gc_controller.go:151] "Failed to get node" err="node \"ha-505269-m03\" not found" logger="pod-garbage-collector-controller" node="ha-505269-m03"
	E0829 19:33:01.947776       1 gc_controller.go:151] "Failed to get node" err="node \"ha-505269-m03\" not found" logger="pod-garbage-collector-controller" node="ha-505269-m03"
	E0829 19:33:01.947803       1 gc_controller.go:151] "Failed to get node" err="node \"ha-505269-m03\" not found" logger="pod-garbage-collector-controller" node="ha-505269-m03"
	E0829 19:33:01.947827       1 gc_controller.go:151] "Failed to get node" err="node \"ha-505269-m03\" not found" logger="pod-garbage-collector-controller" node="ha-505269-m03"
	E0829 19:33:01.947851       1 gc_controller.go:151] "Failed to get node" err="node \"ha-505269-m03\" not found" logger="pod-garbage-collector-controller" node="ha-505269-m03"
	I0829 19:33:02.318918       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:33:08.188558       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269-m04"
	I0829 19:33:12.403753       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269"
	I0829 19:33:14.465568       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="28.199936ms"
	I0829 19:33:14.465731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="65.116µs"
	I0829 19:33:14.466024       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-dnl5b EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-dnl5b\": the object has been modified; please apply your changes to the latest version and try again"
	I0829 19:33:14.466511       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"56695963-7264-4e34-b7d9-4744e56cebd1", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-dnl5b EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-dnl5b": the object has been modified; please apply your changes to the latest version and try again
	I0829 19:33:14.562780       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.122605ms"
	I0829 19:33:14.563022       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="122.068µs"
	I0829 19:33:14.660868       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="21.942692ms"
	I0829 19:33:14.661547       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="90.026µs"
	I0829 19:33:20.427083       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269"
	I0829 19:33:20.457368       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269"
	I0829 19:33:22.280879       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-505269"
	
	
	==> kube-proxy [46b7c110f2f7341eb54db4c08f1a782899e9205e069ca96dfbf295e38d3fc601] <==
	I0829 19:29:55.712702       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:29:55.713329       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:29:55.713379       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:29:55.715361       1 config.go:197] "Starting service config controller"
	I0829 19:29:55.715438       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:29:55.715478       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:29:55.715494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:29:55.716372       1 config.go:326] "Starting node config controller"
	I0829 19:29:55.716411       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0829 19:29:58.746205       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0829 19:29:58.746369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:29:58.746511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:29:58.746612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:29:58.746671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:29:58.746755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:29:58.746792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:30:01.817856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:30:01.817945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:30:01.818116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:30:01.818601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:30:01.818688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:30:01.818724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0829 19:30:04.216060       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 19:30:04.617083       1 shared_informer.go:320] Caches are synced for node config
	I0829 19:30:04.815764       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [9b0cc96d9477c950d00d6920b1c4466bfd2e75dc0980383cdf073b75f577c37c] <==
	E0829 19:26:29.851423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:29.851672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:29.851845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:32.923030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:32.923136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:32.923047       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:32.923285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:35.994951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:35.995163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1865\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:39.066808       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:39.067021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:39.067112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:39.067162       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:48.282373       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:48.282601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1865\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:51.355204       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:51.355653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:26:51.355840       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:26:51.355900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:27:09.787676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:27:09.787778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-505269&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:27:09.787889       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:27:09.787926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1865\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 19:27:19.002377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 19:27:19.002559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [d1f91ce133bedd34081facf255eed45036eacfd938fbf58545d302a01d638dc0] <==
	E0829 19:21:30.554445       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-shg8j\": pod kube-proxy-shg8j is already assigned to node \"ha-505269-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-shg8j" node="ha-505269-m04"
	E0829 19:21:30.554616       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 05405fa6-d40f-446d-ad32-18b243d7b162(kube-system/kube-proxy-shg8j) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-shg8j"
	E0829 19:21:30.554728       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-shg8j\": pod kube-proxy-shg8j is already assigned to node \"ha-505269-m04\"" pod="kube-system/kube-proxy-shg8j"
	I0829 19:21:30.554863       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-shg8j" node="ha-505269-m04"
	E0829 19:21:30.555526       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5lkbf\": pod kindnet-5lkbf is already assigned to node \"ha-505269-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5lkbf" node="ha-505269-m04"
	E0829 19:21:30.558296       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 112e2462-a26a-4f91-a405-dab3468f9071(kube-system/kindnet-5lkbf) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5lkbf"
	E0829 19:21:30.559049       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5lkbf\": pod kindnet-5lkbf is already assigned to node \"ha-505269-m04\"" pod="kube-system/kindnet-5lkbf"
	I0829 19:21:30.559106       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5lkbf" node="ha-505269-m04"
	E0829 19:27:25.046614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0829 19:27:25.874772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0829 19:27:27.555185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0829 19:27:28.682860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0829 19:27:28.861646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0829 19:27:29.068173       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0829 19:27:29.565552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0829 19:27:29.606070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0829 19:27:30.829134       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0829 19:27:31.977335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0829 19:27:32.317188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0829 19:27:32.374845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0829 19:27:32.908740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0829 19:27:33.024045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	I0829 19:27:35.120341       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0829 19:27:35.121683       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0829 19:27:35.124295       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ee8547993a84026cfd6c27f9f068228b1ea619a9dee19e99e3b8d07b22b23584] <==
	W0829 19:29:51.973513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.56:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:51.973590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.56:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:52.090201       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.56:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:52.090267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.56:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:52.679688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.56:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:52.679748       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.56:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:53.108961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.56:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:53.109234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.56:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:53.669469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.56:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:53.669560       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.56:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:53.763460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.56:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:53.763538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.56:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:53.787958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.56:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:53.788140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.56:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:53.839265       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.56:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:53.839381       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.56:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:54.211253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.56:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:54.211336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.56:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:54.860090       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.56:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:54.860151       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.56:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:55.624620       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.56:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:55.624746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.56:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	W0829 19:29:56.826084       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.56:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.56:8443: connect: connection refused
	E0829 19:29:56.826135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.56:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.56:8443: connect: connection refused" logger="UnhandledError"
	I0829 19:30:15.367741       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 19:33:14 ha-505269 kubelet[1314]: E0829 19:33:14.604089    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959994603722570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:33:14 ha-505269 kubelet[1314]: E0829 19:33:14.604142    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959994603722570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:33:24 ha-505269 kubelet[1314]: E0829 19:33:24.605179    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960004604875167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:33:24 ha-505269 kubelet[1314]: E0829 19:33:24.605230    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960004604875167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:33:34 ha-505269 kubelet[1314]: E0829 19:33:34.606760    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960014606566293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:33:34 ha-505269 kubelet[1314]: E0829 19:33:34.606847    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960014606566293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:33:44 ha-505269 kubelet[1314]: E0829 19:33:44.608750    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960024608277587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:33:44 ha-505269 kubelet[1314]: E0829 19:33:44.608866    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960024608277587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:33:54 ha-505269 kubelet[1314]: E0829 19:33:54.610572    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960034610182040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:33:54 ha-505269 kubelet[1314]: E0829 19:33:54.610643    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960034610182040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:34:04 ha-505269 kubelet[1314]: E0829 19:34:04.611799    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960044611425140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:34:04 ha-505269 kubelet[1314]: E0829 19:34:04.611841    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960044611425140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:34:14 ha-505269 kubelet[1314]: E0829 19:34:14.379774    1314 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:34:14 ha-505269 kubelet[1314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:34:14 ha-505269 kubelet[1314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:34:14 ha-505269 kubelet[1314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:34:14 ha-505269 kubelet[1314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:34:14 ha-505269 kubelet[1314]: E0829 19:34:14.613186    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960054612813674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:34:14 ha-505269 kubelet[1314]: E0829 19:34:14.613286    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960054612813674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:34:24 ha-505269 kubelet[1314]: E0829 19:34:24.617323    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960064616201601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:34:24 ha-505269 kubelet[1314]: E0829 19:34:24.617372    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960064616201601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:34:34 ha-505269 kubelet[1314]: E0829 19:34:34.628149    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960074627721709,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:34:34 ha-505269 kubelet[1314]: E0829 19:34:34.628759    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960074627721709,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:34:44 ha-505269 kubelet[1314]: E0829 19:34:44.630623    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960084629846171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:34:44 ha-505269 kubelet[1314]: E0829 19:34:44.630653    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960084629846171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:34:43.611262   39019 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19530-11185/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
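The "token too long" error in the stderr above is a stock Go pitfall rather than a cluster problem: bufio.Scanner caps a single token at 64 KiB (bufio.MaxScanTokenSize) by default, and lastStart.txt contains single log lines far longer than that, as the wrapped cluster-config entries later in this report show. A minimal sketch of the usual remedy, growing the scanner's buffer before reading; the file path is copied from the error message, everything else is illustrative and not the harness's actual code:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Path taken from the error above; adjust for your environment.
		f, err := os.Open("/home/jenkins/minikube-integration/19530-11185/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default max token size is 64 KiB; raise the cap to 10 MiB so
		// very long single-line entries no longer trip
		// "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			// Still ErrTooLong if a line exceeds even the raised cap.
			fmt.Fprintln(os.Stderr, err)
		}
	}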
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-505269 -n ha-505269
helpers_test.go:261: (dbg) Run:  kubectl --context ha-505269 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.76s)
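Three recurring patterns in the post-mortem above are worth separating from the stop-timeout failure itself. The kube-scheduler's "already assigned to node" errors record a lost bind race: by the time DefaultBinder ran, kube-proxy-shg8j and kindnet-5lkbf already carried a node assignment, and the pods were correctly left alone. The eviction manager's repeating "missing image stats" error shows CRI-O answering ImageFsInfo with an empty ContainerFilesystems list, which this kubelet refuses to use; `crictl imagefsinfo` on the node shows the raw response. The ip6tables canary error is environmental: the guest kernel has no ip6tables nat table. A small sketch that reproduces that last probe, assuming root on the node; the chain name and the exit-status-3 mapping come straight from the log, while the exact flags only approximate the kubelet's invocation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Roughly the chain-creation call the kubelet's canary reports failing.
		out, err := exec.Command("ip6tables", "-w", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 3 {
			// Exit status 3 is the "can't initialize ip6tables table `nat'"
			// case from the log: the guest kernel lacks ip6tables nat support.
			fmt.Printf("nat table unavailable: %s", out)
			return
		}
		if err != nil {
			fmt.Printf("unexpected failure: %v: %s", err, out)
			return
		}
		fmt.Println("canary chain created; ip6tables nat table exists")
	}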

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (324.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-197790
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-197790
E0829 19:53:45.974819   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-197790: exit status 82 (2m1.824390119s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-197790-m03"  ...
	* Stopping node "multinode-197790-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-197790" : exit status 82
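Two notes on this assertion. First, the message quotes the `node list` args even though the command that returned exit status 82 was the `stop` at multinode_test.go:321 above; that reads like a copy-paste slip in the harness's error string. Second, exit status 82 is the GUEST_STOP_TIMEOUT path shown in the stderr box. A minimal sketch of how such an exit code is surfaced with os/exec; the binary and profile name are copied from the log, and nothing here is the harness's actual code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-197790")
		if err := cmd.Run(); err != nil {
			if ee, ok := err.(*exec.ExitError); ok {
				// The harness saw 82 here, minikube's GUEST_STOP_TIMEOUT code.
				fmt.Printf("stop exited with code %d\n", ee.ExitCode())
				return
			}
			fmt.Printf("stop could not run: %v\n", err)
			return
		}
		fmt.Println("stop succeeded")
	}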
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-197790 --wait=true -v=8 --alsologtostderr
E0829 19:54:41.010506   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:56:37.943016   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-197790 --wait=true -v=8 --alsologtostderr: (3m20.67870315s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-197790
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-197790 -n multinode-197790
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-197790 logs -n 25: (1.480148891s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-197790 ssh -n                                                                 | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-197790 cp multinode-197790-m02:/home/docker/cp-test.txt                       | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1846508817/001/cp-test_multinode-197790-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n                                                                 | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-197790 cp multinode-197790-m02:/home/docker/cp-test.txt                       | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790:/home/docker/cp-test_multinode-197790-m02_multinode-197790.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n                                                                 | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n multinode-197790 sudo cat                                       | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | /home/docker/cp-test_multinode-197790-m02_multinode-197790.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-197790 cp multinode-197790-m02:/home/docker/cp-test.txt                       | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m03:/home/docker/cp-test_multinode-197790-m02_multinode-197790-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n                                                                 | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n multinode-197790-m03 sudo cat                                   | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | /home/docker/cp-test_multinode-197790-m02_multinode-197790-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-197790 cp testdata/cp-test.txt                                                | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n                                                                 | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-197790 cp multinode-197790-m03:/home/docker/cp-test.txt                       | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1846508817/001/cp-test_multinode-197790-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n                                                                 | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-197790 cp multinode-197790-m03:/home/docker/cp-test.txt                       | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790:/home/docker/cp-test_multinode-197790-m03_multinode-197790.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n                                                                 | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n multinode-197790 sudo cat                                       | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | /home/docker/cp-test_multinode-197790-m03_multinode-197790.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-197790 cp multinode-197790-m03:/home/docker/cp-test.txt                       | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m02:/home/docker/cp-test_multinode-197790-m03_multinode-197790-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n                                                                 | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n multinode-197790-m02 sudo cat                                   | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | /home/docker/cp-test_multinode-197790-m03_multinode-197790-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-197790 node stop m03                                                          | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	| node    | multinode-197790 node start                                                             | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-197790                                                                | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC |                     |
	| stop    | -p multinode-197790                                                                     | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC |                     |
	| start   | -p multinode-197790                                                                     | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:53 UTC | 29 Aug 24 19:57 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-197790                                                                | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:57 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:53:53
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 19:53:53.575195   48766 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:53:53.575408   48766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:53:53.575419   48766 out.go:358] Setting ErrFile to fd 2...
	I0829 19:53:53.575430   48766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:53:53.575651   48766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:53:53.576174   48766 out.go:352] Setting JSON to false
	I0829 19:53:53.577072   48766 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5781,"bootTime":1724955453,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:53:53.577126   48766 start.go:139] virtualization: kvm guest
	I0829 19:53:53.579146   48766 out.go:177] * [multinode-197790] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:53:53.580593   48766 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 19:53:53.580599   48766 notify.go:220] Checking for updates...
	I0829 19:53:53.583128   48766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:53:53.584476   48766 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:53:53.585844   48766 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:53:53.587140   48766 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:53:53.588374   48766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:53:53.589958   48766 config.go:182] Loaded profile config "multinode-197790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:53:53.590040   48766 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:53:53.590424   48766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:53:53.590460   48766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:53:53.605709   48766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44263
	I0829 19:53:53.606132   48766 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:53:53.606705   48766 main.go:141] libmachine: Using API Version  1
	I0829 19:53:53.606738   48766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:53:53.607141   48766 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:53:53.607321   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:53:53.641844   48766 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:53:53.643231   48766 start.go:297] selected driver: kvm2
	I0829 19:53:53.643244   48766 start.go:901] validating driver "kvm2" against &{Name:multinode-197790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-197790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.247 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.131 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:53:53.643401   48766 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:53:53.643702   48766 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:53:53.643782   48766 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:53:53.658191   48766 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:53:53.658843   48766 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:53:53.658921   48766 cni.go:84] Creating CNI manager for ""
	I0829 19:53:53.658935   48766 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0829 19:53:53.659007   48766 start.go:340] cluster config:
	{Name:multinode-197790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-197790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.247 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.131 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:53:53.659139   48766 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:53:53.660810   48766 out.go:177] * Starting "multinode-197790" primary control-plane node in "multinode-197790" cluster
	I0829 19:53:53.661937   48766 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:53:53.661969   48766 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:53:53.661978   48766 cache.go:56] Caching tarball of preloaded images
	I0829 19:53:53.662054   48766 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:53:53.662067   48766 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:53:53.662189   48766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/config.json ...
	I0829 19:53:53.662386   48766 start.go:360] acquireMachinesLock for multinode-197790: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:53:53.662437   48766 start.go:364] duration metric: took 33.393µs to acquireMachinesLock for "multinode-197790"
	I0829 19:53:53.662457   48766 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:53:53.662466   48766 fix.go:54] fixHost starting: 
	I0829 19:53:53.662790   48766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:53:53.662835   48766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:53:53.676636   48766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0829 19:53:53.677066   48766 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:53:53.677543   48766 main.go:141] libmachine: Using API Version  1
	I0829 19:53:53.677579   48766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:53:53.677890   48766 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:53:53.678057   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:53:53.678235   48766 main.go:141] libmachine: (multinode-197790) Calling .GetState
	I0829 19:53:53.679617   48766 fix.go:112] recreateIfNeeded on multinode-197790: state=Running err=<nil>
	W0829 19:53:53.679637   48766 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:53:53.681385   48766 out.go:177] * Updating the running kvm2 "multinode-197790" VM ...
	I0829 19:53:53.682670   48766 machine.go:93] provisionDockerMachine start ...
	I0829 19:53:53.682687   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:53:53.682873   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:53:53.684862   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:53.685300   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:53:53.685331   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:53.685440   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:53:53.685582   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:53.685724   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:53.685818   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:53:53.685986   48766 main.go:141] libmachine: Using SSH client type: native
	I0829 19:53:53.686157   48766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0829 19:53:53.686167   48766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:53:53.799867   48766 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-197790
	
	I0829 19:53:53.799895   48766 main.go:141] libmachine: (multinode-197790) Calling .GetMachineName
	I0829 19:53:53.800132   48766 buildroot.go:166] provisioning hostname "multinode-197790"
	I0829 19:53:53.800161   48766 main.go:141] libmachine: (multinode-197790) Calling .GetMachineName
	I0829 19:53:53.800344   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:53:53.803053   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:53.803426   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:53:53.803452   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:53.803619   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:53:53.803802   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:53.803995   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:53.804122   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:53:53.804272   48766 main.go:141] libmachine: Using SSH client type: native
	I0829 19:53:53.804480   48766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0829 19:53:53.804498   48766 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-197790 && echo "multinode-197790" | sudo tee /etc/hostname
	I0829 19:53:53.929467   48766 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-197790
	
	I0829 19:53:53.929501   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:53:53.932159   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:53.932491   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:53:53.932530   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:53.932671   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:53:53.932851   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:53.933017   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:53.933148   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:53:53.933298   48766 main.go:141] libmachine: Using SSH client type: native
	I0829 19:53:53.933460   48766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0829 19:53:53.933476   48766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-197790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-197790/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-197790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:53:54.052234   48766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:53:54.052257   48766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 19:53:54.052285   48766 buildroot.go:174] setting up certificates
	I0829 19:53:54.052294   48766 provision.go:84] configureAuth start
	I0829 19:53:54.052302   48766 main.go:141] libmachine: (multinode-197790) Calling .GetMachineName
	I0829 19:53:54.052534   48766 main.go:141] libmachine: (multinode-197790) Calling .GetIP
	I0829 19:53:54.055298   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.055670   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:53:54.055694   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.055830   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:53:54.057733   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.058043   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:53:54.058069   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.058217   48766 provision.go:143] copyHostCerts
	I0829 19:53:54.058260   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:53:54.058298   48766 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 19:53:54.058315   48766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:53:54.058396   48766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 19:53:54.058483   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:53:54.058508   48766 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 19:53:54.058515   48766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:53:54.058564   48766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 19:53:54.058637   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:53:54.058668   48766 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 19:53:54.058677   48766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:53:54.058713   48766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 19:53:54.058785   48766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.multinode-197790 san=[127.0.0.1 192.168.39.245 localhost minikube multinode-197790]
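
The provision step above issues a server certificate whose SAN list covers the loopback address, the VM IP, and the machine names. A rough sketch of producing a certificate with that SAN set using Go's crypto/x509; for brevity it is self-signed here, whereas minikube signs with the ca.pem/ca-key.pem pair named in the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-197790"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN list taken from the provision log line above.
            DNSNames:    []string{"localhost", "minikube", "multinode-197790"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.245")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
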
	I0829 19:53:54.148397   48766 provision.go:177] copyRemoteCerts
	I0829 19:53:54.148471   48766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:53:54.148499   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:53:54.151138   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.151520   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:53:54.151565   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.151743   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:53:54.151930   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:54.152098   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:53:54.152212   48766 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/multinode-197790/id_rsa Username:docker}
	I0829 19:53:54.240090   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 19:53:54.240157   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 19:53:54.266933   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 19:53:54.267007   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0829 19:53:54.292415   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 19:53:54.292490   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:53:54.317048   48766 provision.go:87] duration metric: took 264.744034ms to configureAuth
	I0829 19:53:54.317072   48766 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:53:54.317297   48766 config.go:182] Loaded profile config "multinode-197790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:53:54.317373   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:53:54.319882   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.320196   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:53:54.320222   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.320400   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:53:54.320583   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:54.320738   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:54.320920   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:53:54.321103   48766 main.go:141] libmachine: Using SSH client type: native
	I0829 19:53:54.321261   48766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0829 19:53:54.321274   48766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:55:25.031749   48766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:55:25.031776   48766 machine.go:96] duration metric: took 1m31.349092982s to provisionDockerMachine
	I0829 19:55:25.031792   48766 start.go:293] postStartSetup for "multinode-197790" (driver="kvm2")
	I0829 19:55:25.031805   48766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:55:25.031819   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:55:25.032216   48766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:55:25.032246   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:55:25.035315   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.035817   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:55:25.035846   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.035964   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:55:25.036193   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:55:25.036351   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:55:25.036526   48766 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/multinode-197790/id_rsa Username:docker}
	I0829 19:55:25.122645   48766 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:55:25.126815   48766 command_runner.go:130] > NAME=Buildroot
	I0829 19:55:25.126833   48766 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0829 19:55:25.126838   48766 command_runner.go:130] > ID=buildroot
	I0829 19:55:25.126842   48766 command_runner.go:130] > VERSION_ID=2023.02.9
	I0829 19:55:25.126846   48766 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0829 19:55:25.126934   48766 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:55:25.126953   48766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 19:55:25.127028   48766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 19:55:25.127130   48766 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 19:55:25.127143   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /etc/ssl/certs/183612.pem
	I0829 19:55:25.127235   48766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:55:25.136368   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:55:25.160988   48766 start.go:296] duration metric: took 129.181338ms for postStartSetup
	I0829 19:55:25.161024   48766 fix.go:56] duration metric: took 1m31.498559008s for fixHost
	I0829 19:55:25.161043   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:55:25.163695   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.164088   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:55:25.164122   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.164262   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:55:25.164468   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:55:25.164610   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:55:25.164759   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:55:25.164910   48766 main.go:141] libmachine: Using SSH client type: native
	I0829 19:55:25.165099   48766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0829 19:55:25.165111   48766 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:55:25.275508   48766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724961325.255475484
	
	I0829 19:55:25.275526   48766 fix.go:216] guest clock: 1724961325.255475484
	I0829 19:55:25.275532   48766 fix.go:229] Guest: 2024-08-29 19:55:25.255475484 +0000 UTC Remote: 2024-08-29 19:55:25.161028417 +0000 UTC m=+91.620924107 (delta=94.447067ms)
	I0829 19:55:25.275570   48766 fix.go:200] guest clock delta is within tolerance: 94.447067ms
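
fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the drift if the delta is small. A sketch of that comparison using the values logged above; the 2s tolerance is an assumption for illustration, not minikube's exact constant:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts `date +%s.%N` output into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // Right-pad the fraction to 9 digits so "25" means 250ms, not 25ns.
            frac := (parts[1] + "000000000")[:9]
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := parseGuestClock("1724961325.255475484")
        remote := guest.Add(-94447067 * time.Nanosecond) // host-side timestamp per the logged delta
        delta := guest.Sub(remote)
        const tolerance = 2 * time.Second // assumed threshold for illustration
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
    }
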
	I0829 19:55:25.275581   48766 start.go:83] releasing machines lock for "multinode-197790", held for 1m31.613132151s
	I0829 19:55:25.275611   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:55:25.275874   48766 main.go:141] libmachine: (multinode-197790) Calling .GetIP
	I0829 19:55:25.278423   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.278748   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:55:25.278774   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.278870   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:55:25.279359   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:55:25.279604   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:55:25.279697   48766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:55:25.279725   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:55:25.279807   48766 ssh_runner.go:195] Run: cat /version.json
	I0829 19:55:25.279829   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:55:25.282233   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.282351   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.282677   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:55:25.282720   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.282751   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:55:25.282779   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.282947   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:55:25.282964   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:55:25.283126   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:55:25.283132   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:55:25.283287   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:55:25.283348   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:55:25.283454   48766 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/multinode-197790/id_rsa Username:docker}
	I0829 19:55:25.283541   48766 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/multinode-197790/id_rsa Username:docker}
	I0829 19:55:25.385741   48766 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0829 19:55:25.386447   48766 command_runner.go:130] > {"iso_version": "v1.33.1-1724862017-19530", "kicbase_version": "v0.0.44-1724775115-19521", "minikube_version": "v1.33.1", "commit": "0ce952d110f81b7b94ba20c385955675855b59fb"}
	I0829 19:55:25.386600   48766 ssh_runner.go:195] Run: systemctl --version
	I0829 19:55:25.392176   48766 command_runner.go:130] > systemd 252 (252)
	I0829 19:55:25.392216   48766 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0829 19:55:25.392495   48766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:55:25.560917   48766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0829 19:55:25.566779   48766 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0829 19:55:25.566954   48766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:55:25.567006   48766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:55:25.575941   48766 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0829 19:55:25.575958   48766 start.go:495] detecting cgroup driver to use...
	I0829 19:55:25.576027   48766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:55:25.591692   48766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:55:25.604945   48766 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:55:25.604982   48766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:55:25.617955   48766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:55:25.631228   48766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:55:25.770427   48766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:55:25.909041   48766 docker.go:233] disabling docker service ...
	I0829 19:55:25.909112   48766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:55:25.925274   48766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:55:25.938514   48766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:55:26.076421   48766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:55:26.215564   48766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:55:26.229329   48766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:55:26.249041   48766 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0829 19:55:26.249090   48766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:55:26.249145   48766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:55:26.259327   48766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:55:26.259382   48766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:55:26.269518   48766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:55:26.279712   48766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:55:26.289841   48766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:55:26.300229   48766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:55:26.310109   48766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:55:26.321138   48766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:55:26.331252   48766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:55:26.340209   48766 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0829 19:55:26.340402   48766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:55:26.349423   48766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:55:26.480614   48766 ssh_runner.go:195] Run: sudo systemctl restart crio
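
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, reset conmon_cgroup, and open unprivileged ports via default_sysctls. A sketch of the same substitutions applied to an in-memory copy of the config (an illustration, not minikube's implementation):

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf applies the edits from the sed pipeline logged above.
    func rewriteCrioConf(conf string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
            ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
        // Ensure a default_sysctls block exists, then insert the sysctl entry.
        if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
            conf += "\ndefault_sysctls = [\n]\n"
        }
        conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
            ReplaceAllString(conf, "$0\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
        return conf
    }

    func main() {
        fmt.Print(rewriteCrioConf("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"))
    }
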
	I0829 19:55:26.677782   48766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:55:26.677847   48766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:55:26.682832   48766 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0829 19:55:26.682849   48766 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0829 19:55:26.682861   48766 command_runner.go:130] > Device: 0,22	Inode: 1322        Links: 1
	I0829 19:55:26.682870   48766 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0829 19:55:26.682877   48766 command_runner.go:130] > Access: 2024-08-29 19:55:26.553972784 +0000
	I0829 19:55:26.682888   48766 command_runner.go:130] > Modify: 2024-08-29 19:55:26.553972784 +0000
	I0829 19:55:26.682895   48766 command_runner.go:130] > Change: 2024-08-29 19:55:26.553972784 +0000
	I0829 19:55:26.682901   48766 command_runner.go:130] >  Birth: -
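
Waiting "60s for socket path" amounts to polling stat until the path shows up with the socket mode bit set, as the stat output above confirms. A small Go sketch of such a poll loop (an illustration, not minikube's actual wait):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists as a unix socket or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for socket %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
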
	I0829 19:55:26.683005   48766 start.go:563] Will wait 60s for crictl version
	I0829 19:55:26.683059   48766 ssh_runner.go:195] Run: which crictl
	I0829 19:55:26.687196   48766 command_runner.go:130] > /usr/bin/crictl
	I0829 19:55:26.687248   48766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:55:26.732137   48766 command_runner.go:130] > Version:  0.1.0
	I0829 19:55:26.732159   48766 command_runner.go:130] > RuntimeName:  cri-o
	I0829 19:55:26.732166   48766 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0829 19:55:26.732173   48766 command_runner.go:130] > RuntimeApiVersion:  v1
	I0829 19:55:26.732244   48766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:55:26.732345   48766 ssh_runner.go:195] Run: crio --version
	I0829 19:55:26.766891   48766 command_runner.go:130] > crio version 1.29.1
	I0829 19:55:26.766913   48766 command_runner.go:130] > Version:        1.29.1
	I0829 19:55:26.766921   48766 command_runner.go:130] > GitCommit:      unknown
	I0829 19:55:26.766927   48766 command_runner.go:130] > GitCommitDate:  unknown
	I0829 19:55:26.766932   48766 command_runner.go:130] > GitTreeState:   clean
	I0829 19:55:26.766940   48766 command_runner.go:130] > BuildDate:      2024-08-28T21:33:51Z
	I0829 19:55:26.766946   48766 command_runner.go:130] > GoVersion:      go1.21.6
	I0829 19:55:26.766953   48766 command_runner.go:130] > Compiler:       gc
	I0829 19:55:26.766960   48766 command_runner.go:130] > Platform:       linux/amd64
	I0829 19:55:26.766969   48766 command_runner.go:130] > Linkmode:       dynamic
	I0829 19:55:26.766978   48766 command_runner.go:130] > BuildTags:      
	I0829 19:55:26.766986   48766 command_runner.go:130] >   containers_image_ostree_stub
	I0829 19:55:26.766996   48766 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0829 19:55:26.767003   48766 command_runner.go:130] >   btrfs_noversion
	I0829 19:55:26.767011   48766 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0829 19:55:26.767019   48766 command_runner.go:130] >   libdm_no_deferred_remove
	I0829 19:55:26.767024   48766 command_runner.go:130] >   seccomp
	I0829 19:55:26.767032   48766 command_runner.go:130] > LDFlags:          unknown
	I0829 19:55:26.767041   48766 command_runner.go:130] > SeccompEnabled:   true
	I0829 19:55:26.767048   48766 command_runner.go:130] > AppArmorEnabled:  false
	I0829 19:55:26.767116   48766 ssh_runner.go:195] Run: crio --version
	I0829 19:55:26.794412   48766 command_runner.go:130] > crio version 1.29.1
	I0829 19:55:26.794431   48766 command_runner.go:130] > Version:        1.29.1
	I0829 19:55:26.794439   48766 command_runner.go:130] > GitCommit:      unknown
	I0829 19:55:26.794445   48766 command_runner.go:130] > GitCommitDate:  unknown
	I0829 19:55:26.794450   48766 command_runner.go:130] > GitTreeState:   clean
	I0829 19:55:26.794458   48766 command_runner.go:130] > BuildDate:      2024-08-28T21:33:51Z
	I0829 19:55:26.794464   48766 command_runner.go:130] > GoVersion:      go1.21.6
	I0829 19:55:26.794469   48766 command_runner.go:130] > Compiler:       gc
	I0829 19:55:26.794475   48766 command_runner.go:130] > Platform:       linux/amd64
	I0829 19:55:26.794481   48766 command_runner.go:130] > Linkmode:       dynamic
	I0829 19:55:26.794489   48766 command_runner.go:130] > BuildTags:      
	I0829 19:55:26.794495   48766 command_runner.go:130] >   containers_image_ostree_stub
	I0829 19:55:26.794502   48766 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0829 19:55:26.794512   48766 command_runner.go:130] >   btrfs_noversion
	I0829 19:55:26.794520   48766 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0829 19:55:26.794529   48766 command_runner.go:130] >   libdm_no_deferred_remove
	I0829 19:55:26.794552   48766 command_runner.go:130] >   seccomp
	I0829 19:55:26.794560   48766 command_runner.go:130] > LDFlags:          unknown
	I0829 19:55:26.794567   48766 command_runner.go:130] > SeccompEnabled:   true
	I0829 19:55:26.794574   48766 command_runner.go:130] > AppArmorEnabled:  false
	I0829 19:55:26.797913   48766 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:55:26.799368   48766 main.go:141] libmachine: (multinode-197790) Calling .GetIP
	I0829 19:55:26.801730   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:26.802036   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:55:26.802056   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:26.802241   48766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:55:26.806604   48766 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0829 19:55:26.806822   48766 kubeadm.go:883] updating cluster {Name:multinode-197790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-197790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.247 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.131 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:55:26.806980   48766 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:55:26.807034   48766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:55:26.854814   48766 command_runner.go:130] > {
	I0829 19:55:26.854833   48766 command_runner.go:130] >   "images": [
	I0829 19:55:26.854838   48766 command_runner.go:130] >     {
	I0829 19:55:26.854846   48766 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0829 19:55:26.854851   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.854860   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0829 19:55:26.854866   48766 command_runner.go:130] >       ],
	I0829 19:55:26.854873   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.854885   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0829 19:55:26.854896   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0829 19:55:26.854900   48766 command_runner.go:130] >       ],
	I0829 19:55:26.854905   48766 command_runner.go:130] >       "size": "87165492",
	I0829 19:55:26.854910   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.854914   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.854920   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.854925   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.854928   48766 command_runner.go:130] >     },
	I0829 19:55:26.854932   48766 command_runner.go:130] >     {
	I0829 19:55:26.854938   48766 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0829 19:55:26.854947   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.854955   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0829 19:55:26.854964   48766 command_runner.go:130] >       ],
	I0829 19:55:26.854971   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.854986   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0829 19:55:26.854993   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0829 19:55:26.854998   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855002   48766 command_runner.go:130] >       "size": "87190579",
	I0829 19:55:26.855008   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.855020   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.855028   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.855036   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.855045   48766 command_runner.go:130] >     },
	I0829 19:55:26.855051   48766 command_runner.go:130] >     {
	I0829 19:55:26.855063   48766 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0829 19:55:26.855072   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.855081   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0829 19:55:26.855088   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855093   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.855100   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0829 19:55:26.855111   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0829 19:55:26.855117   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855124   48766 command_runner.go:130] >       "size": "1363676",
	I0829 19:55:26.855134   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.855144   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.855152   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.855161   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.855170   48766 command_runner.go:130] >     },
	I0829 19:55:26.855178   48766 command_runner.go:130] >     {
	I0829 19:55:26.855186   48766 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0829 19:55:26.855192   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.855201   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0829 19:55:26.855210   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855220   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.855235   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0829 19:55:26.855255   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0829 19:55:26.855264   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855269   48766 command_runner.go:130] >       "size": "31470524",
	I0829 19:55:26.855273   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.855277   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.855282   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.855291   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.855300   48766 command_runner.go:130] >     },
	I0829 19:55:26.855309   48766 command_runner.go:130] >     {
	I0829 19:55:26.855319   48766 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0829 19:55:26.855329   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.855356   48766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0829 19:55:26.855367   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855374   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.855398   48766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0829 19:55:26.855414   48766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0829 19:55:26.855423   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855433   48766 command_runner.go:130] >       "size": "61245718",
	I0829 19:55:26.855441   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.855448   48766 command_runner.go:130] >       "username": "nonroot",
	I0829 19:55:26.855453   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.855463   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.855472   48766 command_runner.go:130] >     },
	I0829 19:55:26.855481   48766 command_runner.go:130] >     {
	I0829 19:55:26.855493   48766 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0829 19:55:26.855502   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.855513   48766 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0829 19:55:26.855522   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855529   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.855537   48766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0829 19:55:26.855551   48766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0829 19:55:26.855560   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855570   48766 command_runner.go:130] >       "size": "149009664",
	I0829 19:55:26.855579   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.855587   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.855596   48766 command_runner.go:130] >       },
	I0829 19:55:26.855605   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.855613   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.855620   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.855624   48766 command_runner.go:130] >     },
	I0829 19:55:26.855635   48766 command_runner.go:130] >     {
	I0829 19:55:26.855649   48766 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0829 19:55:26.855659   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.855669   48766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0829 19:55:26.855677   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855684   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.855699   48766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0829 19:55:26.855709   48766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0829 19:55:26.855720   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855730   48766 command_runner.go:130] >       "size": "95233506",
	I0829 19:55:26.855739   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.855748   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.855754   48766 command_runner.go:130] >       },
	I0829 19:55:26.855763   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.855773   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.855782   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.855789   48766 command_runner.go:130] >     },
	I0829 19:55:26.855793   48766 command_runner.go:130] >     {
	I0829 19:55:26.855802   48766 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0829 19:55:26.855812   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.855824   48766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0829 19:55:26.855833   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855842   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.855864   48766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0829 19:55:26.855875   48766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0829 19:55:26.855883   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855894   48766 command_runner.go:130] >       "size": "89437512",
	I0829 19:55:26.855903   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.855913   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.855921   48766 command_runner.go:130] >       },
	I0829 19:55:26.855930   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.855936   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.855942   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.855947   48766 command_runner.go:130] >     },
	I0829 19:55:26.855952   48766 command_runner.go:130] >     {
	I0829 19:55:26.855959   48766 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0829 19:55:26.855963   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.855970   48766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0829 19:55:26.855975   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855982   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.855993   48766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0829 19:55:26.856004   48766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0829 19:55:26.856018   48766 command_runner.go:130] >       ],
	I0829 19:55:26.856029   48766 command_runner.go:130] >       "size": "92728217",
	I0829 19:55:26.856037   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.856045   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.856049   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.856056   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.856064   48766 command_runner.go:130] >     },
	I0829 19:55:26.856072   48766 command_runner.go:130] >     {
	I0829 19:55:26.856082   48766 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0829 19:55:26.856092   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.856103   48766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0829 19:55:26.856111   48766 command_runner.go:130] >       ],
	I0829 19:55:26.856118   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.856130   48766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0829 19:55:26.856146   48766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0829 19:55:26.856155   48766 command_runner.go:130] >       ],
	I0829 19:55:26.856164   48766 command_runner.go:130] >       "size": "68420936",
	I0829 19:55:26.856173   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.856182   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.856190   48766 command_runner.go:130] >       },
	I0829 19:55:26.856196   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.856205   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.856212   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.856217   48766 command_runner.go:130] >     },
	I0829 19:55:26.856221   48766 command_runner.go:130] >     {
	I0829 19:55:26.856230   48766 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0829 19:55:26.856240   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.856250   48766 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0829 19:55:26.856258   48766 command_runner.go:130] >       ],
	I0829 19:55:26.856268   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.856282   48766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0829 19:55:26.856295   48766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0829 19:55:26.856301   48766 command_runner.go:130] >       ],
	I0829 19:55:26.856305   48766 command_runner.go:130] >       "size": "742080",
	I0829 19:55:26.856310   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.856320   48766 command_runner.go:130] >         "value": "65535"
	I0829 19:55:26.856329   48766 command_runner.go:130] >       },
	I0829 19:55:26.856339   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.856349   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.856359   48766 command_runner.go:130] >       "pinned": true
	I0829 19:55:26.856368   48766 command_runner.go:130] >     }
	I0829 19:55:26.856376   48766 command_runner.go:130] >   ]
	I0829 19:55:26.856383   48766 command_runner.go:130] > }
	I0829 19:55:26.856597   48766 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:55:26.856612   48766 crio.go:433] Images already preloaded, skipping extraction
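
The preload check above runs `sudo crictl images --output json` and scans the repoTags arrays for the images the Kubernetes version needs; only when something is missing does it extract the preload tarball. A sketch of that decode-and-scan, with the struct shape matching the JSON printed above (hasImage is a hypothetical helper for illustration, not minikube's crio.go):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList matches the shape of `crictl images --output json` shown above.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether the runtime already stores the given tag.
    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/pause:3.10")
        fmt.Println(ok, err)
    }
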
	I0829 19:55:26.856665   48766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:55:26.888044   48766 command_runner.go:130] > {
	I0829 19:55:26.888065   48766 command_runner.go:130] >   "images": [
	I0829 19:55:26.888071   48766 command_runner.go:130] >     {
	I0829 19:55:26.888084   48766 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0829 19:55:26.888092   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.888104   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0829 19:55:26.888113   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888119   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.888129   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0829 19:55:26.888138   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0829 19:55:26.888142   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888147   48766 command_runner.go:130] >       "size": "87165492",
	I0829 19:55:26.888151   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.888158   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.888168   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.888178   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.888188   48766 command_runner.go:130] >     },
	I0829 19:55:26.888193   48766 command_runner.go:130] >     {
	I0829 19:55:26.888202   48766 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0829 19:55:26.888206   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.888212   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0829 19:55:26.888216   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888220   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.888227   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0829 19:55:26.888237   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0829 19:55:26.888240   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888248   48766 command_runner.go:130] >       "size": "87190579",
	I0829 19:55:26.888258   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.888272   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.888281   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.888290   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.888299   48766 command_runner.go:130] >     },
	I0829 19:55:26.888304   48766 command_runner.go:130] >     {
	I0829 19:55:26.888312   48766 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0829 19:55:26.888318   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.888323   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0829 19:55:26.888331   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888341   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.888355   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0829 19:55:26.888370   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0829 19:55:26.888379   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888389   48766 command_runner.go:130] >       "size": "1363676",
	I0829 19:55:26.888399   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.888407   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.888411   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.888420   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.888429   48766 command_runner.go:130] >     },
	I0829 19:55:26.888438   48766 command_runner.go:130] >     {
	I0829 19:55:26.888450   48766 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0829 19:55:26.888459   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.888471   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0829 19:55:26.888476   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888485   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.888494   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0829 19:55:26.888513   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0829 19:55:26.888522   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888529   48766 command_runner.go:130] >       "size": "31470524",
	I0829 19:55:26.888536   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.888546   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.888554   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.888564   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.888571   48766 command_runner.go:130] >     },
	I0829 19:55:26.888575   48766 command_runner.go:130] >     {
	I0829 19:55:26.888587   48766 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0829 19:55:26.888597   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.888608   48766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0829 19:55:26.888614   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888624   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.888647   48766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0829 19:55:26.888659   48766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0829 19:55:26.888666   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888672   48766 command_runner.go:130] >       "size": "61245718",
	I0829 19:55:26.888682   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.888692   48766 command_runner.go:130] >       "username": "nonroot",
	I0829 19:55:26.888701   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.888710   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.888718   48766 command_runner.go:130] >     },
	I0829 19:55:26.888723   48766 command_runner.go:130] >     {
	I0829 19:55:26.888737   48766 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0829 19:55:26.888745   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.888750   48766 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0829 19:55:26.888758   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888767   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.888782   48766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0829 19:55:26.888796   48766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0829 19:55:26.888804   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888813   48766 command_runner.go:130] >       "size": "149009664",
	I0829 19:55:26.888821   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.888829   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.888832   48766 command_runner.go:130] >       },
	I0829 19:55:26.888841   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.888850   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.888860   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.888868   48766 command_runner.go:130] >     },
	I0829 19:55:26.888874   48766 command_runner.go:130] >     {
	I0829 19:55:26.888886   48766 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0829 19:55:26.888895   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.888904   48766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0829 19:55:26.888912   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888916   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.888928   48766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0829 19:55:26.888942   48766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0829 19:55:26.888951   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888958   48766 command_runner.go:130] >       "size": "95233506",
	I0829 19:55:26.888966   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.888973   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.888981   48766 command_runner.go:130] >       },
	I0829 19:55:26.888988   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.888996   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.889000   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.889006   48766 command_runner.go:130] >     },
	I0829 19:55:26.889012   48766 command_runner.go:130] >     {
	I0829 19:55:26.889025   48766 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0829 19:55:26.889032   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.889044   48766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0829 19:55:26.889050   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889057   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.889077   48766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0829 19:55:26.889094   48766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0829 19:55:26.889103   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889110   48766 command_runner.go:130] >       "size": "89437512",
	I0829 19:55:26.889119   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.889125   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.889131   48766 command_runner.go:130] >       },
	I0829 19:55:26.889163   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.889175   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.889181   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.889186   48766 command_runner.go:130] >     },
	I0829 19:55:26.889195   48766 command_runner.go:130] >     {
	I0829 19:55:26.889208   48766 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0829 19:55:26.889217   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.889225   48766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0829 19:55:26.889233   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889239   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.889253   48766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0829 19:55:26.889267   48766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0829 19:55:26.889276   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889282   48766 command_runner.go:130] >       "size": "92728217",
	I0829 19:55:26.889292   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.889302   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.889310   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.889316   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.889324   48766 command_runner.go:130] >     },
	I0829 19:55:26.889330   48766 command_runner.go:130] >     {
	I0829 19:55:26.889339   48766 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0829 19:55:26.889343   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.889351   48766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0829 19:55:26.889360   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889367   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.889381   48766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0829 19:55:26.889396   48766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0829 19:55:26.889404   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889411   48766 command_runner.go:130] >       "size": "68420936",
	I0829 19:55:26.889419   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.889424   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.889429   48766 command_runner.go:130] >       },
	I0829 19:55:26.889435   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.889444   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.889451   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.889459   48766 command_runner.go:130] >     },
	I0829 19:55:26.889465   48766 command_runner.go:130] >     {
	I0829 19:55:26.889480   48766 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0829 19:55:26.889489   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.889496   48766 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0829 19:55:26.889504   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889509   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.889518   48766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0829 19:55:26.889532   48766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0829 19:55:26.889542   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889548   48766 command_runner.go:130] >       "size": "742080",
	I0829 19:55:26.889557   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.889564   48766 command_runner.go:130] >         "value": "65535"
	I0829 19:55:26.889572   48766 command_runner.go:130] >       },
	I0829 19:55:26.889579   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.889588   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.889595   48766 command_runner.go:130] >       "pinned": true
	I0829 19:55:26.889599   48766 command_runner.go:130] >     }
	I0829 19:55:26.889602   48766 command_runner.go:130] >   ]
	I0829 19:55:26.889607   48766 command_runner.go:130] > }
	I0829 19:55:26.889774   48766 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:55:26.889788   48766 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:55:26.889799   48766 kubeadm.go:934] updating node { 192.168.39.245 8443 v1.31.0 crio true true} ...
	I0829 19:55:26.889930   48766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-197790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-197790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:55:26.890014   48766 ssh_runner.go:195] Run: crio config
	I0829 19:55:26.923009   48766 command_runner.go:130] ! time="2024-08-29 19:55:26.903204041Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0829 19:55:26.928762   48766 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0829 19:55:26.942946   48766 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0829 19:55:26.942974   48766 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0829 19:55:26.942983   48766 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0829 19:55:26.942988   48766 command_runner.go:130] > #
	I0829 19:55:26.943000   48766 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0829 19:55:26.943011   48766 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0829 19:55:26.943020   48766 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0829 19:55:26.943030   48766 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0829 19:55:26.943036   48766 command_runner.go:130] > # reload'.
	I0829 19:55:26.943044   48766 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0829 19:55:26.943056   48766 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0829 19:55:26.943067   48766 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0829 19:55:26.943076   48766 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0829 19:55:26.943086   48766 command_runner.go:130] > [crio]
	I0829 19:55:26.943096   48766 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0829 19:55:26.943106   48766 command_runner.go:130] > # containers images, in this directory.
	I0829 19:55:26.943113   48766 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0829 19:55:26.943132   48766 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0829 19:55:26.943143   48766 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0829 19:55:26.943155   48766 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I0829 19:55:26.943165   48766 command_runner.go:130] > # imagestore = ""
	I0829 19:55:26.943174   48766 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0829 19:55:26.943186   48766 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0829 19:55:26.943193   48766 command_runner.go:130] > storage_driver = "overlay"
	I0829 19:55:26.943205   48766 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0829 19:55:26.943218   48766 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0829 19:55:26.943228   48766 command_runner.go:130] > storage_option = [
	I0829 19:55:26.943235   48766 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0829 19:55:26.943242   48766 command_runner.go:130] > ]
	I0829 19:55:26.943253   48766 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0829 19:55:26.943264   48766 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0829 19:55:26.943274   48766 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0829 19:55:26.943290   48766 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0829 19:55:26.943303   48766 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0829 19:55:26.943312   48766 command_runner.go:130] > # always happen on a node reboot
	I0829 19:55:26.943320   48766 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0829 19:55:26.943339   48766 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0829 19:55:26.943352   48766 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0829 19:55:26.943360   48766 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0829 19:55:26.943372   48766 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0829 19:55:26.943387   48766 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0829 19:55:26.943402   48766 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0829 19:55:26.943410   48766 command_runner.go:130] > # internal_wipe = true
	I0829 19:55:26.943421   48766 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0829 19:55:26.943432   48766 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0829 19:55:26.943441   48766 command_runner.go:130] > # internal_repair = false
	I0829 19:55:26.943449   48766 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0829 19:55:26.943462   48766 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0829 19:55:26.943473   48766 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0829 19:55:26.943480   48766 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
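The [crio] storage settings echoed above come straight from the node's generated config. As a minimal sketch (file name and mount point invented for illustration), such defaults could also be overridden with a drop-in rather than by editing the main file, since CRI-O additionally reads /etc/crio/crio.conf.d/:

	# /etc/crio/crio.conf.d/01-storage.conf -- hypothetical drop-in
	[crio]
	root = "/mnt/containers/storage"        # invented mount point
	runroot = "/var/run/containers/storage"
	storage_driver = "overlay"
	storage_option = [
		"overlay.mountopt=nodev,metacopy=on",
	]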
	I0829 19:55:26.943492   48766 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0829 19:55:26.943500   48766 command_runner.go:130] > [crio.api]
	I0829 19:55:26.943507   48766 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0829 19:55:26.943516   48766 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0829 19:55:26.943524   48766 command_runner.go:130] > # IP address on which the stream server will listen.
	I0829 19:55:26.943533   48766 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0829 19:55:26.943543   48766 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0829 19:55:26.943551   48766 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0829 19:55:26.943556   48766 command_runner.go:130] > # stream_port = "0"
	I0829 19:55:26.943563   48766 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0829 19:55:26.943571   48766 command_runner.go:130] > # stream_enable_tls = false
	I0829 19:55:26.943594   48766 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0829 19:55:26.943599   48766 command_runner.go:130] > # stream_idle_timeout = ""
	I0829 19:55:26.943605   48766 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0829 19:55:26.943611   48766 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0829 19:55:26.943617   48766 command_runner.go:130] > # minutes.
	I0829 19:55:26.943621   48766 command_runner.go:130] > # stream_tls_cert = ""
	I0829 19:55:26.943627   48766 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0829 19:55:26.943635   48766 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0829 19:55:26.943639   48766 command_runner.go:130] > # stream_tls_key = ""
	I0829 19:55:26.943647   48766 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0829 19:55:26.943653   48766 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0829 19:55:26.943670   48766 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0829 19:55:26.943677   48766 command_runner.go:130] > # stream_tls_ca = ""
	I0829 19:55:26.943684   48766 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0829 19:55:26.943691   48766 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0829 19:55:26.943698   48766 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0829 19:55:26.943704   48766 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
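The generated config lowers both grpc limits from the documented 80 * 1024 * 1024 default to 16777216 bytes (16 MiB). The commented stream TLS knobs in the same [crio.api] table fit together as in the sketch below; every path is hypothetical:

	[crio.api]
	stream_enable_tls = true
	stream_tls_cert = "/etc/crio/tls/stream.crt"    # hypothetical; re-read within ~5 minutes of change
	stream_tls_key = "/etc/crio/tls/stream.key"     # hypothetical
	stream_tls_ca = "/etc/crio/tls/stream-ca.crt"   # hypothetical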
	I0829 19:55:26.943711   48766 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0829 19:55:26.943718   48766 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0829 19:55:26.943722   48766 command_runner.go:130] > [crio.runtime]
	I0829 19:55:26.943730   48766 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0829 19:55:26.943735   48766 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0829 19:55:26.943741   48766 command_runner.go:130] > # "nofile=1024:2048"
	I0829 19:55:26.943747   48766 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0829 19:55:26.943753   48766 command_runner.go:130] > # default_ulimits = [
	I0829 19:55:26.943756   48766 command_runner.go:130] > # ]
	I0829 19:55:26.943761   48766 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0829 19:55:26.943767   48766 command_runner.go:130] > # no_pivot = false
	I0829 19:55:26.943773   48766 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0829 19:55:26.943779   48766 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0829 19:55:26.943784   48766 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0829 19:55:26.943790   48766 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0829 19:55:26.943797   48766 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0829 19:55:26.943803   48766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0829 19:55:26.943809   48766 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0829 19:55:26.943814   48766 command_runner.go:130] > # Cgroup setting for conmon
	I0829 19:55:26.943824   48766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0829 19:55:26.943830   48766 command_runner.go:130] > conmon_cgroup = "pod"
	I0829 19:55:26.943839   48766 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0829 19:55:26.943843   48766 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0829 19:55:26.943852   48766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0829 19:55:26.943856   48766 command_runner.go:130] > conmon_env = [
	I0829 19:55:26.943862   48766 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0829 19:55:26.943866   48766 command_runner.go:130] > ]
	I0829 19:55:26.943871   48766 command_runner.go:130] > # Additional environment variables to set for all the
	I0829 19:55:26.943878   48766 command_runner.go:130] > # containers. These are overridden if set in the
	I0829 19:55:26.943884   48766 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0829 19:55:26.943889   48766 command_runner.go:130] > # default_env = [
	I0829 19:55:26.943893   48766 command_runner.go:130] > # ]
	I0829 19:55:26.943898   48766 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0829 19:55:26.943906   48766 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0829 19:55:26.943912   48766 command_runner.go:130] > # selinux = false
	I0829 19:55:26.943918   48766 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0829 19:55:26.943927   48766 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0829 19:55:26.943933   48766 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0829 19:55:26.943937   48766 command_runner.go:130] > # seccomp_profile = ""
	I0829 19:55:26.943942   48766 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0829 19:55:26.943950   48766 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0829 19:55:26.943964   48766 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0829 19:55:26.943971   48766 command_runner.go:130] > # which might increase security.
	I0829 19:55:26.943975   48766 command_runner.go:130] > # This option is currently deprecated,
	I0829 19:55:26.943980   48766 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0829 19:55:26.943987   48766 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0829 19:55:26.943993   48766 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0829 19:55:26.944001   48766 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0829 19:55:26.944007   48766 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0829 19:55:26.944015   48766 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0829 19:55:26.944019   48766 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:55:26.944026   48766 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0829 19:55:26.944031   48766 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0829 19:55:26.944037   48766 command_runner.go:130] > # the cgroup blockio controller.
	I0829 19:55:26.944043   48766 command_runner.go:130] > # blockio_config_file = ""
	I0829 19:55:26.944052   48766 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0829 19:55:26.944055   48766 command_runner.go:130] > # blockio parameters.
	I0829 19:55:26.944059   48766 command_runner.go:130] > # blockio_reload = false
	I0829 19:55:26.944065   48766 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0829 19:55:26.944073   48766 command_runner.go:130] > # irqbalance daemon.
	I0829 19:55:26.944078   48766 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0829 19:55:26.944084   48766 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0829 19:55:26.944092   48766 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0829 19:55:26.944099   48766 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0829 19:55:26.944107   48766 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0829 19:55:26.944114   48766 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0829 19:55:26.944121   48766 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:55:26.944126   48766 command_runner.go:130] > # rdt_config_file = ""
	I0829 19:55:26.944133   48766 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0829 19:55:26.944137   48766 command_runner.go:130] > cgroup_manager = "cgroupfs"
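This node runs with the cgroupfs manager. On a systemd-managed host the usual alternative looks like the sketch below; note that the kubelet's cgroup driver has to agree with whatever CRI-O uses:

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"  # a systemd slice; "pod" is the cgroupfs counterpart seen above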
	I0829 19:55:26.944183   48766 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0829 19:55:26.944197   48766 command_runner.go:130] > # separate_pull_cgroup = ""
	I0829 19:55:26.944206   48766 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0829 19:55:26.944219   48766 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0829 19:55:26.944228   48766 command_runner.go:130] > # will be added.
	I0829 19:55:26.944238   48766 command_runner.go:130] > # default_capabilities = [
	I0829 19:55:26.944244   48766 command_runner.go:130] > # 	"CHOWN",
	I0829 19:55:26.944251   48766 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0829 19:55:26.944255   48766 command_runner.go:130] > # 	"FSETID",
	I0829 19:55:26.944258   48766 command_runner.go:130] > # 	"FOWNER",
	I0829 19:55:26.944264   48766 command_runner.go:130] > # 	"SETGID",
	I0829 19:55:26.944268   48766 command_runner.go:130] > # 	"SETUID",
	I0829 19:55:26.944271   48766 command_runner.go:130] > # 	"SETPCAP",
	I0829 19:55:26.944276   48766 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0829 19:55:26.944280   48766 command_runner.go:130] > # 	"KILL",
	I0829 19:55:26.944283   48766 command_runner.go:130] > # ]
	I0829 19:55:26.944291   48766 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0829 19:55:26.944300   48766 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0829 19:55:26.944304   48766 command_runner.go:130] > # add_inheritable_capabilities = false
	I0829 19:55:26.944310   48766 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0829 19:55:26.944317   48766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0829 19:55:26.944325   48766 command_runner.go:130] > default_sysctls = [
	I0829 19:55:26.944329   48766 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0829 19:55:26.944335   48766 command_runner.go:130] > ]
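Per the comments above, an explicit default_capabilities list defines the entire default set, so a hardened profile must spell out everything it keeps. A sketch that drops KILL while preserving the sysctl this node already sets:

	[crio.runtime]
	default_capabilities = [
		"CHOWN",
		"DAC_OVERRIDE",
		"FSETID",
		"FOWNER",
		"SETGID",
		"SETUID",
		"SETPCAP",
		"NET_BIND_SERVICE",
	]
	default_sysctls = [
		"net.ipv4.ip_unprivileged_port_start=0",
	]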
	I0829 19:55:26.944339   48766 command_runner.go:130] > # List of devices on the host that a
	I0829 19:55:26.944345   48766 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0829 19:55:26.944350   48766 command_runner.go:130] > # allowed_devices = [
	I0829 19:55:26.944354   48766 command_runner.go:130] > # 	"/dev/fuse",
	I0829 19:55:26.944359   48766 command_runner.go:130] > # ]
	I0829 19:55:26.944364   48766 command_runner.go:130] > # List of additional devices, specified as
	I0829 19:55:26.944371   48766 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0829 19:55:26.944378   48766 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0829 19:55:26.944384   48766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0829 19:55:26.944390   48766 command_runner.go:130] > # additional_devices = [
	I0829 19:55:26.944392   48766 command_runner.go:130] > # ]
	I0829 19:55:26.944398   48766 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0829 19:55:26.944403   48766 command_runner.go:130] > # cdi_spec_dirs = [
	I0829 19:55:26.944407   48766 command_runner.go:130] > # 	"/etc/cdi",
	I0829 19:55:26.944411   48766 command_runner.go:130] > # 	"/var/run/cdi",
	I0829 19:55:26.944414   48766 command_runner.go:130] > # ]
	I0829 19:55:26.944419   48766 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0829 19:55:26.944427   48766 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0829 19:55:26.944431   48766 command_runner.go:130] > # Defaults to false.
	I0829 19:55:26.944436   48766 command_runner.go:130] > # device_ownership_from_security_context = false
	I0829 19:55:26.944444   48766 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0829 19:55:26.944450   48766 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0829 19:55:26.944453   48766 command_runner.go:130] > # hooks_dir = [
	I0829 19:55:26.944457   48766 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0829 19:55:26.944461   48766 command_runner.go:130] > # ]
	I0829 19:55:26.944466   48766 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0829 19:55:26.944474   48766 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0829 19:55:26.944479   48766 command_runner.go:130] > # its default mounts from the following two files:
	I0829 19:55:26.944484   48766 command_runner.go:130] > #
	I0829 19:55:26.944490   48766 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0829 19:55:26.944498   48766 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0829 19:55:26.944503   48766 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0829 19:55:26.944510   48766 command_runner.go:130] > #
	I0829 19:55:26.944516   48766 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0829 19:55:26.944524   48766 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0829 19:55:26.944530   48766 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0829 19:55:26.944537   48766 command_runner.go:130] > #      only add mounts it finds in this file.
	I0829 19:55:26.944540   48766 command_runner.go:130] > #
	I0829 19:55:26.944544   48766 command_runner.go:130] > # default_mounts_file = ""
	I0829 19:55:26.944548   48766 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0829 19:55:26.944557   48766 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0829 19:55:26.944561   48766 command_runner.go:130] > pids_limit = 1024
	I0829 19:55:26.944566   48766 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0829 19:55:26.944577   48766 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0829 19:55:26.944585   48766 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0829 19:55:26.944592   48766 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0829 19:55:26.944610   48766 command_runner.go:130] > # log_size_max = -1
	I0829 19:55:26.944621   48766 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0829 19:55:26.944626   48766 command_runner.go:130] > # log_to_journald = false
	I0829 19:55:26.944631   48766 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0829 19:55:26.944639   48766 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0829 19:55:26.944644   48766 command_runner.go:130] > # Path to directory for container attach sockets.
	I0829 19:55:26.944648   48766 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0829 19:55:26.944653   48766 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0829 19:55:26.944659   48766 command_runner.go:130] > # bind_mount_prefix = ""
	I0829 19:55:26.944664   48766 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0829 19:55:26.944668   48766 command_runner.go:130] > # read_only = false
	I0829 19:55:26.944674   48766 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0829 19:55:26.944681   48766 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0829 19:55:26.944686   48766 command_runner.go:130] > # live configuration reload.
	I0829 19:55:26.944690   48766 command_runner.go:130] > # log_level = "info"
	I0829 19:55:26.944695   48766 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0829 19:55:26.944702   48766 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:55:26.944706   48766 command_runner.go:130] > # log_filter = ""
	I0829 19:55:26.944714   48766 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0829 19:55:26.944722   48766 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0829 19:55:26.944727   48766 command_runner.go:130] > # separated by comma.
	I0829 19:55:26.944735   48766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:55:26.944742   48766 command_runner.go:130] > # uid_mappings = ""
	I0829 19:55:26.944748   48766 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0829 19:55:26.944754   48766 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0829 19:55:26.944760   48766 command_runner.go:130] > # separated by comma.
	I0829 19:55:26.944768   48766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:55:26.944774   48766 command_runner.go:130] > # gid_mappings = ""
	I0829 19:55:26.944780   48766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0829 19:55:26.944786   48766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0829 19:55:26.944792   48766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0829 19:55:26.944802   48766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:55:26.944807   48766 command_runner.go:130] > # minimum_mappable_uid = -1
	I0829 19:55:26.944813   48766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0829 19:55:26.944821   48766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0829 19:55:26.944828   48766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0829 19:55:26.944838   48766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:55:26.944842   48766 command_runner.go:130] > # minimum_mappable_gid = -1
	I0829 19:55:26.944847   48766 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0829 19:55:26.944855   48766 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0829 19:55:26.944861   48766 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0829 19:55:26.944867   48766 command_runner.go:130] > # ctr_stop_timeout = 30
	I0829 19:55:26.944872   48766 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0829 19:55:26.944880   48766 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0829 19:55:26.944885   48766 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0829 19:55:26.944890   48766 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0829 19:55:26.944894   48766 command_runner.go:130] > drop_infra_ctr = false
	I0829 19:55:26.944901   48766 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0829 19:55:26.944906   48766 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0829 19:55:26.944915   48766 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0829 19:55:26.944920   48766 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0829 19:55:26.944929   48766 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0829 19:55:26.944934   48766 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0829 19:55:26.944942   48766 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0829 19:55:26.944947   48766 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0829 19:55:26.944952   48766 command_runner.go:130] > # shared_cpuset = ""
	I0829 19:55:26.944958   48766 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0829 19:55:26.944965   48766 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0829 19:55:26.944970   48766 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0829 19:55:26.944979   48766 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0829 19:55:26.944984   48766 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0829 19:55:26.944989   48766 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0829 19:55:26.944997   48766 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0829 19:55:26.945001   48766 command_runner.go:130] > # enable_criu_support = false
	I0829 19:55:26.945008   48766 command_runner.go:130] > # Enable/disable the generation of the container,
	I0829 19:55:26.945016   48766 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0829 19:55:26.945022   48766 command_runner.go:130] > # enable_pod_events = false
	I0829 19:55:26.945027   48766 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0829 19:55:26.945038   48766 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0829 19:55:26.945044   48766 command_runner.go:130] > # default_runtime = "runc"
	I0829 19:55:26.945049   48766 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0829 19:55:26.945056   48766 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I0829 19:55:26.945066   48766 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0829 19:55:26.945074   48766 command_runner.go:130] > # creation as a file is not desired either.
	I0829 19:55:26.945082   48766 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0829 19:55:26.945089   48766 command_runner.go:130] > # the hostname is being managed dynamically.
	I0829 19:55:26.945094   48766 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0829 19:55:26.945099   48766 command_runner.go:130] > # ]
	I0829 19:55:26.945106   48766 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0829 19:55:26.945113   48766 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0829 19:55:26.945119   48766 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0829 19:55:26.945126   48766 command_runner.go:130] > # Each entry in the table should follow the format:
	I0829 19:55:26.945129   48766 command_runner.go:130] > #
	I0829 19:55:26.945133   48766 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0829 19:55:26.945137   48766 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0829 19:55:26.945160   48766 command_runner.go:130] > # runtime_type = "oci"
	I0829 19:55:26.945171   48766 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0829 19:55:26.945181   48766 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0829 19:55:26.945190   48766 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0829 19:55:26.945198   48766 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0829 19:55:26.945206   48766 command_runner.go:130] > # monitor_env = []
	I0829 19:55:26.945218   48766 command_runner.go:130] > # privileged_without_host_devices = false
	I0829 19:55:26.945227   48766 command_runner.go:130] > # allowed_annotations = []
	I0829 19:55:26.945236   48766 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0829 19:55:26.945246   48766 command_runner.go:130] > # Where:
	I0829 19:55:26.945253   48766 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0829 19:55:26.945262   48766 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0829 19:55:26.945269   48766 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0829 19:55:26.945277   48766 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0829 19:55:26.945281   48766 command_runner.go:130] > #   in $PATH.
	I0829 19:55:26.945289   48766 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0829 19:55:26.945294   48766 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0829 19:55:26.945302   48766 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0829 19:55:26.945306   48766 command_runner.go:130] > #   state.
	I0829 19:55:26.945312   48766 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0829 19:55:26.945320   48766 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0829 19:55:26.945326   48766 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0829 19:55:26.945334   48766 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0829 19:55:26.945340   48766 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0829 19:55:26.945348   48766 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0829 19:55:26.945353   48766 command_runner.go:130] > #   The currently recognized values are:
	I0829 19:55:26.945359   48766 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0829 19:55:26.945366   48766 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0829 19:55:26.945374   48766 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0829 19:55:26.945380   48766 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0829 19:55:26.945391   48766 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0829 19:55:26.945399   48766 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0829 19:55:26.945406   48766 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0829 19:55:26.945414   48766 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0829 19:55:26.945420   48766 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0829 19:55:26.945426   48766 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0829 19:55:26.945435   48766 command_runner.go:130] > #   deprecated option "conmon".
	I0829 19:55:26.945442   48766 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0829 19:55:26.945449   48766 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0829 19:55:26.945455   48766 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0829 19:55:26.945462   48766 command_runner.go:130] > #   should be moved to the container's cgroup
	I0829 19:55:26.945469   48766 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0829 19:55:26.945476   48766 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0829 19:55:26.945482   48766 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0829 19:55:26.945492   48766 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0829 19:55:26.945497   48766 command_runner.go:130] > #
	I0829 19:55:26.945502   48766 command_runner.go:130] > # Using the seccomp notifier feature:
	I0829 19:55:26.945507   48766 command_runner.go:130] > #
	I0829 19:55:26.945513   48766 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0829 19:55:26.945523   48766 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0829 19:55:26.945527   48766 command_runner.go:130] > #
	I0829 19:55:26.945533   48766 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0829 19:55:26.945543   48766 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0829 19:55:26.945546   48766 command_runner.go:130] > #
	I0829 19:55:26.945554   48766 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0829 19:55:26.945558   48766 command_runner.go:130] > # feature.
	I0829 19:55:26.945561   48766 command_runner.go:130] > #
	I0829 19:55:26.945566   48766 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0829 19:55:26.945578   48766 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0829 19:55:26.945587   48766 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0829 19:55:26.945593   48766 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0829 19:55:26.945601   48766 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0829 19:55:26.945604   48766 command_runner.go:130] > #
	I0829 19:55:26.945611   48766 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0829 19:55:26.945619   48766 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0829 19:55:26.945622   48766 command_runner.go:130] > #
	I0829 19:55:26.945628   48766 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0829 19:55:26.945636   48766 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0829 19:55:26.945639   48766 command_runner.go:130] > #
	I0829 19:55:26.945645   48766 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0829 19:55:26.945653   48766 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0829 19:55:26.945656   48766 command_runner.go:130] > # limitation.
	I0829 19:55:26.945664   48766 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0829 19:55:26.945669   48766 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0829 19:55:26.945673   48766 command_runner.go:130] > runtime_type = "oci"
	I0829 19:55:26.945678   48766 command_runner.go:130] > runtime_root = "/run/runc"
	I0829 19:55:26.945684   48766 command_runner.go:130] > runtime_config_path = ""
	I0829 19:55:26.945689   48766 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0829 19:55:26.945696   48766 command_runner.go:130] > monitor_cgroup = "pod"
	I0829 19:55:26.945699   48766 command_runner.go:130] > monitor_exec_cgroup = ""
	I0829 19:55:26.945705   48766 command_runner.go:130] > monitor_env = [
	I0829 19:55:26.945713   48766 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0829 19:55:26.945717   48766 command_runner.go:130] > ]
	I0829 19:55:26.945721   48766 command_runner.go:130] > privileged_without_host_devices = false
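The runc table above is the only handler this node defines. Following the format documented earlier in the dump, a second handler would be one more table; the crun binary location below is an assumption about the host image rather than anything taken from this log, and pods would select the handler through a Kubernetes RuntimeClass whose handler field matches the table name:

	# Hypothetical additional handler; /usr/bin/crun is assumed.
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]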
	I0829 19:55:26.945729   48766 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0829 19:55:26.945734   48766 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0829 19:55:26.945743   48766 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0829 19:55:26.945750   48766 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0829 19:55:26.945759   48766 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0829 19:55:26.945765   48766 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0829 19:55:26.945774   48766 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0829 19:55:26.945783   48766 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0829 19:55:26.945788   48766 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0829 19:55:26.945795   48766 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0829 19:55:26.945798   48766 command_runner.go:130] > # Example:
	I0829 19:55:26.945803   48766 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0829 19:55:26.945808   48766 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0829 19:55:26.945812   48766 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0829 19:55:26.945817   48766 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0829 19:55:26.945820   48766 command_runner.go:130] > # cpuset = 0
	I0829 19:55:26.945824   48766 command_runner.go:130] > # cpushares = "0-1"
	I0829 19:55:26.945828   48766 command_runner.go:130] > # Where:
	I0829 19:55:26.945832   48766 command_runner.go:130] > # The workload name is workload-type.
	I0829 19:55:26.945838   48766 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0829 19:55:26.945843   48766 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0829 19:55:26.945848   48766 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0829 19:55:26.945855   48766 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0829 19:55:26.945861   48766 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
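Assembling the commented fragments above into one concrete workloads entry gives roughly the sketch below. The sample's value types read as transposed (cpuset is a CPU-list string elsewhere in this file, cpushares a share count), so the sketch uses the more plausible types; verify against crio.conf(5) before relying on it:

	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	cpuset = "0-1"    # CPU list, as with infra_ctr_cpuset above
	cpushares = 1024  # plain share count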
	I0829 19:55:26.945865   48766 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0829 19:55:26.945871   48766 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0829 19:55:26.945875   48766 command_runner.go:130] > # Default value is set to true
	I0829 19:55:26.945879   48766 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0829 19:55:26.945884   48766 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0829 19:55:26.945889   48766 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0829 19:55:26.945893   48766 command_runner.go:130] > # Default value is set to 'false'
	I0829 19:55:26.945897   48766 command_runner.go:130] > # disable_hostport_mapping = false
	I0829 19:55:26.945903   48766 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0829 19:55:26.945906   48766 command_runner.go:130] > #
	I0829 19:55:26.945912   48766 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0829 19:55:26.945918   48766 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0829 19:55:26.945924   48766 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0829 19:55:26.945929   48766 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0829 19:55:26.945934   48766 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0829 19:55:26.945938   48766 command_runner.go:130] > [crio.image]
	I0829 19:55:26.945943   48766 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0829 19:55:26.945947   48766 command_runner.go:130] > # default_transport = "docker://"
	I0829 19:55:26.945952   48766 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0829 19:55:26.945958   48766 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0829 19:55:26.945964   48766 command_runner.go:130] > # global_auth_file = ""
	I0829 19:55:26.945969   48766 command_runner.go:130] > # The image used to instantiate infra containers.
	I0829 19:55:26.945973   48766 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:55:26.945977   48766 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0829 19:55:26.945983   48766 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0829 19:55:26.945988   48766 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0829 19:55:26.945993   48766 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:55:26.945997   48766 command_runner.go:130] > # pause_image_auth_file = ""
	I0829 19:55:26.946002   48766 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0829 19:55:26.946008   48766 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0829 19:55:26.946013   48766 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0829 19:55:26.946019   48766 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0829 19:55:26.946022   48766 command_runner.go:130] > # pause_command = "/pause"
	I0829 19:55:26.946027   48766 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0829 19:55:26.946033   48766 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0829 19:55:26.946039   48766 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0829 19:55:26.946046   48766 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0829 19:55:26.946052   48766 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0829 19:55:26.946058   48766 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0829 19:55:26.946065   48766 command_runner.go:130] > # pinned_images = [
	I0829 19:55:26.946068   48766 command_runner.go:130] > # ]
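A hedged sketch of the pinned_images knob, using CRI-O's drop-in directory (/etc/crio/crio.conf.d); the file name is illustrative, and the image matches the pause_image configured above:

# Sketch: pin the pause image so it is exempt from kubelet image GC.
# The file name 10-pinned-images.conf is arbitrary; restarting crio is
# disruptive on a live node, so treat this as an offline example.
sudo tee /etc/crio/crio.conf.d/10-pinned-images.conf <<'EOF'
[crio.image]
pinned_images = [
    "registry.k8s.io/pause:3.10",
]
EOF
sudo systemctl restart crio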
	I0829 19:55:26.946074   48766 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0829 19:55:26.946080   48766 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0829 19:55:26.946086   48766 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0829 19:55:26.946095   48766 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0829 19:55:26.946100   48766 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0829 19:55:26.946104   48766 command_runner.go:130] > # signature_policy = ""
	I0829 19:55:26.946109   48766 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0829 19:55:26.946119   48766 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0829 19:55:26.946125   48766 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0829 19:55:26.946133   48766 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0829 19:55:26.946139   48766 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0829 19:55:26.946146   48766 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0829 19:55:26.946154   48766 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0829 19:55:26.946167   48766 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0829 19:55:26.946176   48766 command_runner.go:130] > # changing them here.
	I0829 19:55:26.946183   48766 command_runner.go:130] > # insecure_registries = [
	I0829 19:55:26.946188   48766 command_runner.go:130] > # ]
	I0829 19:55:26.946199   48766 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0829 19:55:26.946209   48766 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I0829 19:55:26.946216   48766 command_runner.go:130] > # image_volumes = "mkdir"
	I0829 19:55:26.946226   48766 command_runner.go:130] > # Temporary directory to use for storing big files
	I0829 19:55:26.946234   48766 command_runner.go:130] > # big_files_temporary_dir = ""
	I0829 19:55:26.946246   48766 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0829 19:55:26.946253   48766 command_runner.go:130] > # CNI plugins.
	I0829 19:55:26.946257   48766 command_runner.go:130] > [crio.network]
	I0829 19:55:26.946265   48766 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0829 19:55:26.946271   48766 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0829 19:55:26.946277   48766 command_runner.go:130] > # cni_default_network = ""
	I0829 19:55:26.946282   48766 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0829 19:55:26.946289   48766 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0829 19:55:26.946294   48766 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0829 19:55:26.946300   48766 command_runner.go:130] > # plugin_dirs = [
	I0829 19:55:26.946305   48766 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0829 19:55:26.946310   48766 command_runner.go:130] > # ]
	I0829 19:55:26.946315   48766 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0829 19:55:26.946321   48766 command_runner.go:130] > [crio.metrics]
	I0829 19:55:26.946326   48766 command_runner.go:130] > # Globally enable or disable metrics support.
	I0829 19:55:26.946330   48766 command_runner.go:130] > enable_metrics = true
	I0829 19:55:26.946335   48766 command_runner.go:130] > # Specify enabled metrics collectors.
	I0829 19:55:26.946340   48766 command_runner.go:130] > # Per default all metrics are enabled.
	I0829 19:55:26.946348   48766 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0829 19:55:26.946354   48766 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0829 19:55:26.946362   48766 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0829 19:55:26.946366   48766 command_runner.go:130] > # metrics_collectors = [
	I0829 19:55:26.946373   48766 command_runner.go:130] > # 	"operations",
	I0829 19:55:26.946377   48766 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0829 19:55:26.946381   48766 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0829 19:55:26.946385   48766 command_runner.go:130] > # 	"operations_errors",
	I0829 19:55:26.946389   48766 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0829 19:55:26.946394   48766 command_runner.go:130] > # 	"image_pulls_by_name",
	I0829 19:55:26.946398   48766 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0829 19:55:26.946404   48766 command_runner.go:130] > # 	"image_pulls_failures",
	I0829 19:55:26.946409   48766 command_runner.go:130] > # 	"image_pulls_successes",
	I0829 19:55:26.946415   48766 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0829 19:55:26.946419   48766 command_runner.go:130] > # 	"image_layer_reuse",
	I0829 19:55:26.946424   48766 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0829 19:55:26.946429   48766 command_runner.go:130] > # 	"containers_oom_total",
	I0829 19:55:26.946435   48766 command_runner.go:130] > # 	"containers_oom",
	I0829 19:55:26.946441   48766 command_runner.go:130] > # 	"processes_defunct",
	I0829 19:55:26.946445   48766 command_runner.go:130] > # 	"operations_total",
	I0829 19:55:26.946449   48766 command_runner.go:130] > # 	"operations_latency_seconds",
	I0829 19:55:26.946454   48766 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0829 19:55:26.946460   48766 command_runner.go:130] > # 	"operations_errors_total",
	I0829 19:55:26.946463   48766 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0829 19:55:26.946468   48766 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0829 19:55:26.946472   48766 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0829 19:55:26.946478   48766 command_runner.go:130] > # 	"image_pulls_success_total",
	I0829 19:55:26.946482   48766 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0829 19:55:26.946486   48766 command_runner.go:130] > # 	"containers_oom_count_total",
	I0829 19:55:26.946491   48766 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0829 19:55:26.946497   48766 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0829 19:55:26.946501   48766 command_runner.go:130] > # ]
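With enable_metrics = true above, the collectors listed here are served on metrics_port; a minimal scrape (assuming the default port 9090 from this config and curl on the node) looks like:

# Sketch: confirm CRI-O is exporting Prometheus metrics on the default port.
# Collector names match with or without the "crio_" prefix, per the comments above.
curl -s http://127.0.0.1:9090/metrics | grep -m 5 'crio_operations'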
	I0829 19:55:26.946506   48766 command_runner.go:130] > # The port on which the metrics server will listen.
	I0829 19:55:26.946512   48766 command_runner.go:130] > # metrics_port = 9090
	I0829 19:55:26.946516   48766 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0829 19:55:26.946520   48766 command_runner.go:130] > # metrics_socket = ""
	I0829 19:55:26.946525   48766 command_runner.go:130] > # The certificate for the secure metrics server.
	I0829 19:55:26.946546   48766 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0829 19:55:26.946557   48766 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0829 19:55:26.946563   48766 command_runner.go:130] > # certificate on any modification event.
	I0829 19:55:26.946567   48766 command_runner.go:130] > # metrics_cert = ""
	I0829 19:55:26.946576   48766 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0829 19:55:26.946581   48766 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0829 19:55:26.946586   48766 command_runner.go:130] > # metrics_key = ""
	I0829 19:55:26.946592   48766 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0829 19:55:26.946598   48766 command_runner.go:130] > [crio.tracing]
	I0829 19:55:26.946603   48766 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0829 19:55:26.946610   48766 command_runner.go:130] > # enable_tracing = false
	I0829 19:55:26.946616   48766 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0829 19:55:26.946623   48766 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0829 19:55:26.946630   48766 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0829 19:55:26.946637   48766 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0829 19:55:26.946641   48766 command_runner.go:130] > # CRI-O NRI configuration.
	I0829 19:55:26.946644   48766 command_runner.go:130] > [crio.nri]
	I0829 19:55:26.946648   48766 command_runner.go:130] > # Globally enable or disable NRI.
	I0829 19:55:26.946656   48766 command_runner.go:130] > # enable_nri = false
	I0829 19:55:26.946662   48766 command_runner.go:130] > # NRI socket to listen on.
	I0829 19:55:26.946667   48766 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0829 19:55:26.946674   48766 command_runner.go:130] > # NRI plugin directory to use.
	I0829 19:55:26.946678   48766 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0829 19:55:26.946683   48766 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0829 19:55:26.946690   48766 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0829 19:55:26.946695   48766 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0829 19:55:26.946701   48766 command_runner.go:130] > # nri_disable_connections = false
	I0829 19:55:26.946708   48766 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0829 19:55:26.946715   48766 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0829 19:55:26.946720   48766 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0829 19:55:26.946725   48766 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
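NRI is disabled in this config; a hedged sketch of turning it on via a drop-in (the file name is illustrative, and the socket/plugin paths keep the commented defaults above):

# Sketch: enable NRI with its default socket and plugin directories.
sudo tee /etc/crio/crio.conf.d/20-nri.conf <<'EOF'
[crio.nri]
enable_nri = true
EOF
sudo systemctl restart crio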
	I0829 19:55:26.946730   48766 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0829 19:55:26.946734   48766 command_runner.go:130] > [crio.stats]
	I0829 19:55:26.946740   48766 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0829 19:55:26.946745   48766 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0829 19:55:26.946751   48766 command_runner.go:130] > # stats_collection_period = 0
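The dump above mirrors CRI-O's commented defaults; to see how a node's file differs from the built-in defaults, something like the following works (a sketch, assuming this crio version supports the --default flag of the config subcommand):

# Sketch: diff the on-disk config against CRI-O's built-in defaults.
crio config --default > /tmp/crio-default.conf
sudo diff -u /tmp/crio-default.conf /etc/crio/crio.conf | head -n 40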
	I0829 19:55:26.946864   48766 cni.go:84] Creating CNI manager for ""
	I0829 19:55:26.946874   48766 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0829 19:55:26.946885   48766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:55:26.946904   48766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.245 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-197790 NodeName:multinode-197790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:55:26.947029   48766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-197790"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
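Before a config like this is handed to kubeadm, it can be sanity-checked on the node. A hedged sketch ("kubeadm config validate" exists in recent kubeadm releases; the path matches the kubeadm.yaml.new written a few lines below):

# Sketch: validate the rendered kubeadm config before it is consumed.
sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new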
	
	I0829 19:55:26.947085   48766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:55:26.958295   48766 command_runner.go:130] > kubeadm
	I0829 19:55:26.958316   48766 command_runner.go:130] > kubectl
	I0829 19:55:26.958322   48766 command_runner.go:130] > kubelet
	I0829 19:55:26.958427   48766 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:55:26.958485   48766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:55:26.968526   48766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0829 19:55:26.984977   48766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:55:27.001098   48766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0829 19:55:27.017278   48766 ssh_runner.go:195] Run: grep 192.168.39.245	control-plane.minikube.internal$ /etc/hosts
	I0829 19:55:27.020920   48766 command_runner.go:130] > 192.168.39.245	control-plane.minikube.internal
	I0829 19:55:27.020975   48766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:55:27.156038   48766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:55:27.171610   48766 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790 for IP: 192.168.39.245
	I0829 19:55:27.171632   48766 certs.go:194] generating shared ca certs ...
	I0829 19:55:27.171651   48766 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:55:27.171844   48766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 19:55:27.171900   48766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 19:55:27.171913   48766 certs.go:256] generating profile certs ...
	I0829 19:55:27.172006   48766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/client.key
	I0829 19:55:27.172086   48766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/apiserver.key.c28e1b40
	I0829 19:55:27.172129   48766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/proxy-client.key
	I0829 19:55:27.172140   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 19:55:27.172153   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 19:55:27.172170   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 19:55:27.172199   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 19:55:27.172218   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 19:55:27.172236   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 19:55:27.172254   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 19:55:27.172270   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 19:55:27.172334   48766 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 19:55:27.172372   48766 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 19:55:27.172383   48766 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 19:55:27.172411   48766 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 19:55:27.172435   48766 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:55:27.172459   48766 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 19:55:27.172497   48766 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:55:27.172525   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /usr/share/ca-certificates/183612.pem
	I0829 19:55:27.172538   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:55:27.172550   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem -> /usr/share/ca-certificates/18361.pem
	I0829 19:55:27.173101   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:55:27.196724   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 19:55:27.219538   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:55:27.242466   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:55:27.266438   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 19:55:27.290051   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:55:27.312847   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:55:27.335624   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:55:27.360765   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 19:55:27.383755   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:55:27.409061   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 19:55:27.432699   48766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:55:27.449324   48766 ssh_runner.go:195] Run: openssl version
	I0829 19:55:27.455310   48766 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0829 19:55:27.455407   48766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 19:55:27.466161   48766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 19:55:27.470528   48766 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 19:55:27.470559   48766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 19:55:27.470608   48766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 19:55:27.475998   48766 command_runner.go:130] > 3ec20f2e
	I0829 19:55:27.476059   48766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:55:27.485202   48766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:55:27.495721   48766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:55:27.499799   48766 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:55:27.499926   48766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:55:27.499961   48766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:55:27.505205   48766 command_runner.go:130] > b5213941
	I0829 19:55:27.505473   48766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:55:27.514997   48766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 19:55:27.536644   48766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 19:55:27.546030   48766 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 19:55:27.553666   48766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 19:55:27.553726   48766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 19:55:27.571612   48766 command_runner.go:130] > 51391683
	I0829 19:55:27.571797   48766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
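The hash-then-symlink sequence above implements OpenSSL's hashed-directory lookup: a CA is found at <subject_hash>.0 under /etc/ssl/certs. The same convention, condensed into a standalone sketch:

# Sketch: install one CA cert under its OpenSSL subject-hash name.
CERT=/usr/share/ca-certificates/minikubeCA.pem   # any PEM-encoded CA cert
HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941, as logged above
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # ".0" = first cert with this hash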
	I0829 19:55:27.587671   48766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:55:27.594459   48766 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:55:27.594480   48766 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0829 19:55:27.594486   48766 command_runner.go:130] > Device: 253,1	Inode: 2103318     Links: 1
	I0829 19:55:27.594493   48766 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0829 19:55:27.594499   48766 command_runner.go:130] > Access: 2024-08-29 19:48:48.214499508 +0000
	I0829 19:55:27.594503   48766 command_runner.go:130] > Modify: 2024-08-29 19:48:48.214499508 +0000
	I0829 19:55:27.594508   48766 command_runner.go:130] > Change: 2024-08-29 19:48:48.214499508 +0000
	I0829 19:55:27.594512   48766 command_runner.go:130] >  Birth: 2024-08-29 19:48:48.214499508 +0000
	I0829 19:55:27.596111   48766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:55:27.604636   48766 command_runner.go:130] > Certificate will not expire
	I0829 19:55:27.605101   48766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:55:27.613528   48766 command_runner.go:130] > Certificate will not expire
	I0829 19:55:27.613695   48766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:55:27.619704   48766 command_runner.go:130] > Certificate will not expire
	I0829 19:55:27.619847   48766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:55:27.630126   48766 command_runner.go:130] > Certificate will not expire
	I0829 19:55:27.630326   48766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:55:27.637169   48766 command_runner.go:130] > Certificate will not expire
	I0829 19:55:27.637373   48766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:55:27.644412   48766 command_runner.go:130] > Certificate will not expire
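openssl's -checkend 86400 exits non-zero if the certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" is its success message. The same checks, condensed into a loop (cert names taken from the runs above):

# Sketch: warn about any control-plane client cert expiring within 24h.
for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
    || echo "WARN: ${c}.crt expires within 24h"
done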
	I0829 19:55:27.644477   48766 kubeadm.go:392] StartCluster: {Name:multinode-197790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-197790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.247 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.131 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:55:27.644592   48766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:55:27.644641   48766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:55:27.728001   48766 command_runner.go:130] > 6416a8f78390db4ac05922480ada0abfe0d1dbe01b32e2153f4cdef264b67a68
	I0829 19:55:27.728023   48766 command_runner.go:130] > fc1b7bdfd5ba3dea8deda18eb3dcf557e0c812d2070d008a379f28ded006055b
	I0829 19:55:27.728032   48766 command_runner.go:130] > 1a811445cc83f2e8b972591baff6b04e9f09f97ba210724a9d8f22c91989fa5d
	I0829 19:55:27.728055   48766 command_runner.go:130] > e676040af61f22a0c5fce59e3da10bb089a18f9ac5e399ff933823362e94206c
	I0829 19:55:27.728063   48766 command_runner.go:130] > 286aa4b9e2fe4ed29522efd5138da93f0d64acbe34bd5ee5d2b1cc688451cca2
	I0829 19:55:27.728072   48766 command_runner.go:130] > 3fa8c0876c238c0b2141bf4cf896f8c40699078db6f5e3d55a209627ea097d1e
	I0829 19:55:27.728080   48766 command_runner.go:130] > 4e91e07c89ec5477b7fdae0dc2885abb50515ad9a90d475f0a4a97b15ec8d337
	I0829 19:55:27.728094   48766 command_runner.go:130] > 8cce534dca4077aaa0eba62fee7ef0b41f7528d06ebd1c677a6741e47b378c6d
	I0829 19:55:27.728122   48766 cri.go:89] found id: "6416a8f78390db4ac05922480ada0abfe0d1dbe01b32e2153f4cdef264b67a68"
	I0829 19:55:27.728134   48766 cri.go:89] found id: "fc1b7bdfd5ba3dea8deda18eb3dcf557e0c812d2070d008a379f28ded006055b"
	I0829 19:55:27.728139   48766 cri.go:89] found id: "1a811445cc83f2e8b972591baff6b04e9f09f97ba210724a9d8f22c91989fa5d"
	I0829 19:55:27.728144   48766 cri.go:89] found id: "e676040af61f22a0c5fce59e3da10bb089a18f9ac5e399ff933823362e94206c"
	I0829 19:55:27.728149   48766 cri.go:89] found id: "286aa4b9e2fe4ed29522efd5138da93f0d64acbe34bd5ee5d2b1cc688451cca2"
	I0829 19:55:27.728156   48766 cri.go:89] found id: "3fa8c0876c238c0b2141bf4cf896f8c40699078db6f5e3d55a209627ea097d1e"
	I0829 19:55:27.728160   48766 cri.go:89] found id: "4e91e07c89ec5477b7fdae0dc2885abb50515ad9a90d475f0a4a97b15ec8d337"
	I0829 19:55:27.728164   48766 cri.go:89] found id: "8cce534dca4077aaa0eba62fee7ef0b41f7528d06ebd1c677a6741e47b378c6d"
	I0829 19:55:27.728169   48766 cri.go:89] found id: ""
	I0829 19:55:27.728207   48766 ssh_runner.go:195] Run: sudo runc list -f json
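The IDs above come from filtering containers by the kube-system namespace label; the exact query minikube ran, plus a table-formatted variant for manual inspection on the node:

# The query minikube ran (IDs only), then a readable variant of the same filter.
sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o table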
	
	
	==> CRI-O <==
	Aug 29 19:57:14 multinode-197790 crio[2740]: time="2024-08-29 19:57:14.876265772Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961434876242507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77df1318-3aa1-4515-9ab0-1724be24cccd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:57:14 multinode-197790 crio[2740]: time="2024-08-29 19:57:14.877382414Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23a25eb9-f385-4202-a46e-5cf663073cb2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:57:14 multinode-197790 crio[2740]: time="2024-08-29 19:57:14.877454237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23a25eb9-f385-4202-a46e-5cf663073cb2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:57:14 multinode-197790 crio[2740]: time="2024-08-29 19:57:14.877984116Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:975d4818818c9de94a7844c12e15f30f5b49c551192ef5e12e28c9156df31b85,PodSandboxId:1180a709ee431eebd683e316be265db41b8e84c1903bf9ad9c4e5516298b45ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724961368242157908,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zglxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf1e6921-597a-4b85-873d-fa6578ac26a7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2286a2b13ba4fc4fb058428b09c9dbca4b3c192c351bfa2a21baeed268364dbb,PodSandboxId:4043b3181266550e7053efa329d8e3f1edd1aae9aa02b04d571da5903cb13699,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724961334681439917,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nbcg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b6fbd7-8314-4621-97b9-33e55ba5797f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9253d9c8d0ea7633c2f3277a98a78bf1fe87a553b56a63453809b5eb587a4af0,PodSandboxId:b360bedefdc4253fee7470b465085edbc8f8fe68b90d4191f4671cf8bb3c0c4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724961334618597859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6qz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d6839e-102f-4f62-a6bb-2973abdbfc39,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9d28238a802bd92017d51fb4a7d86879632675ce30c2f03ef1f53599300697,PodSandboxId:5a345d3556a28af490ec205934f27dadfe48e56251400084e176b19f034bc1c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724961334554823068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61145331-663c-404f-9c46-3eb3bc0cb49a,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9d9f10866771e532eeb4564f1df6410e9ed5256f0527579efd3c90184a0824,PodSandboxId:2f04c9e7f0430c4b4f7bd123485344242d9a9fc2a0abba9595941976acbab6b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724961334525986710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xdb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7f904d-28eb-4936-b01c-0f50d3d9c122,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e97593be7b5873f114cb6dd2ad4caf2b3c2260b0b81177da3b4994bb0e3bd0,PodSandboxId:fd424287e4cdf4554764a657b45f123a87ffe88b5e9e55b8b19c10e6ac55a86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724961330776943964,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3047d2e2afb241cbe7dea32bac3d4e39,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1ca0678b248f6b562721904843ac20264ab8cd7b69b73c49d244b14fc3cdd2a,PodSandboxId:0bac65f2da1ad52d116318976f0b2b2730907a42b4d00366b03f19eefad2b17f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724961330747042460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d42c7ee029acec4350042255b5d743,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9550962f5646b8b1fbccdaafc3c325a4cea964f293e546e4400ed68b2d2c043,PodSandboxId:1cdef8ba76f16bbd7d19e5484b2ef8f8424e1ba38df69a210503054574bc2c3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724961330690182534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880f3a938902567970348e90e40686dd,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a456c840add35f908f85941763797193b2e5ee9b05c1e8c1a705ea1a443aa8f,PodSandboxId:27cc195a9cf0b2cd52c45e2f61eb7c24fd057f52101c135feebf4f10abfd9519,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724961330492106092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f51f296d9b2ed8e98f1266232d7508c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bcbe264234acc65c19b7037165725259fa40f8dff1a5ae8b24e2fc8e3f6adc3,PodSandboxId:27cc195a9cf0b2cd52c45e2f61eb7c24fd057f52101c135feebf4f10abfd9519,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724961327640225575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f51f296d9b2ed8e98f1266232d7508c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a80093540b50a0d494482b179c42034ea5e85fd46b8832d41193499b85012a8,PodSandboxId:61d2696b65c89b335d77db0b7d0e6575a22f33365223f0a4c757ade2400dd3c7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724961011618006152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zglxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf1e6921-597a-4b85-873d-fa6578ac26a7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6416a8f78390db4ac05922480ada0abfe0d1dbe01b32e2153f4cdef264b67a68,PodSandboxId:086d9afeac18d2806c03dd87f24bd2b5f41694e9d4c2e6fb392690ec54b30aa3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724960957883799326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6qz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d6839e-102f-4f62-a6bb-2973abdbfc39,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc1b7bdfd5ba3dea8deda18eb3dcf557e0c812d2070d008a379f28ded006055b,PodSandboxId:c4e2accec55d8288ed373c9ab8f492bf1229b5081140ab181efeef9a03b4e3ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724960957848074658,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61145331-663c-404f-9c46-3eb3bc0cb49a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a811445cc83f2e8b972591baff6b04e9f09f97ba210724a9d8f22c91989fa5d,PodSandboxId:3cbc16c4c4a2b0b42b8700850cec14813eb874de5661b3bb63fc77f2e531ab75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724960945841971568,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nbcg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
0b6fbd7-8314-4621-97b9-33e55ba5797f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e676040af61f22a0c5fce59e3da10bb089a18f9ac5e399ff933823362e94206c,PodSandboxId:ee18d8fb6c071df6376f1a50bfd9ea969a44fe1fb25db3297b0af1f7d4c9aac5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724960943061173837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xdb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7f904d-28eb-4936-b01c-0f50d3d9c12
2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286aa4b9e2fe4ed29522efd5138da93f0d64acbe34bd5ee5d2b1cc688451cca2,PodSandboxId:3477e9724991a2f13d4d8f17c591e732e1d458f3ead28d3135abbfd25e33c6f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724960932246677560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d42c7ee029acec4350
042255b5d743,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e91e07c89ec5477b7fdae0dc2885abb50515ad9a90d475f0a4a97b15ec8d337,PodSandboxId:8ae6b3e132ab34030fa07e45d84cf44a206e3f7b3135a92ee6177fd643ed499e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724960932204997681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3047d2e2afb241cbe7dea32bac3d4e39,},
Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cce534dca4077aaa0eba62fee7ef0b41f7528d06ebd1c677a6741e47b378c6d,PodSandboxId:f63365809437843caca233a45934c73628053b984d171c4bc7fd9ae43363a4bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960932121609997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880f3a938902567970348e90e40686dd,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23a25eb9-f385-4202-a46e-5cf663073cb2 name=/runtime.v1.RuntimeService/ListContainers
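Each pass of the poll above is a CRI client (the kubelet or the log collector) issuing /runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, and an unfiltered /runtime.v1.RuntimeService/ListContainers against CRI-O — the empty filter is why every container, CONTAINER_RUNNING and CONTAINER_EXITED alike, appears in each response. Below is a minimal Go sketch of the same three calls; it is not from the minikube test suite, and the socket path (CRI-O's default /var/run/crio/crio.sock) and the k8s.io/cri-api and google.golang.org/grpc module dependencies are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	cri "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial CRI-O's gRPC endpoint. Path assumed from the CRI-O default;
	// adjust if crio.conf overrides it.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := cri.NewRuntimeServiceClient(conn)
	img := cri.NewImageServiceClient(conn)

	// /runtime.v1.RuntimeService/Version — matches the
	// "RuntimeName:cri-o,RuntimeVersion:1.29.1" responses in the log.
	ver, err := rt.Version(ctx, &cri.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s (API %s)\n",
		ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// /runtime.v1.ImageService/ImageFsInfo — usage of the image store
	// mountpoint (overlay-images above).
	fs, err := img.ImageFsInfo(ctx, &cri.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Printf("image fs %s: %d bytes, %d inodes\n",
			f.FsId.Mountpoint, f.UsedBytes.Value, f.InodesUsed.Value)
	}

	// /runtime.v1.RuntimeService/ListContainers with an empty filter:
	// CRI-O logs "No filters were applied, returning full container list"
	// and returns running and exited containers alike.
	list, err := rt.ListContainers(ctx, &cri.ListContainersRequest{
		Filter: &cri.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%.13s %-24s attempt=%d %s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}

The crictl CLI wraps these same RPCs (e.g. an unfiltered list is crictl ps -a), which is the usual way to reproduce this view on a minikube node by hand.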
	Aug 29 19:57:15 multinode-197790 crio[2740]: time="2024-08-29 19:57:15.012500161Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dbe37ddd-611f-4c6f-98eb-0f0781c3da19 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:57:15 multinode-197790 crio[2740]: time="2024-08-29 19:57:15.012569726Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dbe37ddd-611f-4c6f-98eb-0f0781c3da19 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:57:15 multinode-197790 crio[2740]: time="2024-08-29 19:57:15.014099989Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ede868ff-0e2c-4311-873e-2e59706778d0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:57:15 multinode-197790 crio[2740]: time="2024-08-29 19:57:15.014537924Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961435014507108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ede868ff-0e2c-4311-873e-2e59706778d0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:57:15 multinode-197790 crio[2740]: time="2024-08-29 19:57:15.015040542Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52cfe256-ff4a-47b5-b86e-d3683ba5c1fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:57:15 multinode-197790 crio[2740]: time="2024-08-29 19:57:15.015129878Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52cfe256-ff4a-47b5-b86e-d3683ba5c1fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:57:15 multinode-197790 crio[2740]: time="2024-08-29 19:57:15.015488575Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:975d4818818c9de94a7844c12e15f30f5b49c551192ef5e12e28c9156df31b85,PodSandboxId:1180a709ee431eebd683e316be265db41b8e84c1903bf9ad9c4e5516298b45ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724961368242157908,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zglxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf1e6921-597a-4b85-873d-fa6578ac26a7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2286a2b13ba4fc4fb058428b09c9dbca4b3c192c351bfa2a21baeed268364dbb,PodSandboxId:4043b3181266550e7053efa329d8e3f1edd1aae9aa02b04d571da5903cb13699,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724961334681439917,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nbcg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b6fbd7-8314-4621-97b9-33e55ba5797f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9253d9c8d0ea7633c2f3277a98a78bf1fe87a553b56a63453809b5eb587a4af0,PodSandboxId:b360bedefdc4253fee7470b465085edbc8f8fe68b90d4191f4671cf8bb3c0c4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724961334618597859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6qz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d6839e-102f-4f62-a6bb-2973abdbfc39,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9d28238a802bd92017d51fb4a7d86879632675ce30c2f03ef1f53599300697,PodSandboxId:5a345d3556a28af490ec205934f27dadfe48e56251400084e176b19f034bc1c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724961334554823068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61145331-663c-404f-9c46-3eb3bc0cb49a,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9d9f10866771e532eeb4564f1df6410e9ed5256f0527579efd3c90184a0824,PodSandboxId:2f04c9e7f0430c4b4f7bd123485344242d9a9fc2a0abba9595941976acbab6b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724961334525986710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xdb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7f904d-28eb-4936-b01c-0f50d3d9c122,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e97593be7b5873f114cb6dd2ad4caf2b3c2260b0b81177da3b4994bb0e3bd0,PodSandboxId:fd424287e4cdf4554764a657b45f123a87ffe88b5e9e55b8b19c10e6ac55a86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724961330776943964,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3047d2e2afb241cbe7dea32bac3d4e39,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1ca0678b248f6b562721904843ac20264ab8cd7b69b73c49d244b14fc3cdd2a,PodSandboxId:0bac65f2da1ad52d116318976f0b2b2730907a42b4d00366b03f19eefad2b17f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724961330747042460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d42c7ee029acec4350042255b5d743,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9550962f5646b8b1fbccdaafc3c325a4cea964f293e546e4400ed68b2d2c043,PodSandboxId:1cdef8ba76f16bbd7d19e5484b2ef8f8424e1ba38df69a210503054574bc2c3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724961330690182534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880f3a938902567970348e90e40686dd,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a456c840add35f908f85941763797193b2e5ee9b05c1e8c1a705ea1a443aa8f,PodSandboxId:27cc195a9cf0b2cd52c45e2f61eb7c24fd057f52101c135feebf4f10abfd9519,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724961330492106092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f51f296d9b2ed8e98f1266232d7508c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bcbe264234acc65c19b7037165725259fa40f8dff1a5ae8b24e2fc8e3f6adc3,PodSandboxId:27cc195a9cf0b2cd52c45e2f61eb7c24fd057f52101c135feebf4f10abfd9519,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724961327640225575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f51f296d9b2ed8e98f1266232d7508c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a80093540b50a0d494482b179c42034ea5e85fd46b8832d41193499b85012a8,PodSandboxId:61d2696b65c89b335d77db0b7d0e6575a22f33365223f0a4c757ade2400dd3c7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724961011618006152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zglxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf1e6921-597a-4b85-873d-fa6578ac26a7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6416a8f78390db4ac05922480ada0abfe0d1dbe01b32e2153f4cdef264b67a68,PodSandboxId:086d9afeac18d2806c03dd87f24bd2b5f41694e9d4c2e6fb392690ec54b30aa3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724960957883799326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6qz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d6839e-102f-4f62-a6bb-2973abdbfc39,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc1b7bdfd5ba3dea8deda18eb3dcf557e0c812d2070d008a379f28ded006055b,PodSandboxId:c4e2accec55d8288ed373c9ab8f492bf1229b5081140ab181efeef9a03b4e3ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724960957848074658,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61145331-663c-404f-9c46-3eb3bc0cb49a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a811445cc83f2e8b972591baff6b04e9f09f97ba210724a9d8f22c91989fa5d,PodSandboxId:3cbc16c4c4a2b0b42b8700850cec14813eb874de5661b3bb63fc77f2e531ab75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724960945841971568,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nbcg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b6fbd7-8314-4621-97b9-33e55ba5797f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e676040af61f22a0c5fce59e3da10bb089a18f9ac5e399ff933823362e94206c,PodSandboxId:ee18d8fb6c071df6376f1a50bfd9ea969a44fe1fb25db3297b0af1f7d4c9aac5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724960943061173837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xdb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7f904d-28eb-4936-b01c-0f50d3d9c122,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286aa4b9e2fe4ed29522efd5138da93f0d64acbe34bd5ee5d2b1cc688451cca2,PodSandboxId:3477e9724991a2f13d4d8f17c591e732e1d458f3ead28d3135abbfd25e33c6f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724960932246677560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d42c7ee029acec4350042255b5d743,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e91e07c89ec5477b7fdae0dc2885abb50515ad9a90d475f0a4a97b15ec8d337,PodSandboxId:8ae6b3e132ab34030fa07e45d84cf44a206e3f7b3135a92ee6177fd643ed499e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724960932204997681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3047d2e2afb241cbe7dea32bac3d4e39,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cce534dca4077aaa0eba62fee7ef0b41f7528d06ebd1c677a6741e47b378c6d,PodSandboxId:f63365809437843caca233a45934c73628053b984d171c4bc7fd9ae43363a4bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960932121609997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880f3a938902567970348e90e40686dd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52cfe256-ff4a-47b5-b86e-d3683ba5c1fb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	975d4818818c9       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   1180a709ee431       busybox-7dff88458-zglxg
	2286a2b13ba4f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   4043b31812665       kindnet-nbcg8
	9253d9c8d0ea7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   b360bedefdc42       coredns-6f6b679f8f-h6qz7
	ea9d28238a802       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   5a345d3556a28       storage-provisioner
	5f9d9f1086677       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   2f04c9e7f0430       kube-proxy-4xdb6
	b2e97593be7b5       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   fd424287e4cdf       kube-scheduler-multinode-197790
	d1ca0678b248f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   0bac65f2da1ad       kube-controller-manager-multinode-197790
	e9550962f5646       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   1cdef8ba76f16       kube-apiserver-multinode-197790
	3a456c840add3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      2                   27cc195a9cf0b       etcd-multinode-197790
	7bcbe264234ac       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Exited              etcd                      1                   27cc195a9cf0b       etcd-multinode-197790
	7a80093540b50       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   61d2696b65c89       busybox-7dff88458-zglxg
	6416a8f78390d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   086d9afeac18d       coredns-6f6b679f8f-h6qz7
	fc1b7bdfd5ba3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   c4e2accec55d8       storage-provisioner
	1a811445cc83f       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   3cbc16c4c4a2b       kindnet-nbcg8
	e676040af61f2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   ee18d8fb6c071       kube-proxy-4xdb6
	286aa4b9e2fe4       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   3477e9724991a       kube-controller-manager-multinode-197790
	4e91e07c89ec5       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   8ae6b3e132ab3       kube-scheduler-multinode-197790
	8cce534dca407       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   f633658094378       kube-apiserver-multinode-197790
	
	
	==> coredns [6416a8f78390db4ac05922480ada0abfe0d1dbe01b32e2153f4cdef264b67a68] <==
	[INFO] 10.244.0.3:42970 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00177583s
	[INFO] 10.244.0.3:36660 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000065799s
	[INFO] 10.244.0.3:42818 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000180078s
	[INFO] 10.244.0.3:56882 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001313741s
	[INFO] 10.244.0.3:37647 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064097s
	[INFO] 10.244.0.3:59045 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060341s
	[INFO] 10.244.0.3:60585 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072659s
	[INFO] 10.244.1.2:38661 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133921s
	[INFO] 10.244.1.2:36878 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116694s
	[INFO] 10.244.1.2:43938 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080797s
	[INFO] 10.244.1.2:51009 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008309s
	[INFO] 10.244.0.3:44212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117287s
	[INFO] 10.244.0.3:52108 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086192s
	[INFO] 10.244.0.3:39086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061394s
	[INFO] 10.244.0.3:45833 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058547s
	[INFO] 10.244.1.2:40357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117551s
	[INFO] 10.244.1.2:40753 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000165399s
	[INFO] 10.244.1.2:40951 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000142249s
	[INFO] 10.244.1.2:46464 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000402424s
	[INFO] 10.244.0.3:45893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095092s
	[INFO] 10.244.0.3:50987 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000038924s
	[INFO] 10.244.0.3:59934 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000046287s
	[INFO] 10.244.0.3:54998 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000035255s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9253d9c8d0ea7633c2f3277a98a78bf1fe87a553b56a63453809b5eb587a4af0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46610 - 29536 "HINFO IN 4964473242332634487.4415841945061080405. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024884836s
	
	
	==> describe nodes <==
	Name:               multinode-197790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-197790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=multinode-197790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T19_48_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:48:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-197790
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:57:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:55:33 +0000   Thu, 29 Aug 2024 19:48:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:55:33 +0000   Thu, 29 Aug 2024 19:48:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:55:33 +0000   Thu, 29 Aug 2024 19:48:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:55:33 +0000   Thu, 29 Aug 2024 19:49:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    multinode-197790
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c8d670b0b9e84226b2a95a61cdccce2f
	  System UUID:                c8d670b0-b9e8-4226-b2a9-5a61cdccce2f
	  Boot ID:                    b1330017-d725-4ae9-bd6f-50f3ee070d30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zglxg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 coredns-6f6b679f8f-h6qz7                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m13s
	  kube-system                 etcd-multinode-197790                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m18s
	  kube-system                 kindnet-nbcg8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m13s
	  kube-system                 kube-apiserver-multinode-197790             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-controller-manager-multinode-197790    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-proxy-4xdb6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-scheduler-multinode-197790             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m11s                kube-proxy       
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  NodeHasSufficientPID     8m18s                kubelet          Node multinode-197790 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m18s                kubelet          Node multinode-197790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m18s                kubelet          Node multinode-197790 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m18s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m14s                node-controller  Node multinode-197790 event: Registered Node multinode-197790 in Controller
	  Normal  NodeReady                7m58s                kubelet          Node multinode-197790 status is now: NodeReady
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node multinode-197790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node multinode-197790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)  kubelet          Node multinode-197790 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           99s                  node-controller  Node multinode-197790 event: Registered Node multinode-197790 in Controller
	
	
	Name:               multinode-197790-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-197790-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=multinode-197790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_56_15_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:56:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-197790-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:57:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:56:46 +0000   Thu, 29 Aug 2024 19:56:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:56:46 +0000   Thu, 29 Aug 2024 19:56:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:56:46 +0000   Thu, 29 Aug 2024 19:56:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:56:46 +0000   Thu, 29 Aug 2024 19:56:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    multinode-197790-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad44b7f5b2694c2ca34a341f7e607d3b
	  System UUID:                ad44b7f5-b269-4c2c-a34a-341f7e607d3b
	  Boot ID:                    e4a90c07-34f5-485f-9f54-731dcb96caf5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cq9j4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-4rd99              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m27s
	  kube-system                 kube-proxy-s65hg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m21s                  kube-proxy  
	  Normal  Starting                 55s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m27s (x2 over 7m28s)  kubelet     Node multinode-197790-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m27s (x2 over 7m28s)  kubelet     Node multinode-197790-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m27s (x2 over 7m28s)  kubelet     Node multinode-197790-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m7s                   kubelet     Node multinode-197790-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  60s (x2 over 60s)      kubelet     Node multinode-197790-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x2 over 60s)      kubelet     Node multinode-197790-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x2 over 60s)      kubelet     Node multinode-197790-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  60s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-197790-m02 status is now: NodeReady
	
	
	Name:               multinode-197790-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-197790-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=multinode-197790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_56_54_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:56:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-197790-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:57:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:57:12 +0000   Thu, 29 Aug 2024 19:56:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:57:12 +0000   Thu, 29 Aug 2024 19:56:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:57:12 +0000   Thu, 29 Aug 2024 19:56:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:57:12 +0000   Thu, 29 Aug 2024 19:57:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.131
	  Hostname:    multinode-197790-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b5accd3a297410bb9c4a47ffea16c3c
	  System UUID:                9b5accd3-a297-410b-b9c4-a47ffea16c3c
	  Boot ID:                    0c50cb85-e0df-4160-8e72-6c356d9b6ae2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-g2tpt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m32s
	  kube-system                 kube-proxy-6rnwz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m28s                  kube-proxy  
	  Normal  Starting                 17s                    kube-proxy  
	  Normal  Starting                 5m40s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m32s (x2 over 6m33s)  kubelet     Node multinode-197790-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m32s (x2 over 6m33s)  kubelet     Node multinode-197790-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m32s (x2 over 6m33s)  kubelet     Node multinode-197790-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m32s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m15s                  kubelet     Node multinode-197790-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m45s (x2 over 5m45s)  kubelet     Node multinode-197790-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m45s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m45s (x2 over 5m45s)  kubelet     Node multinode-197790-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m45s (x2 over 5m45s)  kubelet     Node multinode-197790-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m26s                  kubelet     Node multinode-197790-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-197790-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-197790-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-197790-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-197790-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.058599] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059415] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.187761] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.126228] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.297147] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +3.881115] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +4.465607] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.060597] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.983475] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[  +0.078083] kauditd_printk_skb: 69 callbacks suppressed
	[Aug29 19:49] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	[  +0.110663] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.814796] kauditd_printk_skb: 60 callbacks suppressed
	[Aug29 19:50] kauditd_printk_skb: 12 callbacks suppressed
	[Aug29 19:55] systemd-fstab-generator[2666]: Ignoring "noauto" option for root device
	[  +0.140899] systemd-fstab-generator[2678]: Ignoring "noauto" option for root device
	[  +0.164421] systemd-fstab-generator[2692]: Ignoring "noauto" option for root device
	[  +0.144738] systemd-fstab-generator[2704]: Ignoring "noauto" option for root device
	[  +0.267430] systemd-fstab-generator[2732]: Ignoring "noauto" option for root device
	[  +0.666252] systemd-fstab-generator[2827]: Ignoring "noauto" option for root device
	[  +2.738623] systemd-fstab-generator[3037]: Ignoring "noauto" option for root device
	[  +0.972157] kauditd_printk_skb: 176 callbacks suppressed
	[  +6.490024] kauditd_printk_skb: 47 callbacks suppressed
	[ +12.562968] systemd-fstab-generator[3863]: Ignoring "noauto" option for root device
	[Aug29 19:56] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [3a456c840add35f908f85941763797193b2e5ee9b05c1e8c1a705ea1a443aa8f] <==
	{"level":"info","ts":"2024-08-29T19:55:31.038401Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8f5341249654324","local-member-id":"c66b2a9605a64cb6","added-peer-id":"c66b2a9605a64cb6","added-peer-peer-urls":["https://192.168.39.245:2380"]}
	{"level":"info","ts":"2024-08-29T19:55:31.038486Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8f5341249654324","local-member-id":"c66b2a9605a64cb6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:55:31.038511Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:55:31.045832Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:55:31.048420Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T19:55:31.048849Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c66b2a9605a64cb6","initial-advertise-peer-urls":["https://192.168.39.245:2380"],"listen-peer-urls":["https://192.168.39.245:2380"],"advertise-client-urls":["https://192.168.39.245:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.245:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T19:55:31.048553Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.245:2380"}
	{"level":"info","ts":"2024-08-29T19:55:31.049695Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.245:2380"}
	{"level":"info","ts":"2024-08-29T19:55:31.049597Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T19:55:32.371009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-29T19:55:32.371168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-29T19:55:32.371214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 received MsgPreVoteResp from c66b2a9605a64cb6 at term 2"}
	{"level":"info","ts":"2024-08-29T19:55:32.371260Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 became candidate at term 3"}
	{"level":"info","ts":"2024-08-29T19:55:32.371288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 received MsgVoteResp from c66b2a9605a64cb6 at term 3"}
	{"level":"info","ts":"2024-08-29T19:55:32.371322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 became leader at term 3"}
	{"level":"info","ts":"2024-08-29T19:55:32.371354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c66b2a9605a64cb6 elected leader c66b2a9605a64cb6 at term 3"}
	{"level":"info","ts":"2024-08-29T19:55:32.373962Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c66b2a9605a64cb6","local-member-attributes":"{Name:multinode-197790 ClientURLs:[https://192.168.39.245:2379]}","request-path":"/0/members/c66b2a9605a64cb6/attributes","cluster-id":"8f5341249654324","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:55:32.374057Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:55:32.374189Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T19:55:32.374226Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T19:55:32.374354Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:55:32.375592Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:55:32.376465Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.245:2379"}
	{"level":"info","ts":"2024-08-29T19:55:32.377400Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:55:32.378238Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [7bcbe264234acc65c19b7037165725259fa40f8dff1a5ae8b24e2fc8e3f6adc3] <==
	{"level":"info","ts":"2024-08-29T19:55:27.811683Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-29T19:55:27.818559Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"8f5341249654324","local-member-id":"c66b2a9605a64cb6","commit-index":929}
	{"level":"info","ts":"2024-08-29T19:55:27.818753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-29T19:55:27.818814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 became follower at term 2"}
	{"level":"info","ts":"2024-08-29T19:55:27.818850Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft c66b2a9605a64cb6 [peers: [], term: 2, commit: 929, applied: 0, lastindex: 929, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-29T19:55:27.820336Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-29T19:55:27.830928Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":854}
	{"level":"info","ts":"2024-08-29T19:55:27.832992Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-29T19:55:27.835815Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"c66b2a9605a64cb6","timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:55:27.836267Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"c66b2a9605a64cb6"}
	{"level":"info","ts":"2024-08-29T19:55:27.836332Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"c66b2a9605a64cb6","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-29T19:55:27.836575Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-29T19:55:27.836770Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T19:55:27.836827Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T19:55:27.836874Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T19:55:27.837431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 switched to configuration voters=(14297568265846017206)"}
	{"level":"info","ts":"2024-08-29T19:55:27.837509Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8f5341249654324","local-member-id":"c66b2a9605a64cb6","added-peer-id":"c66b2a9605a64cb6","added-peer-peer-urls":["https://192.168.39.245:2380"]}
	{"level":"info","ts":"2024-08-29T19:55:27.837643Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8f5341249654324","local-member-id":"c66b2a9605a64cb6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:55:27.837683Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:55:27.842309Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:55:27.844378Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T19:55:27.844607Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c66b2a9605a64cb6","initial-advertise-peer-urls":["https://192.168.39.245:2380"],"listen-peer-urls":["https://192.168.39.245:2380"],"advertise-client-urls":["https://192.168.39.245:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.245:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T19:55:27.844645Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T19:55:27.845995Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.245:2380"}
	{"level":"info","ts":"2024-08-29T19:55:27.846047Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.245:2380"}
	
	
	==> kernel <==
	 19:57:15 up 8 min,  0 users,  load average: 0.23, 0.26, 0.15
	Linux multinode-197790 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1a811445cc83f2e8b972591baff6b04e9f09f97ba210724a9d8f22c91989fa5d] <==
	I0829 19:53:06.968608       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:53:16.968556       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:53:16.968843       1 main.go:299] handling current node
	I0829 19:53:16.968887       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:53:16.968897       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:53:16.969188       1 main.go:295] Handling node with IPs: map[192.168.39.131:{}]
	I0829 19:53:16.969216       1 main.go:322] Node multinode-197790-m03 has CIDR [10.244.3.0/24] 
	I0829 19:53:26.973778       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:53:26.973849       1 main.go:299] handling current node
	I0829 19:53:26.973873       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:53:26.973879       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:53:26.974087       1 main.go:295] Handling node with IPs: map[192.168.39.131:{}]
	I0829 19:53:26.974113       1 main.go:322] Node multinode-197790-m03 has CIDR [10.244.3.0/24] 
	I0829 19:53:36.974005       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:53:36.974122       1 main.go:299] handling current node
	I0829 19:53:36.974163       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:53:36.974182       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:53:36.974363       1 main.go:295] Handling node with IPs: map[192.168.39.131:{}]
	I0829 19:53:36.974420       1 main.go:322] Node multinode-197790-m03 has CIDR [10.244.3.0/24] 
	I0829 19:53:46.975231       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:53:46.975396       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:53:46.975577       1 main.go:295] Handling node with IPs: map[192.168.39.131:{}]
	I0829 19:53:46.975639       1 main.go:322] Node multinode-197790-m03 has CIDR [10.244.3.0/24] 
	I0829 19:53:46.975862       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:53:46.975899       1 main.go:299] handling current node
	
	
	==> kindnet [2286a2b13ba4fc4fb058428b09c9dbca4b3c192c351bfa2a21baeed268364dbb] <==
	I0829 19:56:25.699192       1 main.go:322] Node multinode-197790-m03 has CIDR [10.244.3.0/24] 
	I0829 19:56:35.692829       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:56:35.692885       1 main.go:299] handling current node
	I0829 19:56:35.692904       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:56:35.692911       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:56:35.693097       1 main.go:295] Handling node with IPs: map[192.168.39.131:{}]
	I0829 19:56:35.693132       1 main.go:322] Node multinode-197790-m03 has CIDR [10.244.3.0/24] 
	I0829 19:56:45.699963       1 main.go:295] Handling node with IPs: map[192.168.39.131:{}]
	I0829 19:56:45.700100       1 main.go:322] Node multinode-197790-m03 has CIDR [10.244.3.0/24] 
	I0829 19:56:45.700285       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:56:45.700322       1 main.go:299] handling current node
	I0829 19:56:45.700337       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:56:45.700345       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:56:55.692033       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:56:55.692121       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:56:55.692234       1 main.go:295] Handling node with IPs: map[192.168.39.131:{}]
	I0829 19:56:55.692261       1 main.go:322] Node multinode-197790-m03 has CIDR [10.244.2.0/24] 
	I0829 19:56:55.692349       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:56:55.692375       1 main.go:299] handling current node
	I0829 19:57:05.692085       1 main.go:295] Handling node with IPs: map[192.168.39.131:{}]
	I0829 19:57:05.692159       1 main.go:322] Node multinode-197790-m03 has CIDR [10.244.2.0/24] 
	I0829 19:57:05.692337       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:57:05.692365       1 main.go:299] handling current node
	I0829 19:57:05.692385       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:57:05.692390       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [8cce534dca4077aaa0eba62fee7ef0b41f7528d06ebd1c677a6741e47b378c6d] <==
	I0829 19:48:55.671430       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0829 19:48:56.260885       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0829 19:48:56.303320       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0829 19:48:56.384905       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0829 19:48:56.391683       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.245]
	I0829 19:48:56.392769       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 19:48:56.398213       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0829 19:48:56.743308       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0829 19:48:57.384866       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 19:48:57.398378       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0829 19:48:57.408935       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 19:49:02.094992       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0829 19:49:02.444128       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0829 19:50:13.045360       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33116: use of closed network connection
	E0829 19:50:13.221965       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33128: use of closed network connection
	E0829 19:50:13.420051       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33144: use of closed network connection
	E0829 19:50:13.588377       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33176: use of closed network connection
	E0829 19:50:13.750652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33194: use of closed network connection
	E0829 19:50:13.912308       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33208: use of closed network connection
	E0829 19:50:14.182567       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33232: use of closed network connection
	E0829 19:50:14.350360       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33248: use of closed network connection
	E0829 19:50:14.510925       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33258: use of closed network connection
	E0829 19:50:14.674523       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33274: use of closed network connection
	I0829 19:53:54.448122       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0829 19:53:54.471287       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e9550962f5646b8b1fbccdaafc3c325a4cea964f293e546e4400ed68b2d2c043] <==
	I0829 19:55:33.710219       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0829 19:55:33.710265       1 policy_source.go:224] refreshing policies
	I0829 19:55:33.734977       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0829 19:55:33.785052       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0829 19:55:33.785331       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0829 19:55:33.786513       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0829 19:55:33.786526       1 shared_informer.go:320] Caches are synced for configmaps
	I0829 19:55:33.786622       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0829 19:55:33.788752       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0829 19:55:33.788783       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0829 19:55:33.792185       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0829 19:55:33.792297       1 aggregator.go:171] initial CRD sync complete...
	I0829 19:55:33.792331       1 autoregister_controller.go:144] Starting autoregister controller
	I0829 19:55:33.792336       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0829 19:55:33.792341       1 cache.go:39] Caches are synced for autoregister controller
	I0829 19:55:33.792927       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0829 19:55:33.801915       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0829 19:55:34.603039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0829 19:55:35.614289       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 19:55:35.767110       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0829 19:55:35.779264       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 19:55:35.858322       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0829 19:55:35.864567       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0829 19:55:37.183092       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 19:55:37.230462       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [286aa4b9e2fe4ed29522efd5138da93f0d64acbe34bd5ee5d2b1cc688451cca2] <==
	I0829 19:51:29.188899       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-197790-m02"
	I0829 19:51:29.189014       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:30.452055       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-197790-m02"
	I0829 19:51:30.454009       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-197790-m03\" does not exist"
	I0829 19:51:30.460975       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-197790-m03" podCIDRs=["10.244.3.0/24"]
	I0829 19:51:30.461301       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:30.461422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:30.472349       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:30.736352       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:31.087950       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:31.599767       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:40.549662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:49.081079       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-197790-m02"
	I0829 19:51:49.081382       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:49.093108       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:51.563700       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:52:31.582479       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-197790-m02"
	I0829 19:52:31.585976       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:52:31.590914       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m02"
	I0829 19:52:31.620323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:52:31.620837       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m02"
	I0829 19:52:31.627202       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.300972ms"
	I0829 19:52:31.629848       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.66µs"
	I0829 19:52:36.716408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m02"
	I0829 19:52:46.799761       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	
	
	==> kube-controller-manager [d1ca0678b248f6b562721904843ac20264ab8cd7b69b73c49d244b14fc3cdd2a] <==
	I0829 19:56:34.439191       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-197790-m02"
	I0829 19:56:34.452136       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m02"
	I0829 19:56:34.458955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.49µs"
	I0829 19:56:34.477391       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.559µs"
	I0829 19:56:35.941221       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.245677ms"
	I0829 19:56:35.941301       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.582µs"
	I0829 19:56:37.032458       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m02"
	I0829 19:56:46.226043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m02"
	I0829 19:56:52.089127       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:56:52.105104       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:56:52.326814       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-197790-m02"
	I0829 19:56:52.326971       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:56:53.533007       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-197790-m03\" does not exist"
	I0829 19:56:53.536359       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-197790-m02"
	I0829 19:56:53.541814       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-197790-m03" podCIDRs=["10.244.2.0/24"]
	I0829 19:56:53.542787       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:56:53.542933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:56:53.561576       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:56:53.864992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:56:54.189160       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:56:57.062203       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:57:03.933107       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:57:12.136595       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:57:12.136789       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-197790-m02"
	I0829 19:57:12.148500       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	
	
	==> kube-proxy [5f9d9f10866771e532eeb4564f1df6410e9ed5256f0527579efd3c90184a0824] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:55:34.940363       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:55:34.953215       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.245"]
	E0829 19:55:34.953293       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:55:35.017047       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:55:35.017097       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:55:35.017127       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:55:35.022454       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:55:35.024127       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:55:35.024558       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:55:35.033016       1 config.go:197] "Starting service config controller"
	I0829 19:55:35.033122       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:55:35.033199       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:55:35.033223       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:55:35.033828       1 config.go:326] "Starting node config controller"
	I0829 19:55:35.033900       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:55:35.134223       1 shared_informer.go:320] Caches are synced for node config
	I0829 19:55:35.134252       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 19:55:35.134257       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [e676040af61f22a0c5fce59e3da10bb089a18f9ac5e399ff933823362e94206c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:49:03.588420       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:49:03.630697       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.245"]
	E0829 19:49:03.631120       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:49:03.689263       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:49:03.689298       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:49:03.689323       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:49:03.691918       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:49:03.692156       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:49:03.692172       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:49:03.693981       1 config.go:197] "Starting service config controller"
	I0829 19:49:03.694074       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:49:03.694159       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:49:03.694172       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:49:03.695691       1 config.go:326] "Starting node config controller"
	I0829 19:49:03.696530       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:49:03.794643       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 19:49:03.794837       1 shared_informer.go:320] Caches are synced for service config
	I0829 19:49:03.798399       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4e91e07c89ec5477b7fdae0dc2885abb50515ad9a90d475f0a4a97b15ec8d337] <==
	E0829 19:48:54.799502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:54.799539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0829 19:48:54.799584       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 19:48:54.799614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0829 19:48:54.799587       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:54.799754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 19:48:54.799801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:54.799876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 19:48:54.799932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:54.800133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0829 19:48:54.800163       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 19:48:54.800195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0829 19:48:54.800168       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:54.800224       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 19:48:54.800278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:54.800386       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 19:48:54.800480       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:55.991569       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 19:48:55.992028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:56.000458       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0829 19:48:56.000675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:56.011635       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 19:48:56.011682       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0829 19:48:56.392880       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0829 19:53:54.455556       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b2e97593be7b5873f114cb6dd2ad4caf2b3c2260b0b81177da3b4994bb0e3bd0] <==
	I0829 19:55:31.502180       1 serving.go:386] Generated self-signed cert in-memory
	W0829 19:55:33.682510       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0829 19:55:33.682555       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0829 19:55:33.682568       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0829 19:55:33.682576       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0829 19:55:33.721185       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0829 19:55:33.721228       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:55:33.740312       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0829 19:55:33.740479       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0829 19:55:33.740529       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 19:55:33.740563       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0829 19:55:33.841481       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 19:55:40 multinode-197790 kubelet[3044]: E0829 19:55:40.085617    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961340085372830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:40 multinode-197790 kubelet[3044]: E0829 19:55:40.085662    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961340085372830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:50 multinode-197790 kubelet[3044]: E0829 19:55:50.088845    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961350087131116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:50 multinode-197790 kubelet[3044]: E0829 19:55:50.088987    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961350087131116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:00 multinode-197790 kubelet[3044]: E0829 19:56:00.090201    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961360089942043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:00 multinode-197790 kubelet[3044]: E0829 19:56:00.090237    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961360089942043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:10 multinode-197790 kubelet[3044]: E0829 19:56:10.093272    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961370092396253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:10 multinode-197790 kubelet[3044]: E0829 19:56:10.093639    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961370092396253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:20 multinode-197790 kubelet[3044]: E0829 19:56:20.096293    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961380095835735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:20 multinode-197790 kubelet[3044]: E0829 19:56:20.096549    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961380095835735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:30 multinode-197790 kubelet[3044]: E0829 19:56:30.094163    3044 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:56:30 multinode-197790 kubelet[3044]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:56:30 multinode-197790 kubelet[3044]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:56:30 multinode-197790 kubelet[3044]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:56:30 multinode-197790 kubelet[3044]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:56:30 multinode-197790 kubelet[3044]: E0829 19:56:30.099493    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961390098979204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:30 multinode-197790 kubelet[3044]: E0829 19:56:30.099516    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961390098979204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:40 multinode-197790 kubelet[3044]: E0829 19:56:40.101907    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961400101321739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:40 multinode-197790 kubelet[3044]: E0829 19:56:40.101946    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961400101321739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:50 multinode-197790 kubelet[3044]: E0829 19:56:50.104393    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961410103958333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:50 multinode-197790 kubelet[3044]: E0829 19:56:50.104435    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961410103958333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:57:00 multinode-197790 kubelet[3044]: E0829 19:57:00.106783    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961420106393125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:57:00 multinode-197790 kubelet[3044]: E0829 19:57:00.106808    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961420106393125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:57:10 multinode-197790 kubelet[3044]: E0829 19:57:10.109441    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961430108953813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:57:10 multinode-197790 kubelet[3044]: E0829 19:57:10.109787    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961430108953813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0829 19:57:14.593334   49863 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19530-11185/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
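[Editor's note] The "bufio.Scanner: token too long" failure above is Go's bufio.ErrTooLong: a single line in lastStart.txt exceeded the scanner's default 64 KiB token limit (bufio.MaxScanTokenSize), so the helper could not read the file. A minimal, hypothetical Go sketch (not minikube's actual code) of reading such a file with an enlarged scanner buffer:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Path is illustrative; the report's real path is elided here.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default max token size is 64 KiB (bufio.MaxScanTokenSize);
	// allow lines up to 10 MiB so long log lines don't trip ErrTooLong.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// Without the Buffer call this reports "bufio.Scanner: token too long".
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}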
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-197790 -n multinode-197790
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-197790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (324.72s)

TestMultiNode/serial/StopMultiNode (141.34s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 stop
E0829 19:58:45.975253   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-197790 stop: exit status 82 (2m0.455534918s)

-- stdout --
	* Stopping node "multinode-197790-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-197790 stop": exit status 82
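[Editor's note] Exit status 82 here corresponds to the GUEST_STOP_TIMEOUT error shown in the stderr box: the VM was still "Running" when the stop deadline expired. A hypothetical Go sketch of how a harness like multinode_test.go can run the binary and surface that exit code via os/exec (illustrative only, not the test's actual helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runWithExitCode runs a command and returns its combined output and exit code.
func runWithExitCode(name string, args ...string) (string, int, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err == nil {
		return string(out), 0, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// The process ran but exited non-zero, e.g. 82 for GUEST_STOP_TIMEOUT.
		return string(out), ee.ExitCode(), nil
	}
	return string(out), -1, err // e.g. binary not found or not executable
}

func main() {
	out, code, err := runWithExitCode("out/minikube-linux-amd64", "-p", "multinode-197790", "stop")
	if err != nil {
		fmt.Println("run error:", err)
		return
	}
	fmt.Printf("exit status %d\n%s", code, out)
}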
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-197790 status: exit status 3 (18.772236773s)

-- stdout --
	multinode-197790
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-197790-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	E0829 19:59:37.826877   50549 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.247:22: connect: no route to host
	E0829 19:59:37.826923   50549 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.247:22: connect: no route to host

** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-197790 status" : exit status 3
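[Editor's note] The "no route to host" errors above come from minikube status probing the half-stopped node over SSH (tcp 192.168.39.247:22), which is why m02 reports host: Error / kubelet: Nonexistent. A hypothetical Go sketch of such a TCP reachability probe (illustrative; minikube's real check goes through its SSH session machinery, and the timeout value is an assumption):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the report above; 5s timeout is an assumed value.
	conn, err := net.DialTimeout("tcp", "192.168.39.247:22", 5*time.Second)
	if err != nil {
		// e.g. "dial tcp 192.168.39.247:22: connect: no route to host"
		fmt.Println("host: Error -", err)
		return
	}
	defer conn.Close()
	fmt.Println("host: Running")
}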
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-197790 -n multinode-197790
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-197790 logs -n 25: (1.444763821s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-197790 ssh -n                                                                 | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-197790 cp multinode-197790-m02:/home/docker/cp-test.txt                       | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790:/home/docker/cp-test_multinode-197790-m02_multinode-197790.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n                                                                 | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n multinode-197790 sudo cat                                       | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | /home/docker/cp-test_multinode-197790-m02_multinode-197790.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-197790 cp multinode-197790-m02:/home/docker/cp-test.txt                       | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m03:/home/docker/cp-test_multinode-197790-m02_multinode-197790-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n                                                                 | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n multinode-197790-m03 sudo cat                                   | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | /home/docker/cp-test_multinode-197790-m02_multinode-197790-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-197790 cp testdata/cp-test.txt                                                | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n                                                                 | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-197790 cp multinode-197790-m03:/home/docker/cp-test.txt                       | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1846508817/001/cp-test_multinode-197790-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n                                                                 | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-197790 cp multinode-197790-m03:/home/docker/cp-test.txt                       | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790:/home/docker/cp-test_multinode-197790-m03_multinode-197790.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n                                                                 | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n multinode-197790 sudo cat                                       | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | /home/docker/cp-test_multinode-197790-m03_multinode-197790.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-197790 cp multinode-197790-m03:/home/docker/cp-test.txt                       | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m02:/home/docker/cp-test_multinode-197790-m03_multinode-197790-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n                                                                 | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | multinode-197790-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-197790 ssh -n multinode-197790-m02 sudo cat                                   | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | /home/docker/cp-test_multinode-197790-m03_multinode-197790-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-197790 node stop m03                                                          | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	| node    | multinode-197790 node start                                                             | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC | 29 Aug 24 19:51 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-197790                                                                | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC |                     |
	| stop    | -p multinode-197790                                                                     | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:51 UTC |                     |
	| start   | -p multinode-197790                                                                     | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:53 UTC | 29 Aug 24 19:57 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-197790                                                                | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:57 UTC |                     |
	| node    | multinode-197790 node delete                                                            | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:57 UTC | 29 Aug 24 19:57 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-197790 stop                                                                   | multinode-197790 | jenkins | v1.33.1 | 29 Aug 24 19:57 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:53:53
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 19:53:53.575195   48766 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:53:53.575408   48766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:53:53.575419   48766 out.go:358] Setting ErrFile to fd 2...
	I0829 19:53:53.575430   48766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:53:53.575651   48766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:53:53.576174   48766 out.go:352] Setting JSON to false
	I0829 19:53:53.577072   48766 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5781,"bootTime":1724955453,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:53:53.577126   48766 start.go:139] virtualization: kvm guest
	I0829 19:53:53.579146   48766 out.go:177] * [multinode-197790] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:53:53.580593   48766 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 19:53:53.580599   48766 notify.go:220] Checking for updates...
	I0829 19:53:53.583128   48766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:53:53.584476   48766 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:53:53.585844   48766 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:53:53.587140   48766 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:53:53.588374   48766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:53:53.589958   48766 config.go:182] Loaded profile config "multinode-197790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:53:53.590040   48766 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:53:53.590424   48766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:53:53.590460   48766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:53:53.605709   48766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44263
	I0829 19:53:53.606132   48766 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:53:53.606705   48766 main.go:141] libmachine: Using API Version  1
	I0829 19:53:53.606738   48766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:53:53.607141   48766 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:53:53.607321   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:53:53.641844   48766 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:53:53.643231   48766 start.go:297] selected driver: kvm2
	I0829 19:53:53.643244   48766 start.go:901] validating driver "kvm2" against &{Name:multinode-197790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-197790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.247 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.131 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:53:53.643401   48766 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:53:53.643702   48766 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:53:53.643782   48766 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:53:53.658191   48766 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:53:53.658843   48766 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:53:53.658921   48766 cni.go:84] Creating CNI manager for ""
	I0829 19:53:53.658935   48766 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0829 19:53:53.659007   48766 start.go:340] cluster config:
	{Name:multinode-197790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-197790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.247 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.131 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
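
The config dumps above are minikube's cluster-config struct rendered with Go's %+v verb, which is why nested structs print as {Field:value} and maps as map[key:value]. A minimal, hypothetical sketch of how such a dump is produced (the type is trimmed to the three Nodes entries visible in the log and is not minikube's actual definition):

    package main

    import "fmt"

    // Node loosely mirrors one entry of the Nodes slice in the dump:
    // the unnamed control plane at 192.168.39.245 plus workers m02 and m03.
    type Node struct {
        Name         string
        IP           string
        Port         int
        ControlPlane bool
        Worker       bool
    }

    type ClusterConfig struct {
        Name  string
        Nodes []Node
    }

    func main() {
        cfg := ClusterConfig{
            Name: "multinode-197790",
            Nodes: []Node{
                {Name: "", IP: "192.168.39.245", Port: 8443, ControlPlane: true, Worker: true},
                {Name: "m02", IP: "192.168.39.247", Port: 8443, Worker: true},
                {Name: "m03", IP: "192.168.39.131", Port: 0, Worker: true},
            },
        }
        // %+v prints field names, producing the {Name:... IP:...} shape seen in the log.
        fmt.Printf("cluster config:\n%+v\n", cfg)
    }
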
	I0829 19:53:53.659139   48766 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:53:53.660810   48766 out.go:177] * Starting "multinode-197790" primary control-plane node in "multinode-197790" cluster
	I0829 19:53:53.661937   48766 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:53:53.661969   48766 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:53:53.661978   48766 cache.go:56] Caching tarball of preloaded images
	I0829 19:53:53.662054   48766 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:53:53.662067   48766 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:53:53.662189   48766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/config.json ...
	I0829 19:53:53.662386   48766 start.go:360] acquireMachinesLock for multinode-197790: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:53:53.662437   48766 start.go:364] duration metric: took 33.393µs to acquireMachinesLock for "multinode-197790"
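
The lock spec logged above ({... Delay:500ms Timeout:13m0s ...}) implies a poll-until-deadline acquisition, and the 33.393µs duration shows the uncontended fast path. A minimal sketch of that pattern, assuming a hypothetical tryAcquire callback; minikube delegates to a mutex library, so this is illustrative only:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // acquireWithTimeout polls tryAcquire every delay until timeout elapses,
    // mirroring the Delay:500ms / Timeout:13m0s spec in the log line above.
    func acquireWithTimeout(tryAcquire func() bool, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if tryAcquire() {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for machines lock")
            }
            time.Sleep(delay)
        }
    }

    func main() {
        start := time.Now()
        err := acquireWithTimeout(func() bool { return true }, 500*time.Millisecond, 13*time.Minute)
        // Uncontended, so this returns in microseconds, like the 33.393µs in the log.
        fmt.Println(err, "took", time.Since(start))
    }
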
	I0829 19:53:53.662457   48766 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:53:53.662466   48766 fix.go:54] fixHost starting: 
	I0829 19:53:53.662790   48766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:53:53.662835   48766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:53:53.676636   48766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0829 19:53:53.677066   48766 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:53:53.677543   48766 main.go:141] libmachine: Using API Version  1
	I0829 19:53:53.677579   48766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:53:53.677890   48766 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:53:53.678057   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:53:53.678235   48766 main.go:141] libmachine: (multinode-197790) Calling .GetState
	I0829 19:53:53.679617   48766 fix.go:112] recreateIfNeeded on multinode-197790: state=Running err=<nil>
	W0829 19:53:53.679637   48766 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:53:53.681385   48766 out.go:177] * Updating the running kvm2 "multinode-197790" VM ...
	I0829 19:53:53.682670   48766 machine.go:93] provisionDockerMachine start ...
	I0829 19:53:53.682687   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:53:53.682873   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:53:53.684862   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:53.685300   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:53:53.685331   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:53.685440   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:53:53.685582   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:53.685724   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:53.685818   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:53:53.685986   48766 main.go:141] libmachine: Using SSH client type: native
	I0829 19:53:53.686157   48766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0829 19:53:53.686167   48766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:53:53.799867   48766 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-197790
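
Each "About to run SSH command" / "SSH cmd err, output" pair above is one remote command executed over the session assembled from the GetSSHHostname/GetSSHPort/GetSSHKeyPath/GetSSHUsername calls. A minimal sketch of the same round trip using golang.org/x/crypto/ssh, with the address, user, and key path taken from this log (error handling shortened; this is not minikube's actual runner):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19530-11185/.minikube/machines/multinode-197790/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
        }
        client, err := ssh.Dial("tcp", "192.168.39.245:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        fmt.Printf("SSH cmd err, output: %v: %s", err, out) // matches the log line shape
    }
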
	
	I0829 19:53:53.799895   48766 main.go:141] libmachine: (multinode-197790) Calling .GetMachineName
	I0829 19:53:53.800132   48766 buildroot.go:166] provisioning hostname "multinode-197790"
	I0829 19:53:53.800161   48766 main.go:141] libmachine: (multinode-197790) Calling .GetMachineName
	I0829 19:53:53.800344   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:53:53.803053   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:53.803426   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:53:53.803452   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:53.803619   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:53:53.803802   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:53.803995   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:53.804122   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:53:53.804272   48766 main.go:141] libmachine: Using SSH client type: native
	I0829 19:53:53.804480   48766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0829 19:53:53.804498   48766 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-197790 && echo "multinode-197790" | sudo tee /etc/hostname
	I0829 19:53:53.929467   48766 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-197790
	
	I0829 19:53:53.929501   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:53:53.932159   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:53.932491   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:53:53.932530   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:53.932671   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:53:53.932851   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:53.933017   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:53.933148   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:53:53.933298   48766 main.go:141] libmachine: Using SSH client type: native
	I0829 19:53:53.933460   48766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0829 19:53:53.933476   48766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-197790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-197790/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-197790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:53:54.052234   48766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
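
The shell fragment above is an idempotent /etc/hosts update: it touches the file only when no line already ends with the hostname, rewriting an existing 127.0.1.1 entry if there is one and appending otherwise, so the empty output here means the entry was already in place. The same check-then-append logic sketched in Go (simplified: it appends rather than editing an existing 127.0.1.1 line):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(strings.TrimSpace(line), hostname) {
                return nil // already present, nothing to do
            }
        }
        f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0o644)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = fmt.Fprintf(f, "127.0.1.1 %s\n", hostname)
        return err
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "multinode-197790"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
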
	I0829 19:53:54.052257   48766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 19:53:54.052285   48766 buildroot.go:174] setting up certificates
	I0829 19:53:54.052294   48766 provision.go:84] configureAuth start
	I0829 19:53:54.052302   48766 main.go:141] libmachine: (multinode-197790) Calling .GetMachineName
	I0829 19:53:54.052534   48766 main.go:141] libmachine: (multinode-197790) Calling .GetIP
	I0829 19:53:54.055298   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.055670   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:53:54.055694   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.055830   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:53:54.057733   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.058043   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:53:54.058069   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.058217   48766 provision.go:143] copyHostCerts
	I0829 19:53:54.058260   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:53:54.058298   48766 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 19:53:54.058315   48766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 19:53:54.058396   48766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 19:53:54.058483   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:53:54.058508   48766 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 19:53:54.058515   48766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 19:53:54.058564   48766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 19:53:54.058637   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:53:54.058668   48766 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 19:53:54.058677   48766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 19:53:54.058713   48766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 19:53:54.058785   48766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.multinode-197790 san=[127.0.0.1 192.168.39.245 localhost minikube multinode-197790]
	I0829 19:53:54.148397   48766 provision.go:177] copyRemoteCerts
	I0829 19:53:54.148471   48766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:53:54.148499   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:53:54.151138   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.151520   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:53:54.151565   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.151743   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:53:54.151930   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:54.152098   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:53:54.152212   48766 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/multinode-197790/id_rsa Username:docker}
	I0829 19:53:54.240090   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 19:53:54.240157   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 19:53:54.266933   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 19:53:54.267007   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0829 19:53:54.292415   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 19:53:54.292490   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:53:54.317048   48766 provision.go:87] duration metric: took 264.744034ms to configureAuth
	I0829 19:53:54.317072   48766 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:53:54.317297   48766 config.go:182] Loaded profile config "multinode-197790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:53:54.317373   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:53:54.319882   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.320196   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:53:54.320222   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:53:54.320400   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:53:54.320583   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:54.320738   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:53:54.320920   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:53:54.321103   48766 main.go:141] libmachine: Using SSH client type: native
	I0829 19:53:54.321261   48766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0829 19:53:54.321274   48766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:55:25.031749   48766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:55:25.031776   48766 machine.go:96] duration metric: took 1m31.349092982s to provisionDockerMachine
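
Worth noting when reading this failure: the `sudo systemctl restart crio` above was issued at 19:53:54.321 but its SSH session only returned at 19:55:25.031, so that single restart accounts for roughly 91s of the 1m31.3s provisionDockerMachine duration reported here.
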
	I0829 19:55:25.031792   48766 start.go:293] postStartSetup for "multinode-197790" (driver="kvm2")
	I0829 19:55:25.031805   48766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:55:25.031819   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:55:25.032216   48766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:55:25.032246   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:55:25.035315   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.035817   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:55:25.035846   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.035964   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:55:25.036193   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:55:25.036351   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:55:25.036526   48766 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/multinode-197790/id_rsa Username:docker}
	I0829 19:55:25.122645   48766 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:55:25.126815   48766 command_runner.go:130] > NAME=Buildroot
	I0829 19:55:25.126833   48766 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0829 19:55:25.126838   48766 command_runner.go:130] > ID=buildroot
	I0829 19:55:25.126842   48766 command_runner.go:130] > VERSION_ID=2023.02.9
	I0829 19:55:25.126846   48766 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0829 19:55:25.126934   48766 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:55:25.126953   48766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 19:55:25.127028   48766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 19:55:25.127130   48766 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 19:55:25.127143   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /etc/ssl/certs/183612.pem
	I0829 19:55:25.127235   48766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:55:25.136368   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:55:25.160988   48766 start.go:296] duration metric: took 129.181338ms for postStartSetup
	I0829 19:55:25.161024   48766 fix.go:56] duration metric: took 1m31.498559008s for fixHost
	I0829 19:55:25.161043   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:55:25.163695   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.164088   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:55:25.164122   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.164262   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:55:25.164468   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:55:25.164610   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:55:25.164759   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:55:25.164910   48766 main.go:141] libmachine: Using SSH client type: native
	I0829 19:55:25.165099   48766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0829 19:55:25.165111   48766 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:55:25.275508   48766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724961325.255475484
	
	I0829 19:55:25.275526   48766 fix.go:216] guest clock: 1724961325.255475484
	I0829 19:55:25.275532   48766 fix.go:229] Guest: 2024-08-29 19:55:25.255475484 +0000 UTC Remote: 2024-08-29 19:55:25.161028417 +0000 UTC m=+91.620924107 (delta=94.447067ms)
	I0829 19:55:25.275570   48766 fix.go:200] guest clock delta is within tolerance: 94.447067ms
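
fix.go parses the guest's `date +%s.%N` output and compares it against the host-side timestamp taken around the SSH call; the run proceeds because the 94.447067ms delta is within tolerance. A small sketch of that comparison using the exact values from this log (the one-second tolerance constant is a hypothetical stand-in; the log only shows the check passing):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    const tolerance = time.Second // hypothetical threshold; the log only shows this check passing

    func main() {
        // Guest clock, as returned by `date +%s.%N` over SSH (see the log above).
        parts := strings.SplitN("1724961325.255475484", ".", 2)
        secs, _ := strconv.ParseInt(parts[0], 10, 64)
        nanos, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(secs, nanos)

        // Host-side timestamp taken around the SSH call ("Remote" in the log).
        remote := time.Date(2024, 8, 29, 19, 55, 25, 161028417, time.UTC)

        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        if delta <= tolerance {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // ~94.447067ms
        }
    }
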
	I0829 19:55:25.275581   48766 start.go:83] releasing machines lock for "multinode-197790", held for 1m31.613132151s
	I0829 19:55:25.275611   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:55:25.275874   48766 main.go:141] libmachine: (multinode-197790) Calling .GetIP
	I0829 19:55:25.278423   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.278748   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:55:25.278774   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.278870   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:55:25.279359   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:55:25.279604   48766 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:55:25.279697   48766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:55:25.279725   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:55:25.279807   48766 ssh_runner.go:195] Run: cat /version.json
	I0829 19:55:25.279829   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:55:25.282233   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.282351   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.282677   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:55:25.282720   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.282751   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:55:25.282779   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:25.282947   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:55:25.282964   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:55:25.283126   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:55:25.283132   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:55:25.283287   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:55:25.283348   48766 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:55:25.283454   48766 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/multinode-197790/id_rsa Username:docker}
	I0829 19:55:25.283541   48766 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/multinode-197790/id_rsa Username:docker}
	I0829 19:55:25.385741   48766 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0829 19:55:25.386447   48766 command_runner.go:130] > {"iso_version": "v1.33.1-1724862017-19530", "kicbase_version": "v0.0.44-1724775115-19521", "minikube_version": "v1.33.1", "commit": "0ce952d110f81b7b94ba20c385955675855b59fb"}
	I0829 19:55:25.386600   48766 ssh_runner.go:195] Run: systemctl --version
	I0829 19:55:25.392176   48766 command_runner.go:130] > systemd 252 (252)
	I0829 19:55:25.392216   48766 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0829 19:55:25.392495   48766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:55:25.560917   48766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0829 19:55:25.566779   48766 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0829 19:55:25.566954   48766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:55:25.567006   48766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:55:25.575941   48766 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
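
Before reconfiguring CRI-O, minikube renames any bridge or podman CNI configs under /etc/cni/net.d to *.mk_disabled so they cannot conflict with the kindnet config it manages; here the find/-exec mv pipeline matched nothing. The same sweep sketched in Go:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        entries, err := os.ReadDir("/etc/cni/net.d")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for _, e := range entries {
            name := e.Name()
            // Skip directories and configs that were already disabled.
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join("/etc/cni/net.d", name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    fmt.Fprintln(os.Stderr, err)
                }
            }
        }
    }
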
	I0829 19:55:25.575958   48766 start.go:495] detecting cgroup driver to use...
	I0829 19:55:25.576027   48766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:55:25.591692   48766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:55:25.604945   48766 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:55:25.604982   48766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:55:25.617955   48766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:55:25.631228   48766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:55:25.770427   48766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:55:25.909041   48766 docker.go:233] disabling docker service ...
	I0829 19:55:25.909112   48766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:55:25.925274   48766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:55:25.938514   48766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:55:26.076421   48766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:55:26.215564   48766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:55:26.229329   48766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:55:26.249041   48766 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0829 19:55:26.249090   48766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:55:26.249145   48766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:55:26.259327   48766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:55:26.259382   48766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:55:26.269518   48766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:55:26.279712   48766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:55:26.289841   48766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:55:26.300229   48766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:55:26.310109   48766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:55:26.321138   48766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
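
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in (reconstructed from the commands, not captured from the VM; the TOML table placement is an assumption based on CRI-O's stock config layout):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
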
	I0829 19:55:26.331252   48766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:55:26.340209   48766 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0829 19:55:26.340402   48766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:55:26.349423   48766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:55:26.480614   48766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:55:26.677782   48766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:55:26.677847   48766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:55:26.682832   48766 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0829 19:55:26.682849   48766 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0829 19:55:26.682861   48766 command_runner.go:130] > Device: 0,22	Inode: 1322        Links: 1
	I0829 19:55:26.682870   48766 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0829 19:55:26.682877   48766 command_runner.go:130] > Access: 2024-08-29 19:55:26.553972784 +0000
	I0829 19:55:26.682888   48766 command_runner.go:130] > Modify: 2024-08-29 19:55:26.553972784 +0000
	I0829 19:55:26.682895   48766 command_runner.go:130] > Change: 2024-08-29 19:55:26.553972784 +0000
	I0829 19:55:26.682901   48766 command_runner.go:130] >  Birth: -
	I0829 19:55:26.683005   48766 start.go:563] Will wait 60s for crictl version
	I0829 19:55:26.683059   48766 ssh_runner.go:195] Run: which crictl
	I0829 19:55:26.687196   48766 command_runner.go:130] > /usr/bin/crictl
	I0829 19:55:26.687248   48766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:55:26.732137   48766 command_runner.go:130] > Version:  0.1.0
	I0829 19:55:26.732159   48766 command_runner.go:130] > RuntimeName:  cri-o
	I0829 19:55:26.732166   48766 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0829 19:55:26.732173   48766 command_runner.go:130] > RuntimeApiVersion:  v1
	I0829 19:55:26.732244   48766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:55:26.732345   48766 ssh_runner.go:195] Run: crio --version
	I0829 19:55:26.766891   48766 command_runner.go:130] > crio version 1.29.1
	I0829 19:55:26.766913   48766 command_runner.go:130] > Version:        1.29.1
	I0829 19:55:26.766921   48766 command_runner.go:130] > GitCommit:      unknown
	I0829 19:55:26.766927   48766 command_runner.go:130] > GitCommitDate:  unknown
	I0829 19:55:26.766932   48766 command_runner.go:130] > GitTreeState:   clean
	I0829 19:55:26.766940   48766 command_runner.go:130] > BuildDate:      2024-08-28T21:33:51Z
	I0829 19:55:26.766946   48766 command_runner.go:130] > GoVersion:      go1.21.6
	I0829 19:55:26.766953   48766 command_runner.go:130] > Compiler:       gc
	I0829 19:55:26.766960   48766 command_runner.go:130] > Platform:       linux/amd64
	I0829 19:55:26.766969   48766 command_runner.go:130] > Linkmode:       dynamic
	I0829 19:55:26.766978   48766 command_runner.go:130] > BuildTags:      
	I0829 19:55:26.766986   48766 command_runner.go:130] >   containers_image_ostree_stub
	I0829 19:55:26.766996   48766 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0829 19:55:26.767003   48766 command_runner.go:130] >   btrfs_noversion
	I0829 19:55:26.767011   48766 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0829 19:55:26.767019   48766 command_runner.go:130] >   libdm_no_deferred_remove
	I0829 19:55:26.767024   48766 command_runner.go:130] >   seccomp
	I0829 19:55:26.767032   48766 command_runner.go:130] > LDFlags:          unknown
	I0829 19:55:26.767041   48766 command_runner.go:130] > SeccompEnabled:   true
	I0829 19:55:26.767048   48766 command_runner.go:130] > AppArmorEnabled:  false
	I0829 19:55:26.767116   48766 ssh_runner.go:195] Run: crio --version
	I0829 19:55:26.794412   48766 command_runner.go:130] > crio version 1.29.1
	I0829 19:55:26.794431   48766 command_runner.go:130] > Version:        1.29.1
	I0829 19:55:26.794439   48766 command_runner.go:130] > GitCommit:      unknown
	I0829 19:55:26.794445   48766 command_runner.go:130] > GitCommitDate:  unknown
	I0829 19:55:26.794450   48766 command_runner.go:130] > GitTreeState:   clean
	I0829 19:55:26.794458   48766 command_runner.go:130] > BuildDate:      2024-08-28T21:33:51Z
	I0829 19:55:26.794464   48766 command_runner.go:130] > GoVersion:      go1.21.6
	I0829 19:55:26.794469   48766 command_runner.go:130] > Compiler:       gc
	I0829 19:55:26.794475   48766 command_runner.go:130] > Platform:       linux/amd64
	I0829 19:55:26.794481   48766 command_runner.go:130] > Linkmode:       dynamic
	I0829 19:55:26.794489   48766 command_runner.go:130] > BuildTags:      
	I0829 19:55:26.794495   48766 command_runner.go:130] >   containers_image_ostree_stub
	I0829 19:55:26.794502   48766 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0829 19:55:26.794512   48766 command_runner.go:130] >   btrfs_noversion
	I0829 19:55:26.794520   48766 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0829 19:55:26.794529   48766 command_runner.go:130] >   libdm_no_deferred_remove
	I0829 19:55:26.794552   48766 command_runner.go:130] >   seccomp
	I0829 19:55:26.794560   48766 command_runner.go:130] > LDFlags:          unknown
	I0829 19:55:26.794567   48766 command_runner.go:130] > SeccompEnabled:   true
	I0829 19:55:26.794574   48766 command_runner.go:130] > AppArmorEnabled:  false
	I0829 19:55:26.797913   48766 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:55:26.799368   48766 main.go:141] libmachine: (multinode-197790) Calling .GetIP
	I0829 19:55:26.801730   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:26.802036   48766 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:55:26.802056   48766 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:55:26.802241   48766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:55:26.806604   48766 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0829 19:55:26.806822   48766 kubeadm.go:883] updating cluster {Name:multinode-197790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-197790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.247 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.131 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:55:26.806980   48766 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:55:26.807034   48766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:55:26.854814   48766 command_runner.go:130] > {
	I0829 19:55:26.854833   48766 command_runner.go:130] >   "images": [
	I0829 19:55:26.854838   48766 command_runner.go:130] >     {
	I0829 19:55:26.854846   48766 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0829 19:55:26.854851   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.854860   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0829 19:55:26.854866   48766 command_runner.go:130] >       ],
	I0829 19:55:26.854873   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.854885   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0829 19:55:26.854896   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0829 19:55:26.854900   48766 command_runner.go:130] >       ],
	I0829 19:55:26.854905   48766 command_runner.go:130] >       "size": "87165492",
	I0829 19:55:26.854910   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.854914   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.854920   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.854925   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.854928   48766 command_runner.go:130] >     },
	I0829 19:55:26.854932   48766 command_runner.go:130] >     {
	I0829 19:55:26.854938   48766 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0829 19:55:26.854947   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.854955   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0829 19:55:26.854964   48766 command_runner.go:130] >       ],
	I0829 19:55:26.854971   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.854986   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0829 19:55:26.854993   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0829 19:55:26.854998   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855002   48766 command_runner.go:130] >       "size": "87190579",
	I0829 19:55:26.855008   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.855020   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.855028   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.855036   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.855045   48766 command_runner.go:130] >     },
	I0829 19:55:26.855051   48766 command_runner.go:130] >     {
	I0829 19:55:26.855063   48766 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0829 19:55:26.855072   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.855081   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0829 19:55:26.855088   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855093   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.855100   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0829 19:55:26.855111   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0829 19:55:26.855117   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855124   48766 command_runner.go:130] >       "size": "1363676",
	I0829 19:55:26.855134   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.855144   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.855152   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.855161   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.855170   48766 command_runner.go:130] >     },
	I0829 19:55:26.855178   48766 command_runner.go:130] >     {
	I0829 19:55:26.855186   48766 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0829 19:55:26.855192   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.855201   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0829 19:55:26.855210   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855220   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.855235   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0829 19:55:26.855255   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0829 19:55:26.855264   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855269   48766 command_runner.go:130] >       "size": "31470524",
	I0829 19:55:26.855273   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.855277   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.855282   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.855291   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.855300   48766 command_runner.go:130] >     },
	I0829 19:55:26.855309   48766 command_runner.go:130] >     {
	I0829 19:55:26.855319   48766 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0829 19:55:26.855329   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.855356   48766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0829 19:55:26.855367   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855374   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.855398   48766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0829 19:55:26.855414   48766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0829 19:55:26.855423   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855433   48766 command_runner.go:130] >       "size": "61245718",
	I0829 19:55:26.855441   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.855448   48766 command_runner.go:130] >       "username": "nonroot",
	I0829 19:55:26.855453   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.855463   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.855472   48766 command_runner.go:130] >     },
	I0829 19:55:26.855481   48766 command_runner.go:130] >     {
	I0829 19:55:26.855493   48766 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0829 19:55:26.855502   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.855513   48766 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0829 19:55:26.855522   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855529   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.855537   48766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0829 19:55:26.855551   48766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0829 19:55:26.855560   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855570   48766 command_runner.go:130] >       "size": "149009664",
	I0829 19:55:26.855579   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.855587   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.855596   48766 command_runner.go:130] >       },
	I0829 19:55:26.855605   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.855613   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.855620   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.855624   48766 command_runner.go:130] >     },
	I0829 19:55:26.855635   48766 command_runner.go:130] >     {
	I0829 19:55:26.855649   48766 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0829 19:55:26.855659   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.855669   48766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0829 19:55:26.855677   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855684   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.855699   48766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0829 19:55:26.855709   48766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0829 19:55:26.855720   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855730   48766 command_runner.go:130] >       "size": "95233506",
	I0829 19:55:26.855739   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.855748   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.855754   48766 command_runner.go:130] >       },
	I0829 19:55:26.855763   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.855773   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.855782   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.855789   48766 command_runner.go:130] >     },
	I0829 19:55:26.855793   48766 command_runner.go:130] >     {
	I0829 19:55:26.855802   48766 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0829 19:55:26.855812   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.855824   48766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0829 19:55:26.855833   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855842   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.855864   48766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0829 19:55:26.855875   48766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0829 19:55:26.855883   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855894   48766 command_runner.go:130] >       "size": "89437512",
	I0829 19:55:26.855903   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.855913   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.855921   48766 command_runner.go:130] >       },
	I0829 19:55:26.855930   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.855936   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.855942   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.855947   48766 command_runner.go:130] >     },
	I0829 19:55:26.855952   48766 command_runner.go:130] >     {
	I0829 19:55:26.855959   48766 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0829 19:55:26.855963   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.855970   48766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0829 19:55:26.855975   48766 command_runner.go:130] >       ],
	I0829 19:55:26.855982   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.855993   48766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0829 19:55:26.856004   48766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0829 19:55:26.856018   48766 command_runner.go:130] >       ],
	I0829 19:55:26.856029   48766 command_runner.go:130] >       "size": "92728217",
	I0829 19:55:26.856037   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.856045   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.856049   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.856056   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.856064   48766 command_runner.go:130] >     },
	I0829 19:55:26.856072   48766 command_runner.go:130] >     {
	I0829 19:55:26.856082   48766 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0829 19:55:26.856092   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.856103   48766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0829 19:55:26.856111   48766 command_runner.go:130] >       ],
	I0829 19:55:26.856118   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.856130   48766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0829 19:55:26.856146   48766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0829 19:55:26.856155   48766 command_runner.go:130] >       ],
	I0829 19:55:26.856164   48766 command_runner.go:130] >       "size": "68420936",
	I0829 19:55:26.856173   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.856182   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.856190   48766 command_runner.go:130] >       },
	I0829 19:55:26.856196   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.856205   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.856212   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.856217   48766 command_runner.go:130] >     },
	I0829 19:55:26.856221   48766 command_runner.go:130] >     {
	I0829 19:55:26.856230   48766 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0829 19:55:26.856240   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.856250   48766 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0829 19:55:26.856258   48766 command_runner.go:130] >       ],
	I0829 19:55:26.856268   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.856282   48766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0829 19:55:26.856295   48766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0829 19:55:26.856301   48766 command_runner.go:130] >       ],
	I0829 19:55:26.856305   48766 command_runner.go:130] >       "size": "742080",
	I0829 19:55:26.856310   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.856320   48766 command_runner.go:130] >         "value": "65535"
	I0829 19:55:26.856329   48766 command_runner.go:130] >       },
	I0829 19:55:26.856339   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.856349   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.856359   48766 command_runner.go:130] >       "pinned": true
	I0829 19:55:26.856368   48766 command_runner.go:130] >     }
	I0829 19:55:26.856376   48766 command_runner.go:130] >   ]
	I0829 19:55:26.856383   48766 command_runner.go:130] > }
	I0829 19:55:26.856597   48766 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:55:26.856612   48766 crio.go:433] Images already preloaded, skipping extraction
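	The JSON above is the CRI image list that minikube inspects to decide whether the preloaded image tarball needs extracting. A minimal, illustrative Go sketch for decoding that schema (field names are taken from the dump; the exec call and struct are assumptions, not minikube's actual code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// imageList mirrors the shape of `crictl images --output json` as dumped above.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"` // bytes, serialized as a string in the dump
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			// Image IDs are 64 hex characters; print the short form plus tags.
			fmt.Printf("%s %v pinned=%v\n", img.ID[:12], img.RepoTags, img.Pinned)
		}
	}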
	I0829 19:55:26.856665   48766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:55:26.888044   48766 command_runner.go:130] > {
	I0829 19:55:26.888065   48766 command_runner.go:130] >   "images": [
	I0829 19:55:26.888071   48766 command_runner.go:130] >     {
	I0829 19:55:26.888084   48766 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0829 19:55:26.888092   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.888104   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0829 19:55:26.888113   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888119   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.888129   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0829 19:55:26.888138   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0829 19:55:26.888142   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888147   48766 command_runner.go:130] >       "size": "87165492",
	I0829 19:55:26.888151   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.888158   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.888168   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.888178   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.888188   48766 command_runner.go:130] >     },
	I0829 19:55:26.888193   48766 command_runner.go:130] >     {
	I0829 19:55:26.888202   48766 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0829 19:55:26.888206   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.888212   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0829 19:55:26.888216   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888220   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.888227   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0829 19:55:26.888237   48766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0829 19:55:26.888240   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888248   48766 command_runner.go:130] >       "size": "87190579",
	I0829 19:55:26.888258   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.888272   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.888281   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.888290   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.888299   48766 command_runner.go:130] >     },
	I0829 19:55:26.888304   48766 command_runner.go:130] >     {
	I0829 19:55:26.888312   48766 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0829 19:55:26.888318   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.888323   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0829 19:55:26.888331   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888341   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.888355   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0829 19:55:26.888370   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0829 19:55:26.888379   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888389   48766 command_runner.go:130] >       "size": "1363676",
	I0829 19:55:26.888399   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.888407   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.888411   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.888420   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.888429   48766 command_runner.go:130] >     },
	I0829 19:55:26.888438   48766 command_runner.go:130] >     {
	I0829 19:55:26.888450   48766 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0829 19:55:26.888459   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.888471   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0829 19:55:26.888476   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888485   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.888494   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0829 19:55:26.888513   48766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0829 19:55:26.888522   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888529   48766 command_runner.go:130] >       "size": "31470524",
	I0829 19:55:26.888536   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.888546   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.888554   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.888564   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.888571   48766 command_runner.go:130] >     },
	I0829 19:55:26.888575   48766 command_runner.go:130] >     {
	I0829 19:55:26.888587   48766 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0829 19:55:26.888597   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.888608   48766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0829 19:55:26.888614   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888624   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.888647   48766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0829 19:55:26.888659   48766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0829 19:55:26.888666   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888672   48766 command_runner.go:130] >       "size": "61245718",
	I0829 19:55:26.888682   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.888692   48766 command_runner.go:130] >       "username": "nonroot",
	I0829 19:55:26.888701   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.888710   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.888718   48766 command_runner.go:130] >     },
	I0829 19:55:26.888723   48766 command_runner.go:130] >     {
	I0829 19:55:26.888737   48766 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0829 19:55:26.888745   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.888750   48766 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0829 19:55:26.888758   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888767   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.888782   48766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0829 19:55:26.888796   48766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0829 19:55:26.888804   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888813   48766 command_runner.go:130] >       "size": "149009664",
	I0829 19:55:26.888821   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.888829   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.888832   48766 command_runner.go:130] >       },
	I0829 19:55:26.888841   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.888850   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.888860   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.888868   48766 command_runner.go:130] >     },
	I0829 19:55:26.888874   48766 command_runner.go:130] >     {
	I0829 19:55:26.888886   48766 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0829 19:55:26.888895   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.888904   48766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0829 19:55:26.888912   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888916   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.888928   48766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0829 19:55:26.888942   48766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0829 19:55:26.888951   48766 command_runner.go:130] >       ],
	I0829 19:55:26.888958   48766 command_runner.go:130] >       "size": "95233506",
	I0829 19:55:26.888966   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.888973   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.888981   48766 command_runner.go:130] >       },
	I0829 19:55:26.888988   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.888996   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.889000   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.889006   48766 command_runner.go:130] >     },
	I0829 19:55:26.889012   48766 command_runner.go:130] >     {
	I0829 19:55:26.889025   48766 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0829 19:55:26.889032   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.889044   48766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0829 19:55:26.889050   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889057   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.889077   48766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0829 19:55:26.889094   48766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0829 19:55:26.889103   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889110   48766 command_runner.go:130] >       "size": "89437512",
	I0829 19:55:26.889119   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.889125   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.889131   48766 command_runner.go:130] >       },
	I0829 19:55:26.889163   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.889175   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.889181   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.889186   48766 command_runner.go:130] >     },
	I0829 19:55:26.889195   48766 command_runner.go:130] >     {
	I0829 19:55:26.889208   48766 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0829 19:55:26.889217   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.889225   48766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0829 19:55:26.889233   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889239   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.889253   48766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0829 19:55:26.889267   48766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0829 19:55:26.889276   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889282   48766 command_runner.go:130] >       "size": "92728217",
	I0829 19:55:26.889292   48766 command_runner.go:130] >       "uid": null,
	I0829 19:55:26.889302   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.889310   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.889316   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.889324   48766 command_runner.go:130] >     },
	I0829 19:55:26.889330   48766 command_runner.go:130] >     {
	I0829 19:55:26.889339   48766 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0829 19:55:26.889343   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.889351   48766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0829 19:55:26.889360   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889367   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.889381   48766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0829 19:55:26.889396   48766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0829 19:55:26.889404   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889411   48766 command_runner.go:130] >       "size": "68420936",
	I0829 19:55:26.889419   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.889424   48766 command_runner.go:130] >         "value": "0"
	I0829 19:55:26.889429   48766 command_runner.go:130] >       },
	I0829 19:55:26.889435   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.889444   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.889451   48766 command_runner.go:130] >       "pinned": false
	I0829 19:55:26.889459   48766 command_runner.go:130] >     },
	I0829 19:55:26.889465   48766 command_runner.go:130] >     {
	I0829 19:55:26.889480   48766 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0829 19:55:26.889489   48766 command_runner.go:130] >       "repoTags": [
	I0829 19:55:26.889496   48766 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0829 19:55:26.889504   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889509   48766 command_runner.go:130] >       "repoDigests": [
	I0829 19:55:26.889518   48766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0829 19:55:26.889532   48766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0829 19:55:26.889542   48766 command_runner.go:130] >       ],
	I0829 19:55:26.889548   48766 command_runner.go:130] >       "size": "742080",
	I0829 19:55:26.889557   48766 command_runner.go:130] >       "uid": {
	I0829 19:55:26.889564   48766 command_runner.go:130] >         "value": "65535"
	I0829 19:55:26.889572   48766 command_runner.go:130] >       },
	I0829 19:55:26.889579   48766 command_runner.go:130] >       "username": "",
	I0829 19:55:26.889588   48766 command_runner.go:130] >       "spec": null,
	I0829 19:55:26.889595   48766 command_runner.go:130] >       "pinned": true
	I0829 19:55:26.889599   48766 command_runner.go:130] >     }
	I0829 19:55:26.889602   48766 command_runner.go:130] >   ]
	I0829 19:55:26.889607   48766 command_runner.go:130] > }
	I0829 19:55:26.889774   48766 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:55:26.889788   48766 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:55:26.889799   48766 kubeadm.go:934] updating node { 192.168.39.245 8443 v1.31.0 crio true true} ...
	I0829 19:55:26.889930   48766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-197790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-197790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
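	The kubelet drop-in logged above is generated from a template filled with node-specific values: the versioned binary path, the hostname override, and the node IP. A minimal sketch of that idea in Go, assuming a hypothetical template (minikube's actual template lives in its source tree; the values below are copied from the log):

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// dropIn is a hypothetical stand-in for the systemd unit template
	// rendered at kubeadm.go:946 in the log above.
	const dropIn = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		err := t.Execute(os.Stdout, map[string]string{
			"Runtime":           "crio",
			"KubernetesVersion": "v1.31.0",
			"NodeName":          "multinode-197790",
			"NodeIP":            "192.168.39.245",
		})
		if err != nil {
			log.Fatal(err)
		}
	}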
	I0829 19:55:26.890014   48766 ssh_runner.go:195] Run: crio config
	I0829 19:55:26.923009   48766 command_runner.go:130] ! time="2024-08-29 19:55:26.903204041Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0829 19:55:26.928762   48766 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0829 19:55:26.942946   48766 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0829 19:55:26.942974   48766 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0829 19:55:26.942983   48766 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0829 19:55:26.942988   48766 command_runner.go:130] > #
	I0829 19:55:26.943000   48766 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0829 19:55:26.943011   48766 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0829 19:55:26.943020   48766 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0829 19:55:26.943030   48766 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0829 19:55:26.943036   48766 command_runner.go:130] > # reload'.
	I0829 19:55:26.943044   48766 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0829 19:55:26.943056   48766 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0829 19:55:26.943067   48766 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0829 19:55:26.943076   48766 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0829 19:55:26.943086   48766 command_runner.go:130] > [crio]
	I0829 19:55:26.943096   48766 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0829 19:55:26.943106   48766 command_runner.go:130] > # containers images, in this directory.
	I0829 19:55:26.943113   48766 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0829 19:55:26.943132   48766 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0829 19:55:26.943143   48766 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0829 19:55:26.943155   48766 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory, separately from Root.
	I0829 19:55:26.943165   48766 command_runner.go:130] > # imagestore = ""
	I0829 19:55:26.943174   48766 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0829 19:55:26.943186   48766 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0829 19:55:26.943193   48766 command_runner.go:130] > storage_driver = "overlay"
	I0829 19:55:26.943205   48766 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0829 19:55:26.943218   48766 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0829 19:55:26.943228   48766 command_runner.go:130] > storage_option = [
	I0829 19:55:26.943235   48766 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0829 19:55:26.943242   48766 command_runner.go:130] > ]
	I0829 19:55:26.943253   48766 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0829 19:55:26.943264   48766 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0829 19:55:26.943274   48766 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0829 19:55:26.943290   48766 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0829 19:55:26.943303   48766 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0829 19:55:26.943312   48766 command_runner.go:130] > # always happen on a node reboot
	I0829 19:55:26.943320   48766 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0829 19:55:26.943339   48766 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0829 19:55:26.943352   48766 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0829 19:55:26.943360   48766 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0829 19:55:26.943372   48766 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0829 19:55:26.943387   48766 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0829 19:55:26.943402   48766 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0829 19:55:26.943410   48766 command_runner.go:130] > # internal_wipe = true
	I0829 19:55:26.943421   48766 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0829 19:55:26.943432   48766 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0829 19:55:26.943441   48766 command_runner.go:130] > # internal_repair = false
	I0829 19:55:26.943449   48766 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0829 19:55:26.943462   48766 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0829 19:55:26.943473   48766 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0829 19:55:26.943480   48766 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0829 19:55:26.943492   48766 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0829 19:55:26.943500   48766 command_runner.go:130] > [crio.api]
	I0829 19:55:26.943507   48766 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0829 19:55:26.943516   48766 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0829 19:55:26.943524   48766 command_runner.go:130] > # IP address on which the stream server will listen.
	I0829 19:55:26.943533   48766 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0829 19:55:26.943543   48766 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0829 19:55:26.943551   48766 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0829 19:55:26.943556   48766 command_runner.go:130] > # stream_port = "0"
	I0829 19:55:26.943563   48766 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0829 19:55:26.943571   48766 command_runner.go:130] > # stream_enable_tls = false
	I0829 19:55:26.943594   48766 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0829 19:55:26.943599   48766 command_runner.go:130] > # stream_idle_timeout = ""
	I0829 19:55:26.943605   48766 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0829 19:55:26.943611   48766 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0829 19:55:26.943617   48766 command_runner.go:130] > # minutes.
	I0829 19:55:26.943621   48766 command_runner.go:130] > # stream_tls_cert = ""
	I0829 19:55:26.943627   48766 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0829 19:55:26.943635   48766 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0829 19:55:26.943639   48766 command_runner.go:130] > # stream_tls_key = ""
	I0829 19:55:26.943647   48766 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0829 19:55:26.943653   48766 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0829 19:55:26.943670   48766 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0829 19:55:26.943677   48766 command_runner.go:130] > # stream_tls_ca = ""
	I0829 19:55:26.943684   48766 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0829 19:55:26.943691   48766 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0829 19:55:26.943698   48766 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0829 19:55:26.943704   48766 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
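	Both grpc limits above are 16777216 bytes (16 MiB), overriding CRI-O's 80 MiB default. A client on the other end of the socket should allow at least the same message sizes; a minimal sketch using the k8s.io/cri-api client against the default CRI-O socket (an assumption, as this client is not part of the test run):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithDefaultCallOptions(
				grpc.MaxCallRecvMsgSize(16*1024*1024), // mirrors grpc_max_send_msg_size on the server
				grpc.MaxCallSendMsgSize(16*1024*1024), // mirrors grpc_max_recv_msg_size on the server
			))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// List images over the CRI ImageService, same data as the JSON dumps above.
		resp, err := runtimeapi.NewImageServiceClient(conn).
			ListImages(ctx, &runtimeapi.ListImagesRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("images:", len(resp.Images))
	}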
	I0829 19:55:26.943711   48766 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0829 19:55:26.943718   48766 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0829 19:55:26.943722   48766 command_runner.go:130] > [crio.runtime]
	I0829 19:55:26.943730   48766 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0829 19:55:26.943735   48766 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0829 19:55:26.943741   48766 command_runner.go:130] > # "nofile=1024:2048"
	I0829 19:55:26.943747   48766 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0829 19:55:26.943753   48766 command_runner.go:130] > # default_ulimits = [
	I0829 19:55:26.943756   48766 command_runner.go:130] > # ]
	I0829 19:55:26.943761   48766 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0829 19:55:26.943767   48766 command_runner.go:130] > # no_pivot = false
	I0829 19:55:26.943773   48766 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0829 19:55:26.943779   48766 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0829 19:55:26.943784   48766 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0829 19:55:26.943790   48766 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0829 19:55:26.943797   48766 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0829 19:55:26.943803   48766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0829 19:55:26.943809   48766 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0829 19:55:26.943814   48766 command_runner.go:130] > # Cgroup setting for conmon
	I0829 19:55:26.943824   48766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0829 19:55:26.943830   48766 command_runner.go:130] > conmon_cgroup = "pod"
	I0829 19:55:26.943839   48766 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0829 19:55:26.943843   48766 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0829 19:55:26.943852   48766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0829 19:55:26.943856   48766 command_runner.go:130] > conmon_env = [
	I0829 19:55:26.943862   48766 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0829 19:55:26.943866   48766 command_runner.go:130] > ]
	I0829 19:55:26.943871   48766 command_runner.go:130] > # Additional environment variables to set for all the
	I0829 19:55:26.943878   48766 command_runner.go:130] > # containers. These are overridden if set in the
	I0829 19:55:26.943884   48766 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0829 19:55:26.943889   48766 command_runner.go:130] > # default_env = [
	I0829 19:55:26.943893   48766 command_runner.go:130] > # ]
	I0829 19:55:26.943898   48766 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0829 19:55:26.943906   48766 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0829 19:55:26.943912   48766 command_runner.go:130] > # selinux = false
	I0829 19:55:26.943918   48766 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0829 19:55:26.943927   48766 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0829 19:55:26.943933   48766 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0829 19:55:26.943937   48766 command_runner.go:130] > # seccomp_profile = ""
	I0829 19:55:26.943942   48766 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0829 19:55:26.943950   48766 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0829 19:55:26.943964   48766 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0829 19:55:26.943971   48766 command_runner.go:130] > # which might increase security.
	I0829 19:55:26.943975   48766 command_runner.go:130] > # This option is currently deprecated,
	I0829 19:55:26.943980   48766 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0829 19:55:26.943987   48766 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0829 19:55:26.943993   48766 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0829 19:55:26.944001   48766 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0829 19:55:26.944007   48766 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0829 19:55:26.944015   48766 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0829 19:55:26.944019   48766 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:55:26.944026   48766 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0829 19:55:26.944031   48766 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0829 19:55:26.944037   48766 command_runner.go:130] > # the cgroup blockio controller.
	I0829 19:55:26.944043   48766 command_runner.go:130] > # blockio_config_file = ""
	I0829 19:55:26.944052   48766 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0829 19:55:26.944055   48766 command_runner.go:130] > # blockio parameters.
	I0829 19:55:26.944059   48766 command_runner.go:130] > # blockio_reload = false
	I0829 19:55:26.944065   48766 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0829 19:55:26.944073   48766 command_runner.go:130] > # irqbalance daemon.
	I0829 19:55:26.944078   48766 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0829 19:55:26.944084   48766 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0829 19:55:26.944092   48766 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0829 19:55:26.944099   48766 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0829 19:55:26.944107   48766 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0829 19:55:26.944114   48766 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0829 19:55:26.944121   48766 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:55:26.944126   48766 command_runner.go:130] > # rdt_config_file = ""
	I0829 19:55:26.944133   48766 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0829 19:55:26.944137   48766 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0829 19:55:26.944183   48766 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0829 19:55:26.944197   48766 command_runner.go:130] > # separate_pull_cgroup = ""
	I0829 19:55:26.944206   48766 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0829 19:55:26.944219   48766 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0829 19:55:26.944228   48766 command_runner.go:130] > # will be added.
	I0829 19:55:26.944238   48766 command_runner.go:130] > # default_capabilities = [
	I0829 19:55:26.944244   48766 command_runner.go:130] > # 	"CHOWN",
	I0829 19:55:26.944251   48766 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0829 19:55:26.944255   48766 command_runner.go:130] > # 	"FSETID",
	I0829 19:55:26.944258   48766 command_runner.go:130] > # 	"FOWNER",
	I0829 19:55:26.944264   48766 command_runner.go:130] > # 	"SETGID",
	I0829 19:55:26.944268   48766 command_runner.go:130] > # 	"SETUID",
	I0829 19:55:26.944271   48766 command_runner.go:130] > # 	"SETPCAP",
	I0829 19:55:26.944276   48766 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0829 19:55:26.944280   48766 command_runner.go:130] > # 	"KILL",
	I0829 19:55:26.944283   48766 command_runner.go:130] > # ]
	I0829 19:55:26.944291   48766 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0829 19:55:26.944300   48766 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0829 19:55:26.944304   48766 command_runner.go:130] > # add_inheritable_capabilities = false
	I0829 19:55:26.944310   48766 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0829 19:55:26.944317   48766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0829 19:55:26.944325   48766 command_runner.go:130] > default_sysctls = [
	I0829 19:55:26.944329   48766 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0829 19:55:26.944335   48766 command_runner.go:130] > ]
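	The single default sysctl above moves the start of the unprivileged port range down to 0, so a non-root process inside the container may bind ports below 1024. A tiny hypothetical check, not part of this test:

	package main

	import (
		"fmt"
		"log"
		"net"
		"os"
	)

	func main() {
		// Binding :80 would normally require root; inside a container with
		// net.ipv4.ip_unprivileged_port_start=0 it succeeds for any uid.
		ln, err := net.Listen("tcp", ":80")
		if err != nil {
			log.Fatal(err)
		}
		defer ln.Close()
		fmt.Printf("bound %s as uid %d\n", ln.Addr(), os.Getuid())
	}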
	I0829 19:55:26.944339   48766 command_runner.go:130] > # List of devices on the host that a
	I0829 19:55:26.944345   48766 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0829 19:55:26.944350   48766 command_runner.go:130] > # allowed_devices = [
	I0829 19:55:26.944354   48766 command_runner.go:130] > # 	"/dev/fuse",
	I0829 19:55:26.944359   48766 command_runner.go:130] > # ]
	I0829 19:55:26.944364   48766 command_runner.go:130] > # List of additional devices, specified as
	I0829 19:55:26.944371   48766 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0829 19:55:26.944378   48766 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0829 19:55:26.944384   48766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0829 19:55:26.944390   48766 command_runner.go:130] > # additional_devices = [
	I0829 19:55:26.944392   48766 command_runner.go:130] > # ]
	I0829 19:55:26.944398   48766 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0829 19:55:26.944403   48766 command_runner.go:130] > # cdi_spec_dirs = [
	I0829 19:55:26.944407   48766 command_runner.go:130] > # 	"/etc/cdi",
	I0829 19:55:26.944411   48766 command_runner.go:130] > # 	"/var/run/cdi",
	I0829 19:55:26.944414   48766 command_runner.go:130] > # ]
	I0829 19:55:26.944419   48766 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0829 19:55:26.944427   48766 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0829 19:55:26.944431   48766 command_runner.go:130] > # Defaults to false.
	I0829 19:55:26.944436   48766 command_runner.go:130] > # device_ownership_from_security_context = false
	I0829 19:55:26.944444   48766 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0829 19:55:26.944450   48766 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0829 19:55:26.944453   48766 command_runner.go:130] > # hooks_dir = [
	I0829 19:55:26.944457   48766 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0829 19:55:26.944461   48766 command_runner.go:130] > # ]
	I0829 19:55:26.944466   48766 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0829 19:55:26.944474   48766 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0829 19:55:26.944479   48766 command_runner.go:130] > # its default mounts from the following two files:
	I0829 19:55:26.944484   48766 command_runner.go:130] > #
	I0829 19:55:26.944490   48766 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0829 19:55:26.944498   48766 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0829 19:55:26.944503   48766 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0829 19:55:26.944510   48766 command_runner.go:130] > #
	I0829 19:55:26.944516   48766 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0829 19:55:26.944524   48766 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0829 19:55:26.944530   48766 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0829 19:55:26.944537   48766 command_runner.go:130] > #      only add mounts it finds in this file.
	I0829 19:55:26.944540   48766 command_runner.go:130] > #
	I0829 19:55:26.944544   48766 command_runner.go:130] > # default_mounts_file = ""
	I0829 19:55:26.944548   48766 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0829 19:55:26.944557   48766 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0829 19:55:26.944561   48766 command_runner.go:130] > pids_limit = 1024
	I0829 19:55:26.944566   48766 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0829 19:55:26.944577   48766 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0829 19:55:26.944585   48766 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0829 19:55:26.944592   48766 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0829 19:55:26.944610   48766 command_runner.go:130] > # log_size_max = -1
	I0829 19:55:26.944621   48766 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0829 19:55:26.944626   48766 command_runner.go:130] > # log_to_journald = false
	I0829 19:55:26.944631   48766 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0829 19:55:26.944639   48766 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0829 19:55:26.944644   48766 command_runner.go:130] > # Path to directory for container attach sockets.
	I0829 19:55:26.944648   48766 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0829 19:55:26.944653   48766 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0829 19:55:26.944659   48766 command_runner.go:130] > # bind_mount_prefix = ""
	I0829 19:55:26.944664   48766 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0829 19:55:26.944668   48766 command_runner.go:130] > # read_only = false
	I0829 19:55:26.944674   48766 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0829 19:55:26.944681   48766 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0829 19:55:26.944686   48766 command_runner.go:130] > # live configuration reload.
	I0829 19:55:26.944690   48766 command_runner.go:130] > # log_level = "info"
	I0829 19:55:26.944695   48766 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0829 19:55:26.944702   48766 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:55:26.944706   48766 command_runner.go:130] > # log_filter = ""
	I0829 19:55:26.944714   48766 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0829 19:55:26.944722   48766 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0829 19:55:26.944727   48766 command_runner.go:130] > # separated by comma.
	I0829 19:55:26.944735   48766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:55:26.944742   48766 command_runner.go:130] > # uid_mappings = ""
	I0829 19:55:26.944748   48766 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0829 19:55:26.944754   48766 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0829 19:55:26.944760   48766 command_runner.go:130] > # separated by comma.
	I0829 19:55:26.944768   48766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:55:26.944774   48766 command_runner.go:130] > # gid_mappings = ""
	I0829 19:55:26.944780   48766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0829 19:55:26.944786   48766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0829 19:55:26.944792   48766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0829 19:55:26.944802   48766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:55:26.944807   48766 command_runner.go:130] > # minimum_mappable_uid = -1
	I0829 19:55:26.944813   48766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0829 19:55:26.944821   48766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0829 19:55:26.944828   48766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0829 19:55:26.944838   48766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:55:26.944842   48766 command_runner.go:130] > # minimum_mappable_gid = -1
	I0829 19:55:26.944847   48766 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0829 19:55:26.944855   48766 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0829 19:55:26.944861   48766 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0829 19:55:26.944867   48766 command_runner.go:130] > # ctr_stop_timeout = 30
	I0829 19:55:26.944872   48766 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0829 19:55:26.944880   48766 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0829 19:55:26.944885   48766 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0829 19:55:26.944890   48766 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0829 19:55:26.944894   48766 command_runner.go:130] > drop_infra_ctr = false
	I0829 19:55:26.944901   48766 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0829 19:55:26.944906   48766 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0829 19:55:26.944915   48766 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0829 19:55:26.944920   48766 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0829 19:55:26.944929   48766 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0829 19:55:26.944934   48766 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0829 19:55:26.944942   48766 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0829 19:55:26.944947   48766 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0829 19:55:26.944952   48766 command_runner.go:130] > # shared_cpuset = ""
	I0829 19:55:26.944958   48766 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0829 19:55:26.944965   48766 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0829 19:55:26.944970   48766 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0829 19:55:26.944979   48766 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0829 19:55:26.944984   48766 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0829 19:55:26.944989   48766 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0829 19:55:26.944997   48766 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0829 19:55:26.945001   48766 command_runner.go:130] > # enable_criu_support = false
	I0829 19:55:26.945008   48766 command_runner.go:130] > # Enable/disable the generation of the container,
	I0829 19:55:26.945016   48766 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0829 19:55:26.945022   48766 command_runner.go:130] > # enable_pod_events = false
	I0829 19:55:26.945027   48766 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0829 19:55:26.945038   48766 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0829 19:55:26.945044   48766 command_runner.go:130] > # default_runtime = "runc"
	I0829 19:55:26.945049   48766 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0829 19:55:26.945056   48766 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0829 19:55:26.945066   48766 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0829 19:55:26.945074   48766 command_runner.go:130] > # creation as a file is not desired either.
	I0829 19:55:26.945082   48766 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0829 19:55:26.945089   48766 command_runner.go:130] > # the hostname is being managed dynamically.
	I0829 19:55:26.945094   48766 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0829 19:55:26.945099   48766 command_runner.go:130] > # ]
	I0829 19:55:26.945106   48766 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0829 19:55:26.945113   48766 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0829 19:55:26.945119   48766 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0829 19:55:26.945126   48766 command_runner.go:130] > # Each entry in the table should follow the format:
	I0829 19:55:26.945129   48766 command_runner.go:130] > #
	I0829 19:55:26.945133   48766 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0829 19:55:26.945137   48766 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0829 19:55:26.945160   48766 command_runner.go:130] > # runtime_type = "oci"
	I0829 19:55:26.945171   48766 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0829 19:55:26.945181   48766 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0829 19:55:26.945190   48766 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0829 19:55:26.945198   48766 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0829 19:55:26.945206   48766 command_runner.go:130] > # monitor_env = []
	I0829 19:55:26.945218   48766 command_runner.go:130] > # privileged_without_host_devices = false
	I0829 19:55:26.945227   48766 command_runner.go:130] > # allowed_annotations = []
	I0829 19:55:26.945236   48766 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0829 19:55:26.945246   48766 command_runner.go:130] > # Where:
	I0829 19:55:26.945253   48766 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0829 19:55:26.945262   48766 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0829 19:55:26.945269   48766 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0829 19:55:26.945277   48766 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0829 19:55:26.945281   48766 command_runner.go:130] > #   in $PATH.
	I0829 19:55:26.945289   48766 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0829 19:55:26.945294   48766 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0829 19:55:26.945302   48766 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0829 19:55:26.945306   48766 command_runner.go:130] > #   state.
	I0829 19:55:26.945312   48766 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0829 19:55:26.945320   48766 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0829 19:55:26.945326   48766 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0829 19:55:26.945334   48766 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0829 19:55:26.945340   48766 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0829 19:55:26.945348   48766 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0829 19:55:26.945353   48766 command_runner.go:130] > #   The currently recognized values are:
	I0829 19:55:26.945359   48766 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0829 19:55:26.945366   48766 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0829 19:55:26.945374   48766 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0829 19:55:26.945380   48766 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0829 19:55:26.945391   48766 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0829 19:55:26.945399   48766 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0829 19:55:26.945406   48766 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0829 19:55:26.945414   48766 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0829 19:55:26.945420   48766 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0829 19:55:26.945426   48766 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0829 19:55:26.945435   48766 command_runner.go:130] > #   deprecated option "conmon".
	I0829 19:55:26.945442   48766 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0829 19:55:26.945449   48766 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0829 19:55:26.945455   48766 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0829 19:55:26.945462   48766 command_runner.go:130] > #   should be moved to the container's cgroup
	I0829 19:55:26.945469   48766 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0829 19:55:26.945476   48766 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0829 19:55:26.945482   48766 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0829 19:55:26.945492   48766 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0829 19:55:26.945497   48766 command_runner.go:130] > #
	I0829 19:55:26.945502   48766 command_runner.go:130] > # Using the seccomp notifier feature:
	I0829 19:55:26.945507   48766 command_runner.go:130] > #
	I0829 19:55:26.945513   48766 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0829 19:55:26.945523   48766 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0829 19:55:26.945527   48766 command_runner.go:130] > #
	I0829 19:55:26.945533   48766 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0829 19:55:26.945543   48766 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0829 19:55:26.945546   48766 command_runner.go:130] > #
	I0829 19:55:26.945554   48766 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0829 19:55:26.945558   48766 command_runner.go:130] > # feature.
	I0829 19:55:26.945561   48766 command_runner.go:130] > #
	I0829 19:55:26.945566   48766 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0829 19:55:26.945578   48766 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0829 19:55:26.945587   48766 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0829 19:55:26.945593   48766 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0829 19:55:26.945601   48766 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0829 19:55:26.945604   48766 command_runner.go:130] > #
	I0829 19:55:26.945611   48766 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0829 19:55:26.945619   48766 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0829 19:55:26.945622   48766 command_runner.go:130] > #
	I0829 19:55:26.945628   48766 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0829 19:55:26.945636   48766 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0829 19:55:26.945639   48766 command_runner.go:130] > #
	I0829 19:55:26.945645   48766 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0829 19:55:26.945653   48766 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0829 19:55:26.945656   48766 command_runner.go:130] > # limitation.
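
For context, a minimal sketch of the notifier wiring described above (the drop-in path, handler entry, and pod are illustrative assumptions, not taken from this run):

# 1. Allow the annotation on the runtime handler (hypothetical drop-in file):
sudo tee /etc/crio/crio.conf.d/10-seccomp-notifier.conf <<'EOF'
[crio.runtime.runtimes.runc]
runtime_path = "/usr/bin/runc"
allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
EOF
sudo systemctl restart crio

# 2. Opt a pod in; restartPolicy must be Never, per the note above:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-notifier-demo        # hypothetical name
  annotations:
    io.kubernetes.cri-o.seccompNotifierAction: "stop"
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: gcr.io/k8s-minikube/busybox
    command: ["sleep", "3600"]
EOF
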
	I0829 19:55:26.945664   48766 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0829 19:55:26.945669   48766 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0829 19:55:26.945673   48766 command_runner.go:130] > runtime_type = "oci"
	I0829 19:55:26.945678   48766 command_runner.go:130] > runtime_root = "/run/runc"
	I0829 19:55:26.945684   48766 command_runner.go:130] > runtime_config_path = ""
	I0829 19:55:26.945689   48766 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0829 19:55:26.945696   48766 command_runner.go:130] > monitor_cgroup = "pod"
	I0829 19:55:26.945699   48766 command_runner.go:130] > monitor_exec_cgroup = ""
	I0829 19:55:26.945705   48766 command_runner.go:130] > monitor_env = [
	I0829 19:55:26.945713   48766 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0829 19:55:26.945717   48766 command_runner.go:130] > ]
	I0829 19:55:26.945721   48766 command_runner.go:130] > privileged_without_host_devices = false
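
The runc table above instantiates the documented format; a hypothetical drop-in registering a second handler the same way (crun and its paths are assumptions, not part of this run) might look like:

sudo tee /etc/crio/crio.conf.d/20-crun.conf <<'EOF'
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"   # assumed install location; verify with "which crun"
runtime_type = "oci"
runtime_root = "/run/crun"
monitor_path = "/usr/libexec/crio/conmon"
EOF
sudo systemctl restart crio

Pods would then select the handler through a RuntimeClass whose handler field is "crun".
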
	I0829 19:55:26.945729   48766 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0829 19:55:26.945734   48766 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0829 19:55:26.945743   48766 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0829 19:55:26.945750   48766 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0829 19:55:26.945759   48766 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0829 19:55:26.945765   48766 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0829 19:55:26.945774   48766 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0829 19:55:26.945783   48766 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0829 19:55:26.945788   48766 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0829 19:55:26.945795   48766 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0829 19:55:26.945798   48766 command_runner.go:130] > # Example:
	I0829 19:55:26.945803   48766 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0829 19:55:26.945808   48766 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0829 19:55:26.945812   48766 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0829 19:55:26.945817   48766 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0829 19:55:26.945820   48766 command_runner.go:130] > # cpuset = "0-1"
	I0829 19:55:26.945824   48766 command_runner.go:130] > # cpushares = "5"
	I0829 19:55:26.945828   48766 command_runner.go:130] > # Where:
	I0829 19:55:26.945832   48766 command_runner.go:130] > # The workload name is workload-type.
	I0829 19:55:26.945838   48766 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0829 19:55:26.945843   48766 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0829 19:55:26.945848   48766 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0829 19:55:26.945855   48766 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0829 19:55:26.945861   48766 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0829 19:55:26.945865   48766 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0829 19:55:26.945871   48766 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0829 19:55:26.945875   48766 command_runner.go:130] > # Default value is set to true
	I0829 19:55:26.945879   48766 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0829 19:55:26.945884   48766 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0829 19:55:26.945889   48766 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0829 19:55:26.945893   48766 command_runner.go:130] > # Default value is set to 'false'
	I0829 19:55:26.945897   48766 command_runner.go:130] > # disable_hostport_mapping = false
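
Pulling the workload example above together, a sketch of a pod that opts into the hypothetical "workload-type" workload and overrides cpushares for one container (names and value are illustrative, following the annotation form shown above):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo                        # hypothetical name
  annotations:
    io.crio/workload: ""                     # activation: key only, value ignored
    io.crio.workload-type/ctr: '{"cpushares": "512"}'
spec:
  containers:
  - name: ctr
    image: gcr.io/k8s-minikube/busybox
    command: ["sleep", "3600"]
EOF
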
	I0829 19:55:26.945903   48766 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0829 19:55:26.945906   48766 command_runner.go:130] > #
	I0829 19:55:26.945912   48766 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0829 19:55:26.945918   48766 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0829 19:55:26.945924   48766 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0829 19:55:26.945929   48766 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0829 19:55:26.945934   48766 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0829 19:55:26.945938   48766 command_runner.go:130] > [crio.image]
	I0829 19:55:26.945943   48766 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0829 19:55:26.945947   48766 command_runner.go:130] > # default_transport = "docker://"
	I0829 19:55:26.945952   48766 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0829 19:55:26.945958   48766 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0829 19:55:26.945964   48766 command_runner.go:130] > # global_auth_file = ""
	I0829 19:55:26.945969   48766 command_runner.go:130] > # The image used to instantiate infra containers.
	I0829 19:55:26.945973   48766 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:55:26.945977   48766 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0829 19:55:26.945983   48766 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0829 19:55:26.945988   48766 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0829 19:55:26.945993   48766 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:55:26.945997   48766 command_runner.go:130] > # pause_image_auth_file = ""
	I0829 19:55:26.946002   48766 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0829 19:55:26.946008   48766 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0829 19:55:26.946013   48766 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0829 19:55:26.946019   48766 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0829 19:55:26.946022   48766 command_runner.go:130] > # pause_command = "/pause"
	I0829 19:55:26.946027   48766 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0829 19:55:26.946033   48766 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0829 19:55:26.946039   48766 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0829 19:55:26.946046   48766 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0829 19:55:26.946052   48766 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0829 19:55:26.946058   48766 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0829 19:55:26.946065   48766 command_runner.go:130] > # pinned_images = [
	I0829 19:55:26.946068   48766 command_runner.go:130] > # ]
	I0829 19:55:26.946074   48766 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0829 19:55:26.946080   48766 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0829 19:55:26.946086   48766 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0829 19:55:26.946095   48766 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0829 19:55:26.946100   48766 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0829 19:55:26.946104   48766 command_runner.go:130] > # signature_policy = ""
	I0829 19:55:26.946109   48766 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0829 19:55:26.946119   48766 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0829 19:55:26.946125   48766 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0829 19:55:26.946133   48766 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0829 19:55:26.946139   48766 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0829 19:55:26.946146   48766 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0829 19:55:26.946154   48766 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0829 19:55:26.946167   48766 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0829 19:55:26.946176   48766 command_runner.go:130] > # changing them here.
	I0829 19:55:26.946183   48766 command_runner.go:130] > # insecure_registries = [
	I0829 19:55:26.946188   48766 command_runner.go:130] > # ]
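
As the comment above recommends, insecure registries are better declared system-wide in containers-registries.conf(5); a hypothetical entry (the registry host is an assumption):

sudo tee -a /etc/containers/registries.conf <<'EOF'
[[registry]]
location = "registry.example.internal:5000"
insecure = true
EOF
sudo systemctl restart crio
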
	I0829 19:55:26.946199   48766 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0829 19:55:26.946209   48766 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0829 19:55:26.946216   48766 command_runner.go:130] > # image_volumes = "mkdir"
	I0829 19:55:26.946226   48766 command_runner.go:130] > # Temporary directory to use for storing big files
	I0829 19:55:26.946234   48766 command_runner.go:130] > # big_files_temporary_dir = ""
	I0829 19:55:26.946246   48766 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0829 19:55:26.946253   48766 command_runner.go:130] > # CNI plugins.
	I0829 19:55:26.946257   48766 command_runner.go:130] > [crio.network]
	I0829 19:55:26.946265   48766 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0829 19:55:26.946271   48766 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0829 19:55:26.946277   48766 command_runner.go:130] > # cni_default_network = ""
	I0829 19:55:26.946282   48766 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0829 19:55:26.946289   48766 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0829 19:55:26.946294   48766 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0829 19:55:26.946300   48766 command_runner.go:130] > # plugin_dirs = [
	I0829 19:55:26.946305   48766 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0829 19:55:26.946310   48766 command_runner.go:130] > # ]
	I0829 19:55:26.946315   48766 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0829 19:55:26.946321   48766 command_runner.go:130] > [crio.metrics]
	I0829 19:55:26.946326   48766 command_runner.go:130] > # Globally enable or disable metrics support.
	I0829 19:55:26.946330   48766 command_runner.go:130] > enable_metrics = true
	I0829 19:55:26.946335   48766 command_runner.go:130] > # Specify enabled metrics collectors.
	I0829 19:55:26.946340   48766 command_runner.go:130] > # Per default all metrics are enabled.
	I0829 19:55:26.946348   48766 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0829 19:55:26.946354   48766 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0829 19:55:26.946362   48766 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0829 19:55:26.946366   48766 command_runner.go:130] > # metrics_collectors = [
	I0829 19:55:26.946373   48766 command_runner.go:130] > # 	"operations",
	I0829 19:55:26.946377   48766 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0829 19:55:26.946381   48766 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0829 19:55:26.946385   48766 command_runner.go:130] > # 	"operations_errors",
	I0829 19:55:26.946389   48766 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0829 19:55:26.946394   48766 command_runner.go:130] > # 	"image_pulls_by_name",
	I0829 19:55:26.946398   48766 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0829 19:55:26.946404   48766 command_runner.go:130] > # 	"image_pulls_failures",
	I0829 19:55:26.946409   48766 command_runner.go:130] > # 	"image_pulls_successes",
	I0829 19:55:26.946415   48766 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0829 19:55:26.946419   48766 command_runner.go:130] > # 	"image_layer_reuse",
	I0829 19:55:26.946424   48766 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0829 19:55:26.946429   48766 command_runner.go:130] > # 	"containers_oom_total",
	I0829 19:55:26.946435   48766 command_runner.go:130] > # 	"containers_oom",
	I0829 19:55:26.946441   48766 command_runner.go:130] > # 	"processes_defunct",
	I0829 19:55:26.946445   48766 command_runner.go:130] > # 	"operations_total",
	I0829 19:55:26.946449   48766 command_runner.go:130] > # 	"operations_latency_seconds",
	I0829 19:55:26.946454   48766 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0829 19:55:26.946460   48766 command_runner.go:130] > # 	"operations_errors_total",
	I0829 19:55:26.946463   48766 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0829 19:55:26.946468   48766 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0829 19:55:26.946472   48766 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0829 19:55:26.946478   48766 command_runner.go:130] > # 	"image_pulls_success_total",
	I0829 19:55:26.946482   48766 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0829 19:55:26.946486   48766 command_runner.go:130] > # 	"containers_oom_count_total",
	I0829 19:55:26.946491   48766 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0829 19:55:26.946497   48766 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0829 19:55:26.946501   48766 command_runner.go:130] > # ]
	I0829 19:55:26.946506   48766 command_runner.go:130] > # The port on which the metrics server will listen.
	I0829 19:55:26.946512   48766 command_runner.go:130] > # metrics_port = 9090
	I0829 19:55:26.946516   48766 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0829 19:55:26.946520   48766 command_runner.go:130] > # metrics_socket = ""
	I0829 19:55:26.946525   48766 command_runner.go:130] > # The certificate for the secure metrics server.
	I0829 19:55:26.946546   48766 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0829 19:55:26.946557   48766 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0829 19:55:26.946563   48766 command_runner.go:130] > # certificate on any modification event.
	I0829 19:55:26.946567   48766 command_runner.go:130] > # metrics_cert = ""
	I0829 19:55:26.946576   48766 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0829 19:55:26.946581   48766 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0829 19:55:26.946586   48766 command_runner.go:130] > # metrics_key = ""
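
Since enable_metrics = true in this configuration, the collectors listed above should be scrapable on the node; a quick sketch against the default metrics_port:

# Pull a few CRI-O operation counters from the Prometheus endpoint:
curl -s http://127.0.0.1:9090/metrics | grep '^crio_operations' | head
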
	I0829 19:55:26.946592   48766 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0829 19:55:26.946598   48766 command_runner.go:130] > [crio.tracing]
	I0829 19:55:26.946603   48766 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0829 19:55:26.946610   48766 command_runner.go:130] > # enable_tracing = false
	I0829 19:55:26.946616   48766 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0829 19:55:26.946623   48766 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0829 19:55:26.946630   48766 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0829 19:55:26.946637   48766 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0829 19:55:26.946641   48766 command_runner.go:130] > # CRI-O NRI configuration.
	I0829 19:55:26.946644   48766 command_runner.go:130] > [crio.nri]
	I0829 19:55:26.946648   48766 command_runner.go:130] > # Globally enable or disable NRI.
	I0829 19:55:26.946656   48766 command_runner.go:130] > # enable_nri = false
	I0829 19:55:26.946662   48766 command_runner.go:130] > # NRI socket to listen on.
	I0829 19:55:26.946667   48766 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0829 19:55:26.946674   48766 command_runner.go:130] > # NRI plugin directory to use.
	I0829 19:55:26.946678   48766 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0829 19:55:26.946683   48766 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0829 19:55:26.946690   48766 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0829 19:55:26.946695   48766 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0829 19:55:26.946701   48766 command_runner.go:130] > # nri_disable_connections = false
	I0829 19:55:26.946708   48766 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0829 19:55:26.946715   48766 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0829 19:55:26.946720   48766 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0829 19:55:26.946725   48766 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0829 19:55:26.946730   48766 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0829 19:55:26.946734   48766 command_runner.go:130] > [crio.stats]
	I0829 19:55:26.946740   48766 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0829 19:55:26.946745   48766 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0829 19:55:26.946751   48766 command_runner.go:130] > # stats_collection_period = 0
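
The block above is the commented effective configuration CRI-O reports; to reproduce it by hand on the node, something like the following should work (a sketch, using the default paths from this config):

# Print the configuration CRI-O computes from its config file and drop-ins:
sudo crio config | less

# Or read the sources directly:
sudo cat /etc/crio/crio.conf /etc/crio/crio.conf.d/*.conf
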
	I0829 19:55:26.946864   48766 cni.go:84] Creating CNI manager for ""
	I0829 19:55:26.946874   48766 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0829 19:55:26.946885   48766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:55:26.946904   48766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.245 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-197790 NodeName:multinode-197790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:55:26.947029   48766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-197790"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
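
A config like the one above can be sanity-checked before kubeadm consumes it; a sketch against the file minikube uploads below (kubeadm config validate exists in recent kubeadm releases; treat the exact invocation as an assumption):

# Validate the uploaded config without touching the cluster:
sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new

# Compare against the defaults kubeadm would otherwise use:
kubeadm config print init-defaults
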
	
	I0829 19:55:26.947085   48766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:55:26.958295   48766 command_runner.go:130] > kubeadm
	I0829 19:55:26.958316   48766 command_runner.go:130] > kubectl
	I0829 19:55:26.958322   48766 command_runner.go:130] > kubelet
	I0829 19:55:26.958427   48766 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:55:26.958485   48766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:55:26.968526   48766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0829 19:55:26.984977   48766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:55:27.001098   48766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0829 19:55:27.017278   48766 ssh_runner.go:195] Run: grep 192.168.39.245	control-plane.minikube.internal$ /etc/hosts
	I0829 19:55:27.020920   48766 command_runner.go:130] > 192.168.39.245	control-plane.minikube.internal
	I0829 19:55:27.020975   48766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:55:27.156038   48766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:55:27.171610   48766 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790 for IP: 192.168.39.245
	I0829 19:55:27.171632   48766 certs.go:194] generating shared ca certs ...
	I0829 19:55:27.171651   48766 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:55:27.171844   48766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 19:55:27.171900   48766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 19:55:27.171913   48766 certs.go:256] generating profile certs ...
	I0829 19:55:27.172006   48766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/client.key
	I0829 19:55:27.172086   48766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/apiserver.key.c28e1b40
	I0829 19:55:27.172129   48766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/proxy-client.key
	I0829 19:55:27.172140   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 19:55:27.172153   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 19:55:27.172170   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 19:55:27.172199   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 19:55:27.172218   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 19:55:27.172236   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 19:55:27.172254   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 19:55:27.172270   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 19:55:27.172334   48766 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 19:55:27.172372   48766 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 19:55:27.172383   48766 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 19:55:27.172411   48766 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 19:55:27.172435   48766 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:55:27.172459   48766 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 19:55:27.172497   48766 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 19:55:27.172525   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> /usr/share/ca-certificates/183612.pem
	I0829 19:55:27.172538   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:55:27.172550   48766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem -> /usr/share/ca-certificates/18361.pem
	I0829 19:55:27.173101   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:55:27.196724   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 19:55:27.219538   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:55:27.242466   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:55:27.266438   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 19:55:27.290051   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:55:27.312847   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:55:27.335624   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/multinode-197790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:55:27.360765   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 19:55:27.383755   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:55:27.409061   48766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 19:55:27.432699   48766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:55:27.449324   48766 ssh_runner.go:195] Run: openssl version
	I0829 19:55:27.455310   48766 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0829 19:55:27.455407   48766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 19:55:27.466161   48766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 19:55:27.470528   48766 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 19:55:27.470559   48766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 19:55:27.470608   48766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 19:55:27.475998   48766 command_runner.go:130] > 3ec20f2e
	I0829 19:55:27.476059   48766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:55:27.485202   48766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:55:27.495721   48766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:55:27.499799   48766 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:55:27.499926   48766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:55:27.499961   48766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:55:27.505205   48766 command_runner.go:130] > b5213941
	I0829 19:55:27.505473   48766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:55:27.514997   48766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 19:55:27.536644   48766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 19:55:27.546030   48766 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 19:55:27.553666   48766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 19:55:27.553726   48766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 19:55:27.571612   48766 command_runner.go:130] > 51391683
	I0829 19:55:27.571797   48766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
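
The hash-and-symlink steps above follow OpenSSL's CA lookup convention: each certificate under /etc/ssl/certs is found through a link named after its subject hash. Condensed for a single cert from this run:

CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints b5213941 for this CA
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # ".0": first cert with this hash
openssl verify "$CERT"                          # should now resolve via the link
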
	I0829 19:55:27.587671   48766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:55:27.594459   48766 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:55:27.594480   48766 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0829 19:55:27.594486   48766 command_runner.go:130] > Device: 253,1	Inode: 2103318     Links: 1
	I0829 19:55:27.594493   48766 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0829 19:55:27.594499   48766 command_runner.go:130] > Access: 2024-08-29 19:48:48.214499508 +0000
	I0829 19:55:27.594503   48766 command_runner.go:130] > Modify: 2024-08-29 19:48:48.214499508 +0000
	I0829 19:55:27.594508   48766 command_runner.go:130] > Change: 2024-08-29 19:48:48.214499508 +0000
	I0829 19:55:27.594512   48766 command_runner.go:130] >  Birth: 2024-08-29 19:48:48.214499508 +0000
	I0829 19:55:27.596111   48766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:55:27.604636   48766 command_runner.go:130] > Certificate will not expire
	I0829 19:55:27.605101   48766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:55:27.613528   48766 command_runner.go:130] > Certificate will not expire
	I0829 19:55:27.613695   48766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:55:27.619704   48766 command_runner.go:130] > Certificate will not expire
	I0829 19:55:27.619847   48766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:55:27.630126   48766 command_runner.go:130] > Certificate will not expire
	I0829 19:55:27.630326   48766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:55:27.637169   48766 command_runner.go:130] > Certificate will not expire
	I0829 19:55:27.637373   48766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:55:27.644412   48766 command_runner.go:130] > Certificate will not expire
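
The repeated -checkend probes above amount to a 24-hour expiry sweep over the control-plane certificates; the same check written as a loop (paths as in this run):

for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
         /var/lib/minikube/certs/etcd/server.crt \
         /var/lib/minikube/certs/front-proxy-client.crt; do
  # -checkend 86400 exits non-zero if the cert expires within 24h
  openssl x509 -noout -in "$c" -checkend 86400 \
    && echo "$c: ok" || echo "$c: expiring within 24h"
done
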
	I0829 19:55:27.644477   48766 kubeadm.go:392] StartCluster: {Name:multinode-197790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-197790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.247 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.131 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:55:27.644592   48766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:55:27.644641   48766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:55:27.728001   48766 command_runner.go:130] > 6416a8f78390db4ac05922480ada0abfe0d1dbe01b32e2153f4cdef264b67a68
	I0829 19:55:27.728023   48766 command_runner.go:130] > fc1b7bdfd5ba3dea8deda18eb3dcf557e0c812d2070d008a379f28ded006055b
	I0829 19:55:27.728032   48766 command_runner.go:130] > 1a811445cc83f2e8b972591baff6b04e9f09f97ba210724a9d8f22c91989fa5d
	I0829 19:55:27.728055   48766 command_runner.go:130] > e676040af61f22a0c5fce59e3da10bb089a18f9ac5e399ff933823362e94206c
	I0829 19:55:27.728063   48766 command_runner.go:130] > 286aa4b9e2fe4ed29522efd5138da93f0d64acbe34bd5ee5d2b1cc688451cca2
	I0829 19:55:27.728072   48766 command_runner.go:130] > 3fa8c0876c238c0b2141bf4cf896f8c40699078db6f5e3d55a209627ea097d1e
	I0829 19:55:27.728080   48766 command_runner.go:130] > 4e91e07c89ec5477b7fdae0dc2885abb50515ad9a90d475f0a4a97b15ec8d337
	I0829 19:55:27.728094   48766 command_runner.go:130] > 8cce534dca4077aaa0eba62fee7ef0b41f7528d06ebd1c677a6741e47b378c6d
	I0829 19:55:27.728122   48766 cri.go:89] found id: "6416a8f78390db4ac05922480ada0abfe0d1dbe01b32e2153f4cdef264b67a68"
	I0829 19:55:27.728134   48766 cri.go:89] found id: "fc1b7bdfd5ba3dea8deda18eb3dcf557e0c812d2070d008a379f28ded006055b"
	I0829 19:55:27.728139   48766 cri.go:89] found id: "1a811445cc83f2e8b972591baff6b04e9f09f97ba210724a9d8f22c91989fa5d"
	I0829 19:55:27.728144   48766 cri.go:89] found id: "e676040af61f22a0c5fce59e3da10bb089a18f9ac5e399ff933823362e94206c"
	I0829 19:55:27.728149   48766 cri.go:89] found id: "286aa4b9e2fe4ed29522efd5138da93f0d64acbe34bd5ee5d2b1cc688451cca2"
	I0829 19:55:27.728156   48766 cri.go:89] found id: "3fa8c0876c238c0b2141bf4cf896f8c40699078db6f5e3d55a209627ea097d1e"
	I0829 19:55:27.728160   48766 cri.go:89] found id: "4e91e07c89ec5477b7fdae0dc2885abb50515ad9a90d475f0a4a97b15ec8d337"
	I0829 19:55:27.728164   48766 cri.go:89] found id: "8cce534dca4077aaa0eba62fee7ef0b41f7528d06ebd1c677a6741e47b378c6d"
	I0829 19:55:27.728169   48766 cri.go:89] found id: ""
	I0829 19:55:27.728207   48766 ssh_runner.go:195] Run: sudo runc list -f json
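
Each id found above can be drilled into with crictl; a sketch reusing the first id from this run:

# List kube-system containers the way minikube queried them above:
sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system

# Inspect one of the returned ids (full JSON; trimmed here):
sudo crictl inspect 6416a8f78390db4ac05922480ada0abfe0d1dbe01b32e2153f4cdef264b67a68 | head -n 20
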
	
	
	==> CRI-O <==
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.439872273Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961578439849767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7233d677-05c6-43b1-84ef-8f7012849cd2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.440505638Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a4bc765-c458-4334-ba84-1c96cc7c2037 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.440566625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a4bc765-c458-4334-ba84-1c96cc7c2037 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.440995638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:975d4818818c9de94a7844c12e15f30f5b49c551192ef5e12e28c9156df31b85,PodSandboxId:1180a709ee431eebd683e316be265db41b8e84c1903bf9ad9c4e5516298b45ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724961368242157908,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zglxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf1e6921-597a-4b85-873d-fa6578ac26a7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2286a2b13ba4fc4fb058428b09c9dbca4b3c192c351bfa2a21baeed268364dbb,PodSandboxId:4043b3181266550e7053efa329d8e3f1edd1aae9aa02b04d571da5903cb13699,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724961334681439917,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nbcg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b6fbd7-8314-4621-97b9-33e55ba5797f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9253d9c8d0ea7633c2f3277a98a78bf1fe87a553b56a63453809b5eb587a4af0,PodSandboxId:b360bedefdc4253fee7470b465085edbc8f8fe68b90d4191f4671cf8bb3c0c4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724961334618597859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6qz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d6839e-102f-4f62-a6bb-2973abdbfc39,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9d28238a802bd92017d51fb4a7d86879632675ce30c2f03ef1f53599300697,PodSandboxId:5a345d3556a28af490ec205934f27dadfe48e56251400084e176b19f034bc1c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724961334554823068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61145331-663c-404f-9c46-3eb3bc0cb49a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9d9f10866771e532eeb4564f1df6410e9ed5256f0527579efd3c90184a0824,PodSandboxId:2f04c9e7f0430c4b4f7bd123485344242d9a9fc2a0abba9595941976acbab6b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724961334525986710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xdb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7f904d-28eb-4936-b01c-0f50d3d9c122,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e97593be7b5873f114cb6dd2ad4caf2b3c2260b0b81177da3b4994bb0e3bd0,PodSandboxId:fd424287e4cdf4554764a657b45f123a87ffe88b5e9e55b8b19c10e6ac55a86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724961330776943964,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3047d2e2afb241cbe7dea32bac3d4e39,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1ca0678b248f6b562721904843ac20264ab8cd7b69b73c49d244b14fc3cdd2a,PodSandboxId:0bac65f2da1ad52d116318976f0b2b2730907a42b4d00366b03f19eefad2b17f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724961330747042460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d42c7ee029acec4350042255b5d743,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9550962f5646b8b1fbccdaafc3c325a4cea964f293e546e4400ed68b2d2c043,PodSandboxId:1cdef8ba76f16bbd7d19e5484b2ef8f8424e1ba38df69a210503054574bc2c3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724961330690182534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880f3a938902567970348e90e40686dd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a456c840add35f908f85941763797193b2e5ee9b05c1e8c1a705ea1a443aa8f,PodSandboxId:27cc195a9cf0b2cd52c45e2f61eb7c24fd057f52101c135feebf4f10abfd9519,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724961330492106092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f51f296d9b2ed8e98f1266232d7508c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bcbe264234acc65c19b7037165725259fa40f8dff1a5ae8b24e2fc8e3f6adc3,PodSandboxId:27cc195a9cf0b2cd52c45e2f61eb7c24fd057f52101c135feebf4f10abfd9519,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724961327640225575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f51f296d9b2ed8e98f1266232d7508c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a80093540b50a0d494482b179c42034ea5e85fd46b8832d41193499b85012a8,PodSandboxId:61d2696b65c89b335d77db0b7d0e6575a22f33365223f0a4c757ade2400dd3c7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724961011618006152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zglxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf1e6921-597a-4b85-873d-fa6578ac26a7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6416a8f78390db4ac05922480ada0abfe0d1dbe01b32e2153f4cdef264b67a68,PodSandboxId:086d9afeac18d2806c03dd87f24bd2b5f41694e9d4c2e6fb392690ec54b30aa3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724960957883799326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6qz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d6839e-102f-4f62-a6bb-2973abdbfc39,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc1b7bdfd5ba3dea8deda18eb3dcf557e0c812d2070d008a379f28ded006055b,PodSandboxId:c4e2accec55d8288ed373c9ab8f492bf1229b5081140ab181efeef9a03b4e3ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724960957848074658,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61145331-663c-404f-9c46-3eb3bc0cb49a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a811445cc83f2e8b972591baff6b04e9f09f97ba210724a9d8f22c91989fa5d,PodSandboxId:3cbc16c4c4a2b0b42b8700850cec14813eb874de5661b3bb63fc77f2e531ab75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724960945841971568,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nbcg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
0b6fbd7-8314-4621-97b9-33e55ba5797f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e676040af61f22a0c5fce59e3da10bb089a18f9ac5e399ff933823362e94206c,PodSandboxId:ee18d8fb6c071df6376f1a50bfd9ea969a44fe1fb25db3297b0af1f7d4c9aac5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724960943061173837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xdb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7f904d-28eb-4936-b01c-0f50d3d9c12
2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286aa4b9e2fe4ed29522efd5138da93f0d64acbe34bd5ee5d2b1cc688451cca2,PodSandboxId:3477e9724991a2f13d4d8f17c591e732e1d458f3ead28d3135abbfd25e33c6f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724960932246677560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d42c7ee029acec4350
042255b5d743,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e91e07c89ec5477b7fdae0dc2885abb50515ad9a90d475f0a4a97b15ec8d337,PodSandboxId:8ae6b3e132ab34030fa07e45d84cf44a206e3f7b3135a92ee6177fd643ed499e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724960932204997681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3047d2e2afb241cbe7dea32bac3d4e39,},
Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cce534dca4077aaa0eba62fee7ef0b41f7528d06ebd1c677a6741e47b378c6d,PodSandboxId:f63365809437843caca233a45934c73628053b984d171c4bc7fd9ae43363a4bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960932121609997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880f3a938902567970348e90e40686dd,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a4bc765-c458-4334-ba84-1c96cc7c2037 name=/runtime.v1.RuntimeService/ListContainers
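
The three RPCs that repeat through this stretch of the journal (/runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, /runtime.v1.RuntimeService/ListContainers) can be reproduced by hand against the same runtime. A minimal sketch, assuming crictl is present on the node and pointed at the CRI-O socket (the default on these minikube images):

    # /runtime.v1.RuntimeService/Version
    sudo crictl version
    # /runtime.v1.ImageService/ImageFsInfo
    sudo crictl imagefsinfo
    # /runtime.v1.RuntimeService/ListContainers with an empty filter (all states)
    sudo crictl ps -a

crictl ps -a sends the same empty ContainerFilter seen in the requests above, which is why CRI-O logs "No filters were applied, returning full container list".
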
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.482556773Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7bc7867-e15e-4770-b0cb-4c40ab122833 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.482627477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7bc7867-e15e-4770-b0cb-4c40ab122833 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.483992084Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=153842e5-f178-4146-8355-3c2781272b2f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.484670904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961578484648084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=153842e5-f178-4146-8355-3c2781272b2f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.485176973Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a6455fa-96aa-4b00-957c-7db51556edd7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.485250508Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a6455fa-96aa-4b00-957c-7db51556edd7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.485637146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:975d4818818c9de94a7844c12e15f30f5b49c551192ef5e12e28c9156df31b85,PodSandboxId:1180a709ee431eebd683e316be265db41b8e84c1903bf9ad9c4e5516298b45ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724961368242157908,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zglxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf1e6921-597a-4b85-873d-fa6578ac26a7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2286a2b13ba4fc4fb058428b09c9dbca4b3c192c351bfa2a21baeed268364dbb,PodSandboxId:4043b3181266550e7053efa329d8e3f1edd1aae9aa02b04d571da5903cb13699,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724961334681439917,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nbcg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b6fbd7-8314-4621-97b9-33e55ba5797f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9253d9c8d0ea7633c2f3277a98a78bf1fe87a553b56a63453809b5eb587a4af0,PodSandboxId:b360bedefdc4253fee7470b465085edbc8f8fe68b90d4191f4671cf8bb3c0c4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724961334618597859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6qz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d6839e-102f-4f62-a6bb-2973abdbfc39,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9d28238a802bd92017d51fb4a7d86879632675ce30c2f03ef1f53599300697,PodSandboxId:5a345d3556a28af490ec205934f27dadfe48e56251400084e176b19f034bc1c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724961334554823068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61145331-663c-404f-9c46-3eb3bc0cb49a,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9d9f10866771e532eeb4564f1df6410e9ed5256f0527579efd3c90184a0824,PodSandboxId:2f04c9e7f0430c4b4f7bd123485344242d9a9fc2a0abba9595941976acbab6b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724961334525986710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xdb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7f904d-28eb-4936-b01c-0f50d3d9c122,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e97593be7b5873f114cb6dd2ad4caf2b3c2260b0b81177da3b4994bb0e3bd0,PodSandboxId:fd424287e4cdf4554764a657b45f123a87ffe88b5e9e55b8b19c10e6ac55a86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724961330776943964,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3047d2e2afb241cbe7dea32bac3d4e39,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1ca0678b248f6b562721904843ac20264ab8cd7b69b73c49d244b14fc3cdd2a,PodSandboxId:0bac65f2da1ad52d116318976f0b2b2730907a42b4d00366b03f19eefad2b17f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724961330747042460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d42c7ee029acec4350042255b5d743,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9550962f5646b8b1fbccdaafc3c325a4cea964f293e546e4400ed68b2d2c043,PodSandboxId:1cdef8ba76f16bbd7d19e5484b2ef8f8424e1ba38df69a210503054574bc2c3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724961330690182534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880f3a938902567970348e90e40686dd,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a456c840add35f908f85941763797193b2e5ee9b05c1e8c1a705ea1a443aa8f,PodSandboxId:27cc195a9cf0b2cd52c45e2f61eb7c24fd057f52101c135feebf4f10abfd9519,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724961330492106092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f51f296d9b2ed8e98f1266232d7508c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bcbe264234acc65c19b7037165725259fa40f8dff1a5ae8b24e2fc8e3f6adc3,PodSandboxId:27cc195a9cf0b2cd52c45e2f61eb7c24fd057f52101c135feebf4f10abfd9519,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724961327640225575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f51f296d9b2ed8e98f1266232d7508c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a80093540b50a0d494482b179c42034ea5e85fd46b8832d41193499b85012a8,PodSandboxId:61d2696b65c89b335d77db0b7d0e6575a22f33365223f0a4c757ade2400dd3c7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724961011618006152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zglxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf1e6921-597a-4b85-873d-fa6578ac26a7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6416a8f78390db4ac05922480ada0abfe0d1dbe01b32e2153f4cdef264b67a68,PodSandboxId:086d9afeac18d2806c03dd87f24bd2b5f41694e9d4c2e6fb392690ec54b30aa3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724960957883799326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6qz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d6839e-102f-4f62-a6bb-2973abdbfc39,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc1b7bdfd5ba3dea8deda18eb3dcf557e0c812d2070d008a379f28ded006055b,PodSandboxId:c4e2accec55d8288ed373c9ab8f492bf1229b5081140ab181efeef9a03b4e3ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724960957848074658,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61145331-663c-404f-9c46-3eb3bc0cb49a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a811445cc83f2e8b972591baff6b04e9f09f97ba210724a9d8f22c91989fa5d,PodSandboxId:3cbc16c4c4a2b0b42b8700850cec14813eb874de5661b3bb63fc77f2e531ab75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724960945841971568,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nbcg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
0b6fbd7-8314-4621-97b9-33e55ba5797f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e676040af61f22a0c5fce59e3da10bb089a18f9ac5e399ff933823362e94206c,PodSandboxId:ee18d8fb6c071df6376f1a50bfd9ea969a44fe1fb25db3297b0af1f7d4c9aac5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724960943061173837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xdb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7f904d-28eb-4936-b01c-0f50d3d9c12
2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286aa4b9e2fe4ed29522efd5138da93f0d64acbe34bd5ee5d2b1cc688451cca2,PodSandboxId:3477e9724991a2f13d4d8f17c591e732e1d458f3ead28d3135abbfd25e33c6f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724960932246677560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d42c7ee029acec4350
042255b5d743,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e91e07c89ec5477b7fdae0dc2885abb50515ad9a90d475f0a4a97b15ec8d337,PodSandboxId:8ae6b3e132ab34030fa07e45d84cf44a206e3f7b3135a92ee6177fd643ed499e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724960932204997681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3047d2e2afb241cbe7dea32bac3d4e39,},
Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cce534dca4077aaa0eba62fee7ef0b41f7528d06ebd1c677a6741e47b378c6d,PodSandboxId:f63365809437843caca233a45934c73628053b984d171c4bc7fd9ae43363a4bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960932121609997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880f3a938902567970348e90e40686dd,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a6455fa-96aa-4b00-957c-7db51556edd7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.526131441Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a68cea2-63a0-4fec-8910-5791902b3f34 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.526219649Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a68cea2-63a0-4fec-8910-5791902b3f34 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.527149814Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c2c30d2-6a3f-45b1-b445-19c54bdc4ce9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.527675018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961578527645624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c2c30d2-6a3f-45b1-b445-19c54bdc4ce9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.534177619Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=158d00f4-1fa0-493f-804f-2e6dbe0d9165 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.534257488Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=158d00f4-1fa0-493f-804f-2e6dbe0d9165 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.534600066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:975d4818818c9de94a7844c12e15f30f5b49c551192ef5e12e28c9156df31b85,PodSandboxId:1180a709ee431eebd683e316be265db41b8e84c1903bf9ad9c4e5516298b45ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724961368242157908,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zglxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf1e6921-597a-4b85-873d-fa6578ac26a7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2286a2b13ba4fc4fb058428b09c9dbca4b3c192c351bfa2a21baeed268364dbb,PodSandboxId:4043b3181266550e7053efa329d8e3f1edd1aae9aa02b04d571da5903cb13699,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724961334681439917,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nbcg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b6fbd7-8314-4621-97b9-33e55ba5797f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9253d9c8d0ea7633c2f3277a98a78bf1fe87a553b56a63453809b5eb587a4af0,PodSandboxId:b360bedefdc4253fee7470b465085edbc8f8fe68b90d4191f4671cf8bb3c0c4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724961334618597859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6qz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d6839e-102f-4f62-a6bb-2973abdbfc39,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9d28238a802bd92017d51fb4a7d86879632675ce30c2f03ef1f53599300697,PodSandboxId:5a345d3556a28af490ec205934f27dadfe48e56251400084e176b19f034bc1c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724961334554823068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61145331-663c-404f-9c46-3eb3bc0cb49a,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9d9f10866771e532eeb4564f1df6410e9ed5256f0527579efd3c90184a0824,PodSandboxId:2f04c9e7f0430c4b4f7bd123485344242d9a9fc2a0abba9595941976acbab6b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724961334525986710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xdb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7f904d-28eb-4936-b01c-0f50d3d9c122,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e97593be7b5873f114cb6dd2ad4caf2b3c2260b0b81177da3b4994bb0e3bd0,PodSandboxId:fd424287e4cdf4554764a657b45f123a87ffe88b5e9e55b8b19c10e6ac55a86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724961330776943964,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3047d2e2afb241cbe7dea32bac3d4e39,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1ca0678b248f6b562721904843ac20264ab8cd7b69b73c49d244b14fc3cdd2a,PodSandboxId:0bac65f2da1ad52d116318976f0b2b2730907a42b4d00366b03f19eefad2b17f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724961330747042460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d42c7ee029acec4350042255b5d743,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9550962f5646b8b1fbccdaafc3c325a4cea964f293e546e4400ed68b2d2c043,PodSandboxId:1cdef8ba76f16bbd7d19e5484b2ef8f8424e1ba38df69a210503054574bc2c3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724961330690182534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880f3a938902567970348e90e40686dd,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a456c840add35f908f85941763797193b2e5ee9b05c1e8c1a705ea1a443aa8f,PodSandboxId:27cc195a9cf0b2cd52c45e2f61eb7c24fd057f52101c135feebf4f10abfd9519,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724961330492106092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f51f296d9b2ed8e98f1266232d7508c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bcbe264234acc65c19b7037165725259fa40f8dff1a5ae8b24e2fc8e3f6adc3,PodSandboxId:27cc195a9cf0b2cd52c45e2f61eb7c24fd057f52101c135feebf4f10abfd9519,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724961327640225575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f51f296d9b2ed8e98f1266232d7508c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a80093540b50a0d494482b179c42034ea5e85fd46b8832d41193499b85012a8,PodSandboxId:61d2696b65c89b335d77db0b7d0e6575a22f33365223f0a4c757ade2400dd3c7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724961011618006152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zglxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf1e6921-597a-4b85-873d-fa6578ac26a7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6416a8f78390db4ac05922480ada0abfe0d1dbe01b32e2153f4cdef264b67a68,PodSandboxId:086d9afeac18d2806c03dd87f24bd2b5f41694e9d4c2e6fb392690ec54b30aa3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724960957883799326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6qz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d6839e-102f-4f62-a6bb-2973abdbfc39,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc1b7bdfd5ba3dea8deda18eb3dcf557e0c812d2070d008a379f28ded006055b,PodSandboxId:c4e2accec55d8288ed373c9ab8f492bf1229b5081140ab181efeef9a03b4e3ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724960957848074658,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61145331-663c-404f-9c46-3eb3bc0cb49a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a811445cc83f2e8b972591baff6b04e9f09f97ba210724a9d8f22c91989fa5d,PodSandboxId:3cbc16c4c4a2b0b42b8700850cec14813eb874de5661b3bb63fc77f2e531ab75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724960945841971568,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nbcg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
0b6fbd7-8314-4621-97b9-33e55ba5797f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e676040af61f22a0c5fce59e3da10bb089a18f9ac5e399ff933823362e94206c,PodSandboxId:ee18d8fb6c071df6376f1a50bfd9ea969a44fe1fb25db3297b0af1f7d4c9aac5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724960943061173837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xdb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7f904d-28eb-4936-b01c-0f50d3d9c12
2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286aa4b9e2fe4ed29522efd5138da93f0d64acbe34bd5ee5d2b1cc688451cca2,PodSandboxId:3477e9724991a2f13d4d8f17c591e732e1d458f3ead28d3135abbfd25e33c6f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724960932246677560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d42c7ee029acec4350
042255b5d743,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e91e07c89ec5477b7fdae0dc2885abb50515ad9a90d475f0a4a97b15ec8d337,PodSandboxId:8ae6b3e132ab34030fa07e45d84cf44a206e3f7b3135a92ee6177fd643ed499e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724960932204997681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3047d2e2afb241cbe7dea32bac3d4e39,},
Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cce534dca4077aaa0eba62fee7ef0b41f7528d06ebd1c677a6741e47b378c6d,PodSandboxId:f63365809437843caca233a45934c73628053b984d171c4bc7fd9ae43363a4bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960932121609997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880f3a938902567970348e90e40686dd,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=158d00f4-1fa0-493f-804f-2e6dbe0d9165 name=/runtime.v1.RuntimeService/ListContainers
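
Successive ListContainers responses in this window differ only in timestamp and request id; the 18-entry container list itself is unchanged between polls. To capture this journal stream directly while reproducing the failure, the crio unit can be read from the host; a sketch, assuming the profile name used in this run (multinode-197790):

    # Follow the CRI-O journal on the guest live
    minikube -p multinode-197790 ssh -- sudo journalctl -u crio -f
    # Or dump the most recent entries around the restart under investigation
    minikube -p multinode-197790 ssh -- sudo journalctl -u crio --no-pager -n 500
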
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.583748942Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=57185403-72ad-41d4-b44e-a20ad8a528ad name=/runtime.v1.RuntimeService/Version
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.583822202Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=57185403-72ad-41d4-b44e-a20ad8a528ad name=/runtime.v1.RuntimeService/Version
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.585419491Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6bae4ae-dbfd-4f05-aca1-22845f8148da name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.585938545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961578585910859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6bae4ae-dbfd-4f05-aca1-22845f8148da name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.586413103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fab1c79-a67f-4f9c-8a65-7bc3fdb43671 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.586470197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fab1c79-a67f-4f9c-8a65-7bc3fdb43671 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:59:38 multinode-197790 crio[2740]: time="2024-08-29 19:59:38.586972087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:975d4818818c9de94a7844c12e15f30f5b49c551192ef5e12e28c9156df31b85,PodSandboxId:1180a709ee431eebd683e316be265db41b8e84c1903bf9ad9c4e5516298b45ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724961368242157908,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zglxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf1e6921-597a-4b85-873d-fa6578ac26a7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2286a2b13ba4fc4fb058428b09c9dbca4b3c192c351bfa2a21baeed268364dbb,PodSandboxId:4043b3181266550e7053efa329d8e3f1edd1aae9aa02b04d571da5903cb13699,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724961334681439917,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nbcg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b6fbd7-8314-4621-97b9-33e55ba5797f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9253d9c8d0ea7633c2f3277a98a78bf1fe87a553b56a63453809b5eb587a4af0,PodSandboxId:b360bedefdc4253fee7470b465085edbc8f8fe68b90d4191f4671cf8bb3c0c4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724961334618597859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6qz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d6839e-102f-4f62-a6bb-2973abdbfc39,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9d28238a802bd92017d51fb4a7d86879632675ce30c2f03ef1f53599300697,PodSandboxId:5a345d3556a28af490ec205934f27dadfe48e56251400084e176b19f034bc1c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724961334554823068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61145331-663c-404f-9c46-3eb3bc0cb49a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9d9f10866771e532eeb4564f1df6410e9ed5256f0527579efd3c90184a0824,PodSandboxId:2f04c9e7f0430c4b4f7bd123485344242d9a9fc2a0abba9595941976acbab6b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724961334525986710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xdb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7f904d-28eb-4936-b01c-0f50d3d9c122,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e97593be7b5873f114cb6dd2ad4caf2b3c2260b0b81177da3b4994bb0e3bd0,PodSandboxId:fd424287e4cdf4554764a657b45f123a87ffe88b5e9e55b8b19c10e6ac55a86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724961330776943964,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3047d2e2afb241cbe7dea32bac3d4e39,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1ca0678b248f6b562721904843ac20264ab8cd7b69b73c49d244b14fc3cdd2a,PodSandboxId:0bac65f2da1ad52d116318976f0b2b2730907a42b4d00366b03f19eefad2b17f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724961330747042460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d42c7ee029acec4350042255b5d743,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9550962f5646b8b1fbccdaafc3c325a4cea964f293e546e4400ed68b2d2c043,PodSandboxId:1cdef8ba76f16bbd7d19e5484b2ef8f8424e1ba38df69a210503054574bc2c3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724961330690182534,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880f3a938902567970348e90e40686dd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a456c840add35f908f85941763797193b2e5ee9b05c1e8c1a705ea1a443aa8f,PodSandboxId:27cc195a9cf0b2cd52c45e2f61eb7c24fd057f52101c135feebf4f10abfd9519,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724961330492106092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f51f296d9b2ed8e98f1266232d7508c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bcbe264234acc65c19b7037165725259fa40f8dff1a5ae8b24e2fc8e3f6adc3,PodSandboxId:27cc195a9cf0b2cd52c45e2f61eb7c24fd057f52101c135feebf4f10abfd9519,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724961327640225575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f51f296d9b2ed8e98f1266232d7508c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a80093540b50a0d494482b179c42034ea5e85fd46b8832d41193499b85012a8,PodSandboxId:61d2696b65c89b335d77db0b7d0e6575a22f33365223f0a4c757ade2400dd3c7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724961011618006152,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zglxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf1e6921-597a-4b85-873d-fa6578ac26a7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6416a8f78390db4ac05922480ada0abfe0d1dbe01b32e2153f4cdef264b67a68,PodSandboxId:086d9afeac18d2806c03dd87f24bd2b5f41694e9d4c2e6fb392690ec54b30aa3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724960957883799326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6qz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d6839e-102f-4f62-a6bb-2973abdbfc39,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc1b7bdfd5ba3dea8deda18eb3dcf557e0c812d2070d008a379f28ded006055b,PodSandboxId:c4e2accec55d8288ed373c9ab8f492bf1229b5081140ab181efeef9a03b4e3ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724960957848074658,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61145331-663c-404f-9c46-3eb3bc0cb49a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a811445cc83f2e8b972591baff6b04e9f09f97ba210724a9d8f22c91989fa5d,PodSandboxId:3cbc16c4c4a2b0b42b8700850cec14813eb874de5661b3bb63fc77f2e531ab75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724960945841971568,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nbcg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b6fbd7-8314-4621-97b9-33e55ba5797f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e676040af61f22a0c5fce59e3da10bb089a18f9ac5e399ff933823362e94206c,PodSandboxId:ee18d8fb6c071df6376f1a50bfd9ea969a44fe1fb25db3297b0af1f7d4c9aac5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724960943061173837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xdb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7f904d-28eb-4936-b01c-0f50d3d9c122,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286aa4b9e2fe4ed29522efd5138da93f0d64acbe34bd5ee5d2b1cc688451cca2,PodSandboxId:3477e9724991a2f13d4d8f17c591e732e1d458f3ead28d3135abbfd25e33c6f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724960932246677560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d42c7ee029acec4350042255b5d743,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e91e07c89ec5477b7fdae0dc2885abb50515ad9a90d475f0a4a97b15ec8d337,PodSandboxId:8ae6b3e132ab34030fa07e45d84cf44a206e3f7b3135a92ee6177fd643ed499e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724960932204997681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3047d2e2afb241cbe7dea32bac3d4e39,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cce534dca4077aaa0eba62fee7ef0b41f7528d06ebd1c677a6741e47b378c6d,PodSandboxId:f63365809437843caca233a45934c73628053b984d171c4bc7fd9ae43363a4bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960932121609997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-197790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880f3a938902567970348e90e40686dd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8fab1c79-a67f-4f9c-8a65-7bc3fdb43671 name=/runtime.v1.RuntimeService/ListContainers
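
	The dump above is the raw payload of a single RuntimeService/ListContainers call against the CRI-O socket. For readers who want to reproduce it outside the test harness, the following is a minimal Go sketch (not part of the minikube test suite; assumes the k8s.io/cri-api and google.golang.org/grpc modules are available):

	    // Minimal sketch: issue the same RuntimeService/ListContainers call the
	    // kubelet makes, against the CRI-O socket shown in the log above. The
	    // empty filter matches crio's "No filters were applied" debug line.
	    package main

	    import (
	        "context"
	        "fmt"
	        "log"
	        "time"

	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer conn.Close()

	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()

	        client := runtimeapi.NewRuntimeServiceClient(conn)
	        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	        if err != nil {
	            log.Fatal(err)
	        }
	        for _, c := range resp.Containers {
	            // e.g. "975d4818818c9 busybox attempt=1 state=CONTAINER_RUNNING"
	            fmt.Printf("%.13s %s attempt=%d state=%s\n",
	                c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	        }
	    }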
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	975d4818818c9       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   1180a709ee431       busybox-7dff88458-zglxg
	2286a2b13ba4f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   4043b31812665       kindnet-nbcg8
	9253d9c8d0ea7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   b360bedefdc42       coredns-6f6b679f8f-h6qz7
	ea9d28238a802       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   5a345d3556a28       storage-provisioner
	5f9d9f1086677       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   2f04c9e7f0430       kube-proxy-4xdb6
	b2e97593be7b5       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   fd424287e4cdf       kube-scheduler-multinode-197790
	d1ca0678b248f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   0bac65f2da1ad       kube-controller-manager-multinode-197790
	e9550962f5646       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   1cdef8ba76f16       kube-apiserver-multinode-197790
	3a456c840add3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      2                   27cc195a9cf0b       etcd-multinode-197790
	7bcbe264234ac       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Exited              etcd                      1                   27cc195a9cf0b       etcd-multinode-197790
	7a80093540b50       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   61d2696b65c89       busybox-7dff88458-zglxg
	6416a8f78390d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   086d9afeac18d       coredns-6f6b679f8f-h6qz7
	fc1b7bdfd5ba3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   c4e2accec55d8       storage-provisioner
	1a811445cc83f       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   3cbc16c4c4a2b       kindnet-nbcg8
	e676040af61f2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   ee18d8fb6c071       kube-proxy-4xdb6
	286aa4b9e2fe4       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   3477e9724991a       kube-controller-manager-multinode-197790
	4e91e07c89ec5       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   8ae6b3e132ab3       kube-scheduler-multinode-197790
	8cce534dca407       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   f633658094378       kube-apiserver-multinode-197790
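
	Reading the ATTEMPT column above: etcd is the only workload on attempt 2, and the exited attempt 1 (7bcbe264234ac) lived at most about three seconds, per the CreatedAt timestamps in the ListContainers dump (1724961327 for attempt 1 versus 1724961330 for attempt 2). Its log below ends right after "serving peer traffic", before any election completed, so the first post-restart etcd container exited almost immediately and was retried; every other pod restarted cleanly once (attempt 1 running, attempt 0 exited).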
	
	
	==> coredns [6416a8f78390db4ac05922480ada0abfe0d1dbe01b32e2153f4cdef264b67a68] <==
	[INFO] 10.244.0.3:42970 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00177583s
	[INFO] 10.244.0.3:36660 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000065799s
	[INFO] 10.244.0.3:42818 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000180078s
	[INFO] 10.244.0.3:56882 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001313741s
	[INFO] 10.244.0.3:37647 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064097s
	[INFO] 10.244.0.3:59045 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060341s
	[INFO] 10.244.0.3:60585 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072659s
	[INFO] 10.244.1.2:38661 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133921s
	[INFO] 10.244.1.2:36878 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116694s
	[INFO] 10.244.1.2:43938 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080797s
	[INFO] 10.244.1.2:51009 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008309s
	[INFO] 10.244.0.3:44212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117287s
	[INFO] 10.244.0.3:52108 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086192s
	[INFO] 10.244.0.3:39086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061394s
	[INFO] 10.244.0.3:45833 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058547s
	[INFO] 10.244.1.2:40357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117551s
	[INFO] 10.244.1.2:40753 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000165399s
	[INFO] 10.244.1.2:40951 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000142249s
	[INFO] 10.244.1.2:46464 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000402424s
	[INFO] 10.244.0.3:45893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095092s
	[INFO] 10.244.0.3:50987 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000038924s
	[INFO] 10.244.0.3:59934 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000046287s
	[INFO] 10.244.0.3:54998 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000035255s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
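
	The mixed NXDOMAIN/NOERROR answers above for variants of kubernetes.default (kubernetes.default.default.svc.cluster.local failing, kubernetes.default.svc.cluster.local succeeding) are ordinary search-path expansion, not errors: with the pod's resolv.conf search domains and ndots:5, a short name is tried against each search domain until one answers. A minimal Go sketch, assuming it runs inside a pod in the default namespace:

	    // Minimal sketch: the Go resolver expands "kubernetes.default" through
	    // the pod's resolv.conf search domains, producing exactly the query
	    // sequence that coredns logged above.
	    package main

	    import (
	        "fmt"
	        "log"
	        "net"
	    )

	    func main() {
	        addrs, err := net.LookupHost("kubernetes.default")
	        if err != nil {
	            log.Fatal(err)
	        }
	        fmt.Println(addrs) // typically [10.96.0.1], the kubernetes Service ClusterIP
	    }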
	
	
	==> coredns [9253d9c8d0ea7633c2f3277a98a78bf1fe87a553b56a63453809b5eb587a4af0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46610 - 29536 "HINFO IN 4964473242332634487.4415841945061080405. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024884836s
	
	
	==> describe nodes <==
	Name:               multinode-197790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-197790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=multinode-197790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T19_48_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:48:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-197790
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:59:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:55:33 +0000   Thu, 29 Aug 2024 19:48:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:55:33 +0000   Thu, 29 Aug 2024 19:48:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:55:33 +0000   Thu, 29 Aug 2024 19:48:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:55:33 +0000   Thu, 29 Aug 2024 19:49:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    multinode-197790
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c8d670b0b9e84226b2a95a61cdccce2f
	  System UUID:                c8d670b0-b9e8-4226-b2a9-5a61cdccce2f
	  Boot ID:                    b1330017-d725-4ae9-bd6f-50f3ee070d30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zglxg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                 coredns-6f6b679f8f-h6qz7                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-197790                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-nbcg8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-197790             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-197790    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4xdb6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-197790             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-197790 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-197790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-197790 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-197790 event: Registered Node multinode-197790 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-197790 status is now: NodeReady
	  Normal  Starting                 4m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node multinode-197790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node multinode-197790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node multinode-197790 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node multinode-197790 event: Registered Node multinode-197790 in Controller
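
	As a sanity check on the "Allocated resources" table above: the 850m CPU request is the sum of the per-pod requests (100m coredns + 100m etcd + 100m kindnet + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler), and 850m of the node's 2 CPUs is 850/2000 = 42.5%, which kubectl rounds to 42%. Likewise the 220Mi memory request (70Mi coredns + 100Mi etcd + 50Mi kindnet) is 225280Ki of the 2164184Ki allocatable, about 10%.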
	
	
	Name:               multinode-197790-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-197790-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=multinode-197790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_56_15_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:56:15 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-197790-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:57:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 29 Aug 2024 19:56:46 +0000   Thu, 29 Aug 2024 19:57:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 29 Aug 2024 19:56:46 +0000   Thu, 29 Aug 2024 19:57:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 29 Aug 2024 19:56:46 +0000   Thu, 29 Aug 2024 19:57:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 29 Aug 2024 19:56:46 +0000   Thu, 29 Aug 2024 19:57:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    multinode-197790-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad44b7f5b2694c2ca34a341f7e607d3b
	  System UUID:                ad44b7f5-b269-4c2c-a34a-341f7e607d3b
	  Boot ID:                    e4a90c07-34f5-485f-9f54-731dcb96caf5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cq9j4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kindnet-4rd99              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m50s
	  kube-system                 kube-proxy-s65hg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m18s                  kube-proxy       
	  Normal  Starting                 9m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m50s (x2 over 9m51s)  kubelet          Node multinode-197790-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m50s (x2 over 9m51s)  kubelet          Node multinode-197790-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m50s (x2 over 9m51s)  kubelet          Node multinode-197790-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m30s                  kubelet          Node multinode-197790-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m23s (x2 over 3m23s)  kubelet          Node multinode-197790-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m23s (x2 over 3m23s)  kubelet          Node multinode-197790-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m23s (x2 over 3m23s)  kubelet          Node multinode-197790-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m4s                   kubelet          Node multinode-197790-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                   node-controller  Node multinode-197790-m02 status is now: NodeNotReady
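
	This Events table is the key failure evidence for the stop/restart sequence under test: the m02 kubelet last renewed its lease at 19:57:16 (see Lease above), and 41 seconds later, at 19:57:57, the node-controller flipped all four conditions to Unknown and marked the node NotReady, consistent with the controller-manager's default 40s node-monitor-grace-period; the unreachable NoSchedule/NoExecute taints follow from that. A minimal client-go sketch (assuming a reachable kubeconfig at the default path) that surfaces the same signal:

	    // Minimal sketch: print each node's Ready condition, the signal the
	    // node-controller used above when it marked multinode-197790-m02 NotReady.
	    package main

	    import (
	        "context"
	        "fmt"
	        "log"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            log.Fatal(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            log.Fatal(err)
	        }
	        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	        if err != nil {
	            log.Fatal(err)
	        }
	        for _, n := range nodes.Items {
	            for _, c := range n.Status.Conditions {
	                if c.Type == corev1.NodeReady {
	                    fmt.Printf("%-22s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
	                }
	            }
	        }
	    }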
	
	
	==> dmesg <==
	[  +0.058599] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059415] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.187761] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.126228] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.297147] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +3.881115] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +4.465607] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.060597] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.983475] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[  +0.078083] kauditd_printk_skb: 69 callbacks suppressed
	[Aug29 19:49] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	[  +0.110663] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.814796] kauditd_printk_skb: 60 callbacks suppressed
	[Aug29 19:50] kauditd_printk_skb: 12 callbacks suppressed
	[Aug29 19:55] systemd-fstab-generator[2666]: Ignoring "noauto" option for root device
	[  +0.140899] systemd-fstab-generator[2678]: Ignoring "noauto" option for root device
	[  +0.164421] systemd-fstab-generator[2692]: Ignoring "noauto" option for root device
	[  +0.144738] systemd-fstab-generator[2704]: Ignoring "noauto" option for root device
	[  +0.267430] systemd-fstab-generator[2732]: Ignoring "noauto" option for root device
	[  +0.666252] systemd-fstab-generator[2827]: Ignoring "noauto" option for root device
	[  +2.738623] systemd-fstab-generator[3037]: Ignoring "noauto" option for root device
	[  +0.972157] kauditd_printk_skb: 176 callbacks suppressed
	[  +6.490024] kauditd_printk_skb: 47 callbacks suppressed
	[ +12.562968] systemd-fstab-generator[3863]: Ignoring "noauto" option for root device
	[Aug29 19:56] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [3a456c840add35f908f85941763797193b2e5ee9b05c1e8c1a705ea1a443aa8f] <==
	{"level":"info","ts":"2024-08-29T19:55:31.038401Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8f5341249654324","local-member-id":"c66b2a9605a64cb6","added-peer-id":"c66b2a9605a64cb6","added-peer-peer-urls":["https://192.168.39.245:2380"]}
	{"level":"info","ts":"2024-08-29T19:55:31.038486Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8f5341249654324","local-member-id":"c66b2a9605a64cb6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:55:31.038511Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:55:31.045832Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:55:31.048420Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T19:55:31.048849Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c66b2a9605a64cb6","initial-advertise-peer-urls":["https://192.168.39.245:2380"],"listen-peer-urls":["https://192.168.39.245:2380"],"advertise-client-urls":["https://192.168.39.245:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.245:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T19:55:31.048553Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.245:2380"}
	{"level":"info","ts":"2024-08-29T19:55:31.049695Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.245:2380"}
	{"level":"info","ts":"2024-08-29T19:55:31.049597Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T19:55:32.371009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-29T19:55:32.371168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-29T19:55:32.371214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 received MsgPreVoteResp from c66b2a9605a64cb6 at term 2"}
	{"level":"info","ts":"2024-08-29T19:55:32.371260Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 became candidate at term 3"}
	{"level":"info","ts":"2024-08-29T19:55:32.371288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 received MsgVoteResp from c66b2a9605a64cb6 at term 3"}
	{"level":"info","ts":"2024-08-29T19:55:32.371322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 became leader at term 3"}
	{"level":"info","ts":"2024-08-29T19:55:32.371354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c66b2a9605a64cb6 elected leader c66b2a9605a64cb6 at term 3"}
	{"level":"info","ts":"2024-08-29T19:55:32.373962Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c66b2a9605a64cb6","local-member-attributes":"{Name:multinode-197790 ClientURLs:[https://192.168.39.245:2379]}","request-path":"/0/members/c66b2a9605a64cb6/attributes","cluster-id":"8f5341249654324","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:55:32.374057Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:55:32.374189Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T19:55:32.374226Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T19:55:32.374354Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:55:32.375592Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:55:32.376465Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.245:2379"}
	{"level":"info","ts":"2024-08-29T19:55:32.377400Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:55:32.378238Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [7bcbe264234acc65c19b7037165725259fa40f8dff1a5ae8b24e2fc8e3f6adc3] <==
	{"level":"info","ts":"2024-08-29T19:55:27.811683Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-29T19:55:27.818559Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"8f5341249654324","local-member-id":"c66b2a9605a64cb6","commit-index":929}
	{"level":"info","ts":"2024-08-29T19:55:27.818753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-29T19:55:27.818814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 became follower at term 2"}
	{"level":"info","ts":"2024-08-29T19:55:27.818850Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft c66b2a9605a64cb6 [peers: [], term: 2, commit: 929, applied: 0, lastindex: 929, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-29T19:55:27.820336Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-29T19:55:27.830928Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":854}
	{"level":"info","ts":"2024-08-29T19:55:27.832992Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-29T19:55:27.835815Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"c66b2a9605a64cb6","timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:55:27.836267Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"c66b2a9605a64cb6"}
	{"level":"info","ts":"2024-08-29T19:55:27.836332Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"c66b2a9605a64cb6","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-29T19:55:27.836575Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-29T19:55:27.836770Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T19:55:27.836827Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T19:55:27.836874Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T19:55:27.837431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c66b2a9605a64cb6 switched to configuration voters=(14297568265846017206)"}
	{"level":"info","ts":"2024-08-29T19:55:27.837509Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8f5341249654324","local-member-id":"c66b2a9605a64cb6","added-peer-id":"c66b2a9605a64cb6","added-peer-peer-urls":["https://192.168.39.245:2380"]}
	{"level":"info","ts":"2024-08-29T19:55:27.837643Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8f5341249654324","local-member-id":"c66b2a9605a64cb6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:55:27.837683Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:55:27.842309Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:55:27.844378Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T19:55:27.844607Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c66b2a9605a64cb6","initial-advertise-peer-urls":["https://192.168.39.245:2380"],"listen-peer-urls":["https://192.168.39.245:2380"],"advertise-client-urls":["https://192.168.39.245:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.245:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T19:55:27.844645Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T19:55:27.845995Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.245:2380"}
	{"level":"info","ts":"2024-08-29T19:55:27.846047Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.245:2380"}
	
	
	==> kernel <==
	 19:59:39 up 11 min,  0 users,  load average: 0.40, 0.30, 0.18
	Linux multinode-197790 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1a811445cc83f2e8b972591baff6b04e9f09f97ba210724a9d8f22c91989fa5d] <==
	I0829 19:53:06.968608       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:53:16.968556       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:53:16.968843       1 main.go:299] handling current node
	I0829 19:53:16.968887       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:53:16.968897       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:53:16.969188       1 main.go:295] Handling node with IPs: map[192.168.39.131:{}]
	I0829 19:53:16.969216       1 main.go:322] Node multinode-197790-m03 has CIDR [10.244.3.0/24] 
	I0829 19:53:26.973778       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:53:26.973849       1 main.go:299] handling current node
	I0829 19:53:26.973873       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:53:26.973879       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:53:26.974087       1 main.go:295] Handling node with IPs: map[192.168.39.131:{}]
	I0829 19:53:26.974113       1 main.go:322] Node multinode-197790-m03 has CIDR [10.244.3.0/24] 
	I0829 19:53:36.974005       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:53:36.974122       1 main.go:299] handling current node
	I0829 19:53:36.974163       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:53:36.974182       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:53:36.974363       1 main.go:295] Handling node with IPs: map[192.168.39.131:{}]
	I0829 19:53:36.974420       1 main.go:322] Node multinode-197790-m03 has CIDR [10.244.3.0/24] 
	I0829 19:53:46.975231       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:53:46.975396       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:53:46.975577       1 main.go:295] Handling node with IPs: map[192.168.39.131:{}]
	I0829 19:53:46.975639       1 main.go:322] Node multinode-197790-m03 has CIDR [10.244.3.0/24] 
	I0829 19:53:46.975862       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:53:46.975899       1 main.go:299] handling current node
	
	
	==> kindnet [2286a2b13ba4fc4fb058428b09c9dbca4b3c192c351bfa2a21baeed268364dbb] <==
	I0829 19:58:35.692184       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:58:45.692047       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:58:45.692156       1 main.go:299] handling current node
	I0829 19:58:45.692180       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:58:45.692192       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:58:55.697666       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:58:55.697798       1 main.go:299] handling current node
	I0829 19:58:55.697819       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:58:55.697825       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:59:05.694850       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:59:05.694925       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:59:05.695102       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:59:05.695129       1 main.go:299] handling current node
	I0829 19:59:15.694418       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:59:15.694453       1 main.go:299] handling current node
	I0829 19:59:15.694467       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:59:15.694471       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:59:25.694564       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:59:25.694616       1 main.go:299] handling current node
	I0829 19:59:25.694643       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:59:25.694648       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	I0829 19:59:35.692507       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0829 19:59:35.692550       1 main.go:299] handling current node
	I0829 19:59:35.692596       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0829 19:59:35.692601       1 main.go:322] Node multinode-197790-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [8cce534dca4077aaa0eba62fee7ef0b41f7528d06ebd1c677a6741e47b378c6d] <==
	I0829 19:48:55.671430       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0829 19:48:56.260885       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0829 19:48:56.303320       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0829 19:48:56.384905       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0829 19:48:56.391683       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.245]
	I0829 19:48:56.392769       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 19:48:56.398213       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0829 19:48:56.743308       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0829 19:48:57.384866       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 19:48:57.398378       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0829 19:48:57.408935       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 19:49:02.094992       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0829 19:49:02.444128       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0829 19:50:13.045360       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33116: use of closed network connection
	E0829 19:50:13.221965       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33128: use of closed network connection
	E0829 19:50:13.420051       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33144: use of closed network connection
	E0829 19:50:13.588377       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33176: use of closed network connection
	E0829 19:50:13.750652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33194: use of closed network connection
	E0829 19:50:13.912308       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33208: use of closed network connection
	E0829 19:50:14.182567       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33232: use of closed network connection
	E0829 19:50:14.350360       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33248: use of closed network connection
	E0829 19:50:14.510925       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33258: use of closed network connection
	E0829 19:50:14.674523       1 conn.go:339] Error on socket receive: read tcp 192.168.39.245:8443->192.168.39.1:33274: use of closed network connection
	I0829 19:53:54.448122       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0829 19:53:54.471287       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e9550962f5646b8b1fbccdaafc3c325a4cea964f293e546e4400ed68b2d2c043] <==
	I0829 19:55:33.710219       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0829 19:55:33.710265       1 policy_source.go:224] refreshing policies
	I0829 19:55:33.734977       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0829 19:55:33.785052       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0829 19:55:33.785331       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0829 19:55:33.786513       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0829 19:55:33.786526       1 shared_informer.go:320] Caches are synced for configmaps
	I0829 19:55:33.786622       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0829 19:55:33.788752       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0829 19:55:33.788783       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0829 19:55:33.792185       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0829 19:55:33.792297       1 aggregator.go:171] initial CRD sync complete...
	I0829 19:55:33.792331       1 autoregister_controller.go:144] Starting autoregister controller
	I0829 19:55:33.792336       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0829 19:55:33.792341       1 cache.go:39] Caches are synced for autoregister controller
	I0829 19:55:33.792927       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0829 19:55:33.801915       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0829 19:55:34.603039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0829 19:55:35.614289       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 19:55:35.767110       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0829 19:55:35.779264       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 19:55:35.858322       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0829 19:55:35.864567       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0829 19:55:37.183092       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 19:55:37.230462       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [286aa4b9e2fe4ed29522efd5138da93f0d64acbe34bd5ee5d2b1cc688451cca2] <==
	I0829 19:51:29.188899       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-197790-m02"
	I0829 19:51:29.189014       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:30.452055       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-197790-m02"
	I0829 19:51:30.454009       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-197790-m03\" does not exist"
	I0829 19:51:30.460975       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-197790-m03" podCIDRs=["10.244.3.0/24"]
	I0829 19:51:30.461301       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:30.461422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:30.472349       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:30.736352       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:31.087950       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:31.599767       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:40.549662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:49.081079       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-197790-m02"
	I0829 19:51:49.081382       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:49.093108       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:51:51.563700       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:52:31.582479       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-197790-m02"
	I0829 19:52:31.585976       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:52:31.590914       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m02"
	I0829 19:52:31.620323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:52:31.620837       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m02"
	I0829 19:52:31.627202       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.300972ms"
	I0829 19:52:31.629848       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.66µs"
	I0829 19:52:36.716408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m02"
	I0829 19:52:46.799761       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	
	
	==> kube-controller-manager [d1ca0678b248f6b562721904843ac20264ab8cd7b69b73c49d244b14fc3cdd2a] <==
	I0829 19:56:53.541814       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-197790-m03" podCIDRs=["10.244.2.0/24"]
	I0829 19:56:53.542787       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:56:53.542933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:56:53.561576       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:56:53.864992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:56:54.189160       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:56:57.062203       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:57:03.933107       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:57:12.136595       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:57:12.136789       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-197790-m02"
	I0829 19:57:12.148500       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:57:16.838642       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:57:16.859573       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:57:17.053345       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:57:17.309120       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m03"
	I0829 19:57:17.309541       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-197790-m02"
	I0829 19:57:57.074132       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m02"
	I0829 19:57:57.096439       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m02"
	I0829 19:57:57.108220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.317096ms"
	I0829 19:57:57.108906       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.383µs"
	I0829 19:58:02.190363       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-197790-m02"
	I0829 19:58:17.001582       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-g2tpt"
	I0829 19:58:17.025211       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-g2tpt"
	I0829 19:58:17.025295       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-6rnwz"
	I0829 19:58:17.045857       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-6rnwz"
	
	
	==> kube-proxy [5f9d9f10866771e532eeb4564f1df6410e9ed5256f0527579efd3c90184a0824] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:55:34.940363       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:55:34.953215       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.245"]
	E0829 19:55:34.953293       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:55:35.017047       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:55:35.017097       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:55:35.017127       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:55:35.022454       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:55:35.024127       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:55:35.024558       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:55:35.033016       1 config.go:197] "Starting service config controller"
	I0829 19:55:35.033122       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:55:35.033199       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:55:35.033223       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:55:35.033828       1 config.go:326] "Starting node config controller"
	I0829 19:55:35.033900       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:55:35.134223       1 shared_informer.go:320] Caches are synced for node config
	I0829 19:55:35.134252       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 19:55:35.134257       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [e676040af61f22a0c5fce59e3da10bb089a18f9ac5e399ff933823362e94206c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:49:03.588420       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:49:03.630697       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.245"]
	E0829 19:49:03.631120       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:49:03.689263       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:49:03.689298       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:49:03.689323       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:49:03.691918       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:49:03.692156       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:49:03.692172       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:49:03.693981       1 config.go:197] "Starting service config controller"
	I0829 19:49:03.694074       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:49:03.694159       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:49:03.694172       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:49:03.695691       1 config.go:326] "Starting node config controller"
	I0829 19:49:03.696530       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:49:03.794643       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 19:49:03.794837       1 shared_informer.go:320] Caches are synced for service config
	I0829 19:49:03.798399       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4e91e07c89ec5477b7fdae0dc2885abb50515ad9a90d475f0a4a97b15ec8d337] <==
	E0829 19:48:54.799502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:54.799539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0829 19:48:54.799584       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 19:48:54.799614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0829 19:48:54.799587       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:54.799754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 19:48:54.799801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:54.799876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 19:48:54.799932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:54.800133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0829 19:48:54.800163       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 19:48:54.800195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0829 19:48:54.800168       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:54.800224       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 19:48:54.800278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:54.800386       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 19:48:54.800480       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:55.991569       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 19:48:55.992028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:56.000458       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0829 19:48:56.000675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:48:56.011635       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 19:48:56.011682       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0829 19:48:56.392880       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0829 19:53:54.455556       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b2e97593be7b5873f114cb6dd2ad4caf2b3c2260b0b81177da3b4994bb0e3bd0] <==
	I0829 19:55:31.502180       1 serving.go:386] Generated self-signed cert in-memory
	W0829 19:55:33.682510       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0829 19:55:33.682555       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0829 19:55:33.682568       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0829 19:55:33.682576       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0829 19:55:33.721185       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0829 19:55:33.721228       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:55:33.740312       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0829 19:55:33.740479       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0829 19:55:33.740529       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 19:55:33.740563       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0829 19:55:33.841481       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 19:58:20 multinode-197790 kubelet[3044]: E0829 19:58:20.129325    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961500128921197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:58:30 multinode-197790 kubelet[3044]: E0829 19:58:30.094333    3044 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:58:30 multinode-197790 kubelet[3044]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:58:30 multinode-197790 kubelet[3044]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:58:30 multinode-197790 kubelet[3044]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:58:30 multinode-197790 kubelet[3044]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:58:30 multinode-197790 kubelet[3044]: E0829 19:58:30.131053    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961510130828217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:58:30 multinode-197790 kubelet[3044]: E0829 19:58:30.131077    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961510130828217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:58:40 multinode-197790 kubelet[3044]: E0829 19:58:40.133331    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961520132792216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:58:40 multinode-197790 kubelet[3044]: E0829 19:58:40.133473    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961520132792216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:58:50 multinode-197790 kubelet[3044]: E0829 19:58:50.136627    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961530135436860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:58:50 multinode-197790 kubelet[3044]: E0829 19:58:50.137174    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961530135436860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:59:00 multinode-197790 kubelet[3044]: E0829 19:59:00.139956    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961540139266698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:59:00 multinode-197790 kubelet[3044]: E0829 19:59:00.140047    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961540139266698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:59:10 multinode-197790 kubelet[3044]: E0829 19:59:10.143669    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961550142831285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:59:10 multinode-197790 kubelet[3044]: E0829 19:59:10.144565    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961550142831285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:59:20 multinode-197790 kubelet[3044]: E0829 19:59:20.146486    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961560146238227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:59:20 multinode-197790 kubelet[3044]: E0829 19:59:20.146532    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961560146238227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:59:30 multinode-197790 kubelet[3044]: E0829 19:59:30.094217    3044 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:59:30 multinode-197790 kubelet[3044]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:59:30 multinode-197790 kubelet[3044]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:59:30 multinode-197790 kubelet[3044]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:59:30 multinode-197790 kubelet[3044]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:59:30 multinode-197790 kubelet[3044]: E0829 19:59:30.148465    3044 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961570148146669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:59:30 multinode-197790 kubelet[3044]: E0829 19:59:30.148490    3044 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961570148146669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0829 19:59:38.160394   50675 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19530-11185/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-197790 -n multinode-197790
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-197790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.34s)
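Note: the "bufio.Scanner: token too long" failure in the stderr block above is Go's bufio.ErrTooLong: bufio.Scanner refuses any line longer than its token limit (bufio.MaxScanTokenSize, 64 KiB, by default), and lastStart.txt evidently contains a longer line. Below is a minimal sketch of the failure mode and the standard Scanner.Buffer workaround; it is illustrative only, not minikube's actual logs.go code, and the file name is simply taken from the error message.

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Open("lastStart.txt") // file with very long lines, per the error above
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        // Without this call, any line longer than bufio.MaxScanTokenSize
        // (64 KiB) aborts the scan with bufio.ErrTooLong, i.e. the
        // "bufio.Scanner: token too long" seen above.
        sc.Buffer(make([]byte, 0, 64*1024), 16*1024*1024) // raise the per-line cap to 16 MiB

        for sc.Scan() {
            _ = sc.Text() // process one log line
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "scan failed:", err)
        }
    }

The second argument to Buffer is the new maximum token size; without it, one oversized line is enough to make the whole read fail.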

TestPreload (267.82s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-967320 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0829 20:03:45.975203   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-967320 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m6.919621674s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-967320 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-967320 image pull gcr.io/k8s-minikube/busybox: (1.114517647s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-967320
E0829 20:06:37.944625   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-967320: exit status 82 (2m0.44305834s)

-- stdout --
	* Stopping node "test-preload-967320"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-967320 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-29 20:07:42.140579252 +0000 UTC m=+4326.311040497
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-967320 -n test-preload-967320
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-967320 -n test-preload-967320: exit status 3 (18.47396897s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0829 20:08:00.610928   53616 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.85:22: connect: no route to host
	E0829 20:08:00.610961   53616 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.85:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-967320" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-967320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-967320
--- FAIL: TestPreload (267.82s)
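Note: the stop failure above (exit status 82, GUEST_STOP_TIMEOUT) shows `minikube stop` spending its full ~2m0s window while the VM kept reporting state "Running". A hypothetical sketch of that stop-then-poll-until-deadline shape follows; vmState and stopWithDeadline are illustrative names under assumed behavior, not minikube's actual driver API.

    package main

    import (
        "fmt"
        "time"
    )

    // vmState stands in for a driver query; in the failing run it kept
    // returning "Running" for the entire stop window.
    func vmState() string { return "Running" }

    // stopWithDeadline polls until the VM reports "Stopped" or the deadline
    // passes, mirroring the two minutes of `Stopping node ...` seen above.
    func stopWithDeadline(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if vmState() == "Stopped" {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("unable to stop vm, current state %q", vmState())
    }

    func main() {
        // Short window for demonstration; the real run waited ~2m0s.
        if err := stopWithDeadline(3 * time.Second); err != nil {
            fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
        }
    }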

TestKubernetesUpgrade (370.04s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-714305 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-714305 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m30.741313581s)

-- stdout --
	* [kubernetes-upgrade-714305] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-714305" primary control-plane node in "kubernetes-upgrade-714305" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0829 20:09:58.546355   54721 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:09:58.546491   54721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:09:58.546502   54721 out.go:358] Setting ErrFile to fd 2...
	I0829 20:09:58.546510   54721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:09:58.546859   54721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:09:58.548062   54721 out.go:352] Setting JSON to false
	I0829 20:09:58.549004   54721 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6746,"bootTime":1724955453,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 20:09:58.549074   54721 start.go:139] virtualization: kvm guest
	I0829 20:09:58.551115   54721 out.go:177] * [kubernetes-upgrade-714305] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 20:09:58.553456   54721 notify.go:220] Checking for updates...
	I0829 20:09:58.554520   54721 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 20:09:58.556645   54721 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 20:09:58.559248   54721 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:09:58.561268   54721 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:09:58.563774   54721 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 20:09:58.566445   54721 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 20:09:58.568255   54721 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 20:09:58.607499   54721 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 20:09:58.608838   54721 start.go:297] selected driver: kvm2
	I0829 20:09:58.608862   54721 start.go:901] validating driver "kvm2" against <nil>
	I0829 20:09:58.608883   54721 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 20:09:58.609830   54721 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:09:58.609928   54721 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 20:09:58.626742   54721 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 20:09:58.626803   54721 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 20:09:58.627069   54721 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 20:09:58.627104   54721 cni.go:84] Creating CNI manager for ""
	I0829 20:09:58.627114   54721 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:09:58.627127   54721 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 20:09:58.627187   54721 start.go:340] cluster config:
	{Name:kubernetes-upgrade-714305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-714305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:09:58.627310   54721 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:09:58.629179   54721 out.go:177] * Starting "kubernetes-upgrade-714305" primary control-plane node in "kubernetes-upgrade-714305" cluster
	I0829 20:09:58.630316   54721 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 20:09:58.630360   54721 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0829 20:09:58.630371   54721 cache.go:56] Caching tarball of preloaded images
	I0829 20:09:58.630457   54721 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 20:09:58.630467   54721 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0829 20:09:58.630889   54721 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/config.json ...
	I0829 20:09:58.630922   54721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/config.json: {Name:mk72314a7c97b8e619fb47f9922e1d1486f6b3f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:09:58.631092   54721 start.go:360] acquireMachinesLock for kubernetes-upgrade-714305: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:09:58.631139   54721 start.go:364] duration metric: took 22.499µs to acquireMachinesLock for "kubernetes-upgrade-714305"
	I0829 20:09:58.631161   54721 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-714305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-714305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:09:58.631239   54721 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 20:09:58.632762   54721 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 20:09:58.632906   54721 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 20:09:58.632954   54721 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:09:58.647897   54721 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I0829 20:09:58.648420   54721 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:09:58.649012   54721 main.go:141] libmachine: Using API Version  1
	I0829 20:09:58.649028   54721 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:09:58.649359   54721 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:09:58.649543   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetMachineName
	I0829 20:09:58.649681   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:09:58.649816   54721 start.go:159] libmachine.API.Create for "kubernetes-upgrade-714305" (driver="kvm2")
	I0829 20:09:58.649861   54721 client.go:168] LocalClient.Create starting
	I0829 20:09:58.649893   54721 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem
	I0829 20:09:58.649927   54721 main.go:141] libmachine: Decoding PEM data...
	I0829 20:09:58.649951   54721 main.go:141] libmachine: Parsing certificate...
	I0829 20:09:58.650020   54721 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem
	I0829 20:09:58.650047   54721 main.go:141] libmachine: Decoding PEM data...
	I0829 20:09:58.650063   54721 main.go:141] libmachine: Parsing certificate...
	I0829 20:09:58.650085   54721 main.go:141] libmachine: Running pre-create checks...
	I0829 20:09:58.650097   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .PreCreateCheck
	I0829 20:09:58.650504   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetConfigRaw
	I0829 20:09:58.650943   54721 main.go:141] libmachine: Creating machine...
	I0829 20:09:58.650962   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .Create
	I0829 20:09:58.651100   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Creating KVM machine...
	I0829 20:09:58.652282   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found existing default KVM network
	I0829 20:09:58.653156   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:09:58.652997   54792 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091c0}
	I0829 20:09:58.653181   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | created network xml: 
	I0829 20:09:58.653194   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | <network>
	I0829 20:09:58.653203   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG |   <name>mk-kubernetes-upgrade-714305</name>
	I0829 20:09:58.653219   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG |   <dns enable='no'/>
	I0829 20:09:58.653230   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG |   
	I0829 20:09:58.653241   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0829 20:09:58.653252   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG |     <dhcp>
	I0829 20:09:58.653263   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0829 20:09:58.653272   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG |     </dhcp>
	I0829 20:09:58.653303   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG |   </ip>
	I0829 20:09:58.653314   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG |   
	I0829 20:09:58.653326   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | </network>
	I0829 20:09:58.653337   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | 
	I0829 20:09:58.659097   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | trying to create private KVM network mk-kubernetes-upgrade-714305 192.168.39.0/24...
	I0829 20:09:58.725691   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | private KVM network mk-kubernetes-upgrade-714305 192.168.39.0/24 created
	I0829 20:09:58.725746   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:09:58.725671   54792 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:09:58.725782   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Setting up store path in /home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305 ...
	I0829 20:09:58.725802   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Building disk image from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0829 20:09:58.725962   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Downloading /home/jenkins/minikube-integration/19530-11185/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0829 20:09:58.964275   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:09:58.964138   54792 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/id_rsa...
	I0829 20:09:59.222506   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:09:59.222376   54792 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/kubernetes-upgrade-714305.rawdisk...
	I0829 20:09:59.222552   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | Writing magic tar header
	I0829 20:09:59.222577   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | Writing SSH key tar header
	I0829 20:09:59.222587   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:09:59.222487   54792 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305 ...
	I0829 20:09:59.222640   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305
	I0829 20:09:59.222661   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305 (perms=drwx------)
	I0829 20:09:59.222673   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines
	I0829 20:09:59.222691   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines (perms=drwxr-xr-x)
	I0829 20:09:59.222705   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:09:59.222739   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube (perms=drwxr-xr-x)
	I0829 20:09:59.222768   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185 (perms=drwxrwxr-x)
	I0829 20:09:59.222781   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185
	I0829 20:09:59.222802   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 20:09:59.222826   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | Checking permissions on dir: /home/jenkins
	I0829 20:09:59.222839   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | Checking permissions on dir: /home
	I0829 20:09:59.222849   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | Skipping /home - not owner
	I0829 20:09:59.222876   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 20:09:59.222889   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
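
The permission pass above walks from the machine directory up toward the filesystem root, adding the owner-execute bit on each directory the current user owns and skipping the rest ("Skipping /home - not owner"). A hedged, Linux-only sketch of that behaviour, not minikube's exact implementation:

```go
package main

import (
	"fmt"
	"os"
	"os/user"
	"path/filepath"
	"strconv"
	"syscall"
)

// fixPermsUpward adds u+x on each ancestor directory we own, as in the
// "Setting executable bit set on ..." / "Skipping /home - not owner" lines.
func fixPermsUpward(start string) error {
	me, err := user.Current()
	if err != nil {
		return err
	}
	for dir := start; dir != "/"; dir = filepath.Dir(dir) {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		st, ok := info.Sys().(*syscall.Stat_t)
		if !ok || strconv.FormatUint(uint64(st.Uid), 10) != me.Uid {
			fmt.Printf("Skipping %s - not owner\n", dir)
			continue
		}
		// Ensure the owner can traverse the directory (u+x).
		if err := os.Chmod(dir, info.Mode().Perm()|0o100); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	machineDir := "/home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305"
	if err := fixPermsUpward(machineDir); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
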
	I0829 20:09:59.222897   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Creating domain...
	I0829 20:09:59.223817   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) define libvirt domain using xml: 
	I0829 20:09:59.223828   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) <domain type='kvm'>
	I0829 20:09:59.223835   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)   <name>kubernetes-upgrade-714305</name>
	I0829 20:09:59.223843   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)   <memory unit='MiB'>2200</memory>
	I0829 20:09:59.223928   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)   <vcpu>2</vcpu>
	I0829 20:09:59.223965   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)   <features>
	I0829 20:09:59.223992   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     <acpi/>
	I0829 20:09:59.224005   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     <apic/>
	I0829 20:09:59.224011   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     <pae/>
	I0829 20:09:59.224025   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     
	I0829 20:09:59.224031   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)   </features>
	I0829 20:09:59.224039   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)   <cpu mode='host-passthrough'>
	I0829 20:09:59.224067   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)   
	I0829 20:09:59.224089   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)   </cpu>
	I0829 20:09:59.224103   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)   <os>
	I0829 20:09:59.224115   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     <type>hvm</type>
	I0829 20:09:59.224128   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     <boot dev='cdrom'/>
	I0829 20:09:59.224140   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     <boot dev='hd'/>
	I0829 20:09:59.224174   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     <bootmenu enable='no'/>
	I0829 20:09:59.224190   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)   </os>
	I0829 20:09:59.224202   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)   <devices>
	I0829 20:09:59.224215   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     <disk type='file' device='cdrom'>
	I0829 20:09:59.224233   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/boot2docker.iso'/>
	I0829 20:09:59.224246   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)       <target dev='hdc' bus='scsi'/>
	I0829 20:09:59.224258   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)       <readonly/>
	I0829 20:09:59.224273   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     </disk>
	I0829 20:09:59.224287   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     <disk type='file' device='disk'>
	I0829 20:09:59.224299   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 20:09:59.224317   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/kubernetes-upgrade-714305.rawdisk'/>
	I0829 20:09:59.224333   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)       <target dev='hda' bus='virtio'/>
	I0829 20:09:59.224352   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     </disk>
	I0829 20:09:59.224368   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     <interface type='network'>
	I0829 20:09:59.224382   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)       <source network='mk-kubernetes-upgrade-714305'/>
	I0829 20:09:59.224395   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)       <model type='virtio'/>
	I0829 20:09:59.224408   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     </interface>
	I0829 20:09:59.224420   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     <interface type='network'>
	I0829 20:09:59.224433   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)       <source network='default'/>
	I0829 20:09:59.224445   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)       <model type='virtio'/>
	I0829 20:09:59.224456   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     </interface>
	I0829 20:09:59.224477   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     <serial type='pty'>
	I0829 20:09:59.224493   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)       <target port='0'/>
	I0829 20:09:59.224506   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     </serial>
	I0829 20:09:59.224517   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     <console type='pty'>
	I0829 20:09:59.224531   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)       <target type='serial' port='0'/>
	I0829 20:09:59.224540   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     </console>
	I0829 20:09:59.224553   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     <rng model='virtio'>
	I0829 20:09:59.224568   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)       <backend model='random'>/dev/random</backend>
	I0829 20:09:59.224580   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     </rng>
	I0829 20:09:59.224590   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     
	I0829 20:09:59.224601   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)     
	I0829 20:09:59.224610   54721 main.go:141] libmachine: (kubernetes-upgrade-714305)   </devices>
	I0829 20:09:59.224622   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) </domain>
	I0829 20:09:59.224632   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) 
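
The `<domain>` document above is then handed to libvirt: "define libvirt domain using xml" followed by "Creating domain...". A sketch of that define-and-boot step with the same Go bindings, assuming the dumped XML has been saved to `kubernetes-upgrade-714305.xml` (a hypothetical file name used only for this sketch):

```go
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The <domain> XML dumped above, saved to a file for this sketch.
	xml, err := os.ReadFile("kubernetes-upgrade-714305.xml")
	if err != nil {
		log.Fatal(err)
	}

	// Define the persistent domain from the rendered XML...
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	// ...then boot it; the driver next polls for an IP (see below).
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}
```
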
	I0829 20:09:59.228718   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:ab:e4:b3 in network default
	I0829 20:09:59.229230   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Ensuring networks are active...
	I0829 20:09:59.229253   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:09:59.229869   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Ensuring network default is active
	I0829 20:09:59.230101   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Ensuring network mk-kubernetes-upgrade-714305 is active
	I0829 20:09:59.230665   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Getting domain xml...
	I0829 20:09:59.231355   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Creating domain...
	I0829 20:10:00.424202   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Waiting to get IP...
	I0829 20:10:00.424944   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:00.425353   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | unable to find current IP address of domain kubernetes-upgrade-714305 in network mk-kubernetes-upgrade-714305
	I0829 20:10:00.425376   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:10:00.425330   54792 retry.go:31] will retry after 240.717901ms: waiting for machine to come up
	I0829 20:10:00.667748   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:00.668274   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | unable to find current IP address of domain kubernetes-upgrade-714305 in network mk-kubernetes-upgrade-714305
	I0829 20:10:00.668303   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:10:00.668224   54792 retry.go:31] will retry after 261.755687ms: waiting for machine to come up
	I0829 20:10:00.931730   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:00.932169   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | unable to find current IP address of domain kubernetes-upgrade-714305 in network mk-kubernetes-upgrade-714305
	I0829 20:10:00.932196   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:10:00.932141   54792 retry.go:31] will retry after 312.807067ms: waiting for machine to come up
	I0829 20:10:01.246682   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:01.247136   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | unable to find current IP address of domain kubernetes-upgrade-714305 in network mk-kubernetes-upgrade-714305
	I0829 20:10:01.247160   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:10:01.247051   54792 retry.go:31] will retry after 399.750604ms: waiting for machine to come up
	I0829 20:10:01.648625   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:01.649081   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | unable to find current IP address of domain kubernetes-upgrade-714305 in network mk-kubernetes-upgrade-714305
	I0829 20:10:01.649109   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:10:01.649027   54792 retry.go:31] will retry after 651.94962ms: waiting for machine to come up
	I0829 20:10:02.302977   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:02.303378   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | unable to find current IP address of domain kubernetes-upgrade-714305 in network mk-kubernetes-upgrade-714305
	I0829 20:10:02.303404   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:10:02.303334   54792 retry.go:31] will retry after 820.837619ms: waiting for machine to come up
	I0829 20:10:03.125376   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:03.125867   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | unable to find current IP address of domain kubernetes-upgrade-714305 in network mk-kubernetes-upgrade-714305
	I0829 20:10:03.125916   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:10:03.125818   54792 retry.go:31] will retry after 1.138412025s: waiting for machine to come up
	I0829 20:10:04.265642   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:04.266076   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | unable to find current IP address of domain kubernetes-upgrade-714305 in network mk-kubernetes-upgrade-714305
	I0829 20:10:04.266128   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:10:04.265995   54792 retry.go:31] will retry after 1.088831978s: waiting for machine to come up
	I0829 20:10:05.356752   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:05.357121   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | unable to find current IP address of domain kubernetes-upgrade-714305 in network mk-kubernetes-upgrade-714305
	I0829 20:10:05.357194   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:10:05.357045   54792 retry.go:31] will retry after 1.574455626s: waiting for machine to come up
	I0829 20:10:06.933701   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:06.934118   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | unable to find current IP address of domain kubernetes-upgrade-714305 in network mk-kubernetes-upgrade-714305
	I0829 20:10:06.934144   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:10:06.934082   54792 retry.go:31] will retry after 1.447032224s: waiting for machine to come up
	I0829 20:10:08.382328   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:08.382603   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | unable to find current IP address of domain kubernetes-upgrade-714305 in network mk-kubernetes-upgrade-714305
	I0829 20:10:08.382637   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:10:08.382563   54792 retry.go:31] will retry after 2.389800085s: waiting for machine to come up
	I0829 20:10:10.775128   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:10.775608   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | unable to find current IP address of domain kubernetes-upgrade-714305 in network mk-kubernetes-upgrade-714305
	I0829 20:10:10.775635   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:10:10.775561   54792 retry.go:31] will retry after 3.005349235s: waiting for machine to come up
	I0829 20:10:13.782313   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:13.782719   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | unable to find current IP address of domain kubernetes-upgrade-714305 in network mk-kubernetes-upgrade-714305
	I0829 20:10:13.782746   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:10:13.782689   54792 retry.go:31] will retry after 4.130573745s: waiting for machine to come up
	I0829 20:10:17.918529   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:17.919044   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | unable to find current IP address of domain kubernetes-upgrade-714305 in network mk-kubernetes-upgrade-714305
	I0829 20:10:17.919064   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | I0829 20:10:17.919002   54792 retry.go:31] will retry after 4.954730503s: waiting for machine to come up
	I0829 20:10:22.878147   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:22.878516   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Found IP for machine: 192.168.39.140
	I0829 20:10:22.878571   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has current primary IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:22.878582   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Reserving static IP address...
	I0829 20:10:22.878922   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-714305", mac: "52:54:00:23:3b:44", ip: "192.168.39.140"} in network mk-kubernetes-upgrade-714305
	I0829 20:10:22.950057   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | Getting to WaitForSSH function...
	I0829 20:10:22.950085   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Reserved static IP address: 192.168.39.140
	I0829 20:10:22.950118   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Waiting for SSH to be available...
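
The "Waiting to get IP..." loop above polls the network's DHCP leases, retrying with growing delays (240ms up to ~5s) until a lease matches the domain's MAC address. The struct printed in the "found host DHCP lease matching ..." lines is libvirt's `NetworkDHCPLease` (fields Iface, ExpiryTime, Mac, IPaddr, Hostname, Clientid). A sketch of that loop; the doubling backoff here only approximates the logged delays:

```go
package main

import (
	"fmt"
	"log"
	"time"

	libvirt "libvirt.org/go/libvirt"
)

// waitForIP polls the network's DHCP leases until one matches the domain's
// MAC address, doubling the delay between attempts up to a cap.
func waitForIP(nw *libvirt.Network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		leases, err := nw.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if l.Mac == mac {
				return l.IPaddr, nil
			}
		}
		log.Printf("will retry after %v: waiting for machine to come up", backoff)
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s within %v", mac, timeout)
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	nw, err := conn.LookupNetworkByName("mk-kubernetes-upgrade-714305")
	if err != nil {
		log.Fatal(err)
	}
	defer nw.Free()

	ip, err := waitForIP(nw, "52:54:00:23:3b:44", 2*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Found IP for machine:", ip)
}
```
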
	I0829 20:10:22.952441   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:22.952930   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:minikube Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:22.952976   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:22.953083   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | Using SSH client type: external
	I0829 20:10:22.953110   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/id_rsa (-rw-------)
	I0829 20:10:22.953138   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:10:22.953152   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | About to run SSH command:
	I0829 20:10:22.953169   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | exit 0
	I0829 20:10:23.078813   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | SSH cmd err, output: <nil>: 
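
The SSH readiness probe above shells out to the external `/usr/bin/ssh` client and runs `exit 0` until it succeeds; the exact flags are in the "Using SSH client type: external" dump. A sketch of the same probe; the 2-second poll interval is an assumption:

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Flags and paths copied from the "Using SSH client type: external" lines.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/id_rsa",
		"-p", "22",
		"docker@192.168.39.140",
		"exit 0", // succeeds only once sshd inside the guest is up
	}
	for {
		if exec.Command("ssh", args...).Run() == nil {
			log.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
}
```
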
	I0829 20:10:23.079124   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) KVM machine creation complete!
	I0829 20:10:23.079490   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetConfigRaw
	I0829 20:10:23.080022   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:10:23.080222   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:10:23.080377   54721 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 20:10:23.080391   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetState
	I0829 20:10:23.081623   54721 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 20:10:23.081636   54721 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 20:10:23.081641   54721 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 20:10:23.081647   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:10:23.083631   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.084013   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:23.084042   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.084103   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:10:23.084278   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:23.084448   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:23.084591   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:10:23.084732   54721 main.go:141] libmachine: Using SSH client type: native
	I0829 20:10:23.084925   54721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0829 20:10:23.084939   54721 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 20:10:23.193956   54721 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:10:23.193981   54721 main.go:141] libmachine: Detecting the provisioner...
	I0829 20:10:23.193989   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:10:23.196850   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.197280   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:23.197314   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.197484   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:10:23.197686   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:23.197820   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:23.197956   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:10:23.198063   54721 main.go:141] libmachine: Using SSH client type: native
	I0829 20:10:23.198235   54721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0829 20:10:23.198249   54721 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 20:10:23.308504   54721 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 20:10:23.308562   54721 main.go:141] libmachine: found compatible host: buildroot
	I0829 20:10:23.308569   54721 main.go:141] libmachine: Provisioning with buildroot...
	I0829 20:10:23.308582   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetMachineName
	I0829 20:10:23.308842   54721 buildroot.go:166] provisioning hostname "kubernetes-upgrade-714305"
	I0829 20:10:23.308864   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetMachineName
	I0829 20:10:23.309051   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:10:23.311833   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.312212   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:23.312244   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.312361   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:10:23.312535   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:23.312689   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:23.312842   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:10:23.313008   54721 main.go:141] libmachine: Using SSH client type: native
	I0829 20:10:23.313174   54721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0829 20:10:23.313197   54721 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-714305 && echo "kubernetes-upgrade-714305" | sudo tee /etc/hostname
	I0829 20:10:23.438555   54721 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-714305
	
	I0829 20:10:23.438583   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:10:23.441064   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.441339   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:23.441366   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.441505   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:10:23.441676   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:23.441801   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:23.441950   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:10:23.442123   54721 main.go:141] libmachine: Using SSH client type: native
	I0829 20:10:23.442290   54721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0829 20:10:23.442306   54721 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-714305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-714305/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-714305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:10:23.559846   54721 main.go:141] libmachine: SSH cmd err, output: <nil>: 
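
Hostname provisioning above is plain shell driven over SSH. An equivalent, illustrative runner using `golang.org/x/crypto/ssh` — a different client library than the external `/usr/bin/ssh` the log shows, so only the user, address, key path, and command are taken from the log:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.140:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// The same hostname command as logged above.
	out, err := sess.CombinedOutput(`sudo hostname kubernetes-upgrade-714305 && echo "kubernetes-upgrade-714305" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
```
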
	I0829 20:10:23.559883   54721 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:10:23.559904   54721 buildroot.go:174] setting up certificates
	I0829 20:10:23.559929   54721 provision.go:84] configureAuth start
	I0829 20:10:23.559941   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetMachineName
	I0829 20:10:23.560182   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetIP
	I0829 20:10:23.562452   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.562750   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:23.562784   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.562976   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:10:23.565203   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.565478   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:23.565506   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.565650   54721 provision.go:143] copyHostCerts
	I0829 20:10:23.565699   54721 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:10:23.565716   54721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:10:23.565781   54721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:10:23.565881   54721 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:10:23.565889   54721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:10:23.565915   54721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:10:23.565976   54721 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:10:23.565983   54721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:10:23.566004   54721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:10:23.566060   54721 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-714305 san=[127.0.0.1 192.168.39.140 kubernetes-upgrade-714305 localhost minikube]
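
The "generating server cert" step above issues a server certificate signed by the local CA with the SANs listed (127.0.0.1, 192.168.39.140, kubernetes-upgrade-714305, localhost, minikube). A compact sketch of such a step with `crypto/x509`; the throwaway CA stands in for ca.pem/ca-key.pem, and the key size, validity, and usage bits are assumptions, not minikube's actual parameters:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem/ca-key.pem from the cert store above.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs and org logged above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-714305"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.140")},
		DNSNames:     []string{"kubernetes-upgrade-714305", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
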
	I0829 20:10:23.629074   54721 provision.go:177] copyRemoteCerts
	I0829 20:10:23.629127   54721 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:10:23.629148   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:10:23.631474   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.631739   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:23.631770   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.631907   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:10:23.632098   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:23.632243   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:10:23.632404   54721 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/id_rsa Username:docker}
	I0829 20:10:23.716695   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:10:23.740417   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0829 20:10:23.764025   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:10:23.790587   54721 provision.go:87] duration metric: took 230.645084ms to configureAuth
	I0829 20:10:23.790611   54721 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:10:23.790765   54721 config.go:182] Loaded profile config "kubernetes-upgrade-714305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 20:10:23.790838   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:10:23.793020   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.793369   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:23.793404   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:23.793590   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:10:23.793762   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:23.793916   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:23.794027   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:10:23.794212   54721 main.go:141] libmachine: Using SSH client type: native
	I0829 20:10:23.794367   54721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0829 20:10:23.794386   54721 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:10:24.013708   54721 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:10:24.013733   54721 main.go:141] libmachine: Checking connection to Docker...
	I0829 20:10:24.013744   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetURL
	I0829 20:10:24.014936   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | Using libvirt version 6000000
	I0829 20:10:24.017046   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:24.017369   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:24.017410   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:24.017543   54721 main.go:141] libmachine: Docker is up and running!
	I0829 20:10:24.017559   54721 main.go:141] libmachine: Reticulating splines...
	I0829 20:10:24.017567   54721 client.go:171] duration metric: took 25.367695012s to LocalClient.Create
	I0829 20:10:24.017593   54721 start.go:167] duration metric: took 25.367777448s to libmachine.API.Create "kubernetes-upgrade-714305"
	I0829 20:10:24.017605   54721 start.go:293] postStartSetup for "kubernetes-upgrade-714305" (driver="kvm2")
	I0829 20:10:24.017618   54721 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:10:24.017641   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:10:24.017886   54721 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:10:24.017925   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:10:24.019977   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:24.020273   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:24.020302   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:24.020404   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:10:24.020562   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:24.020713   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:10:24.020864   54721 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/id_rsa Username:docker}
	I0829 20:10:24.105268   54721 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:10:24.109662   54721 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:10:24.109681   54721 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:10:24.109734   54721 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:10:24.109809   54721 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:10:24.109908   54721 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:10:24.119248   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:10:24.142984   54721 start.go:296] duration metric: took 125.352257ms for postStartSetup
	I0829 20:10:24.143038   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetConfigRaw
	I0829 20:10:24.143583   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetIP
	I0829 20:10:24.145730   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:24.146039   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:24.146059   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:24.146326   54721 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/config.json ...
	I0829 20:10:24.146550   54721 start.go:128] duration metric: took 25.515286063s to createHost
	I0829 20:10:24.146576   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:10:24.148510   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:24.148798   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:24.148824   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:24.148955   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:10:24.149132   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:24.149282   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:24.149423   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:10:24.149576   54721 main.go:141] libmachine: Using SSH client type: native
	I0829 20:10:24.149769   54721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0829 20:10:24.149795   54721 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:10:24.259245   54721 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724962224.237471948
	
	I0829 20:10:24.259267   54721 fix.go:216] guest clock: 1724962224.237471948
	I0829 20:10:24.259295   54721 fix.go:229] Guest: 2024-08-29 20:10:24.237471948 +0000 UTC Remote: 2024-08-29 20:10:24.146563153 +0000 UTC m=+25.650510972 (delta=90.908795ms)
	I0829 20:10:24.259322   54721 fix.go:200] guest clock delta is within tolerance: 90.908795ms
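
The clock check above parses the guest's `date +%s.%N` output and compares it with the host clock. A tiny sketch of that tolerance test using the values from the log; the one-second threshold is assumed for illustration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock parsed from "1724962224.237471948"; host clock from the same log line.
	guest := time.Unix(1724962224, 237471948)
	host := time.Date(2024, 8, 29, 20, 10, 24, 146563153, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold for illustration
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}
```
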
	I0829 20:10:24.259330   54721 start.go:83] releasing machines lock for "kubernetes-upgrade-714305", held for 25.628179063s
	I0829 20:10:24.259361   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:10:24.259608   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetIP
	I0829 20:10:24.262388   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:24.262797   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:24.262819   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:24.263077   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:10:24.263551   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:10:24.263730   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:10:24.263819   54721 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:10:24.263862   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:10:24.263882   54721 ssh_runner.go:195] Run: cat /version.json
	I0829 20:10:24.263908   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:10:24.266483   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:24.266665   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:24.266933   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:24.266959   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:24.266992   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:24.267008   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:24.267085   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:10:24.267219   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:10:24.267291   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:24.267354   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:10:24.267437   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:10:24.267525   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:10:24.267608   54721 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/id_rsa Username:docker}
	I0829 20:10:24.267705   54721 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/id_rsa Username:docker}
	I0829 20:10:24.372187   54721 ssh_runner.go:195] Run: systemctl --version
	I0829 20:10:24.378646   54721 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:10:24.541993   54721 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:10:24.550522   54721 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:10:24.550615   54721 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:10:24.573904   54721 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:10:24.573926   54721 start.go:495] detecting cgroup driver to use...
	I0829 20:10:24.573975   54721 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:10:24.594451   54721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:10:24.612970   54721 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:10:24.613016   54721 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:10:24.626364   54721 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:10:24.639480   54721 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:10:24.750344   54721 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:10:24.918842   54721 docker.go:233] disabling docker service ...
	I0829 20:10:24.918921   54721 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:10:24.933465   54721 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:10:24.947068   54721 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:10:25.090610   54721 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:10:25.244918   54721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:10:25.258753   54721 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:10:25.277272   54721 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 20:10:25.277336   54721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:10:25.287539   54721 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:10:25.287606   54721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:10:25.298142   54721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:10:25.308772   54721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
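After these three sed edits, the relevant fragment of /etc/crio/crio.conf.d/02-crio.conf should read roughly:

    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

The kubelet configuration generated further down also sets cgroupDriver: cgroupfs, so runtime and kubelet agree; a mismatch between the two is a classic cause of the kubelet health-check failures seen at the end of this run.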
	I0829 20:10:25.320018   54721 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:10:25.331307   54721 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:10:25.340973   54721 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:10:25.341027   54721 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:10:25.355709   54721 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
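The sysctl probe above exits 255 only because br_netfilter was not loaded yet, so /proc/sys/net/bridge/ did not exist; the modprobe creates it. Verifying by hand (sketch):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # both should report 1 for kubeadm preflight to pass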
	I0829 20:10:25.365995   54721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:10:25.499690   54721 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:10:25.596727   54721 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:10:25.596810   54721 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:10:25.602315   54721 start.go:563] Will wait 60s for crictl version
	I0829 20:10:25.602367   54721 ssh_runner.go:195] Run: which crictl
	I0829 20:10:25.606219   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:10:25.653497   54721 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:10:25.653573   54721 ssh_runner.go:195] Run: crio --version
	I0829 20:10:25.686844   54721 ssh_runner.go:195] Run: crio --version
	I0829 20:10:25.717093   54721 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 20:10:25.718340   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetIP
	I0829 20:10:25.721356   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:25.721853   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:10:13 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:10:25.721885   54721 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:10:25.722105   54721 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 20:10:25.726380   54721 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:10:25.739339   54721 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-714305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-714305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:10:25.739457   54721 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 20:10:25.739498   54721 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:10:25.771323   54721 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:10:25.771383   54721 ssh_runner.go:195] Run: which lz4
	I0829 20:10:25.776022   54721 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:10:25.780077   54721 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:10:25.780107   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0829 20:10:27.479193   54721 crio.go:462] duration metric: took 1.703203518s to copy over tarball
	I0829 20:10:27.479264   54721 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:10:30.004744   54721 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.525451945s)
	I0829 20:10:30.004770   54721 crio.go:469] duration metric: took 2.52555115s to extract the tarball
	I0829 20:10:30.004778   54721 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 20:10:30.048129   54721 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:10:30.094419   54721 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
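Even after extracting the preload tarball, crictl still reports none of the v1.20.0 images, so minikube falls back to the per-image cache below. To inspect what the extraction actually left in the image store (sketch, assuming jq is available on the node):

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort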
	I0829 20:10:30.094441   54721 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:10:30.094505   54721 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:10:30.094554   54721 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:10:30.094561   54721 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:10:30.094579   54721 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:10:30.094517   54721 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:10:30.094603   54721 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 20:10:30.094616   54721 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 20:10:30.095250   54721 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:10:30.097431   54721 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:10:30.097493   54721 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 20:10:30.097431   54721 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:10:30.097525   54721 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:10:30.097431   54721 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 20:10:30.097437   54721 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:10:30.097461   54721 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:10:30.097752   54721 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
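The eight "No such image" lines are expected: image.go first asks the local Docker daemon on the build host for each image, and these images were never pulled there, so every daemon lookup fails and minikube moves on to remote and cached sources. The same negative result can be reproduced with, for example:

    docker image inspect registry.k8s.io/pause:3.2
    # fails with a "No such image" error when the daemon has never pulled it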
	I0829 20:10:30.281945   54721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:10:30.285664   54721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:10:30.295249   54721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:10:30.296332   54721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:10:30.298189   54721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 20:10:30.322328   54721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 20:10:30.326908   54721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 20:10:30.382818   54721 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 20:10:30.382861   54721 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:10:30.382937   54721 ssh_runner.go:195] Run: which crictl
	I0829 20:10:30.392702   54721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:10:30.399204   54721 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 20:10:30.399245   54721 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:10:30.399287   54721 ssh_runner.go:195] Run: which crictl
	I0829 20:10:30.439066   54721 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 20:10:30.439106   54721 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:10:30.439156   54721 ssh_runner.go:195] Run: which crictl
	I0829 20:10:30.472083   54721 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 20:10:30.472124   54721 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 20:10:30.472169   54721 ssh_runner.go:195] Run: which crictl
	I0829 20:10:30.472276   54721 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 20:10:30.472295   54721 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:10:30.472321   54721 ssh_runner.go:195] Run: which crictl
	I0829 20:10:30.497489   54721 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 20:10:30.497523   54721 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 20:10:30.497526   54721 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:10:30.497541   54721 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 20:10:30.497573   54721 ssh_runner.go:195] Run: which crictl
	I0829 20:10:30.497607   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:10:30.497573   54721 ssh_runner.go:195] Run: which crictl
	I0829 20:10:30.591217   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:10:30.591242   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:10:30.591269   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:10:30.591293   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:10:30.591323   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:10:30.591421   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:10:30.591433   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:10:30.739470   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:10:30.739529   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:10:30.739592   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:10:30.739612   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:10:30.739599   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:10:30.739664   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:10:30.739702   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:10:30.898668   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:10:30.898705   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:10:30.898722   54721 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 20:10:30.898782   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:10:30.898862   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:10:30.898867   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:10:30.898929   54721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:10:31.028235   54721 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 20:10:31.028281   54721 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 20:10:31.028289   54721 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 20:10:31.028320   54721 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 20:10:31.028348   54721 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 20:10:31.028390   54721 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 20:10:31.028434   54721 cache_images.go:92] duration metric: took 933.980572ms to LoadCachedImages
	W0829 20:10:31.028507   54721 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
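LoadCachedImages aborts because none of the per-image tarballs exist under .minikube/cache/images/amd64/. This is non-fatal: kubeadm pulls the images itself during init. Checking, and optionally warming, that cache by hand might look like (sketch; minikube cache add is the older, still-present spelling of minikube image load):

    ls /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/ 2>/dev/null
    minikube cache add registry.k8s.io/kube-proxy:v1.20.0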
	I0829 20:10:31.028522   54721 kubeadm.go:934] updating node { 192.168.39.140 8443 v1.20.0 crio true true} ...
	I0829 20:10:31.028640   54721 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-714305 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-714305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
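The unit drop-in and config above are what get written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf immediately below. To inspect the rendered result on the node (sketch):

    systemctl cat kubelet --no-pager
    systemctl status kubelet --no-pager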
	I0829 20:10:31.028711   54721 ssh_runner.go:195] Run: crio config
	I0829 20:10:31.074273   54721 cni.go:84] Creating CNI manager for ""
	I0829 20:10:31.074295   54721 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:10:31.074306   54721 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:10:31.074325   54721 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.140 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-714305 NodeName:kubernetes-upgrade-714305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 20:10:31.074449   54721 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-714305"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
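Before this file is handed to kubeadm init, it can be exercised in isolation; with the pinned binaries a dry run would look something like (sketch):

    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run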
	I0829 20:10:31.074505   54721 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 20:10:31.084947   54721 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:10:31.085007   54721 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:10:31.094732   54721 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0829 20:10:31.112231   54721 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:10:31.129123   54721 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0829 20:10:31.145938   54721 ssh_runner.go:195] Run: grep 192.168.39.140	control-plane.minikube.internal$ /etc/hosts
	I0829 20:10:31.149899   54721 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
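Together with the host.minikube.internal entry added earlier, /etc/hosts on the node now resolves both minikube-internal names:

    grep 'minikube.internal' /etc/hosts
    # 192.168.39.1	host.minikube.internal
    # 192.168.39.140	control-plane.minikube.internal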
	I0829 20:10:31.161902   54721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:10:31.288740   54721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:10:31.306474   54721 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305 for IP: 192.168.39.140
	I0829 20:10:31.306501   54721 certs.go:194] generating shared ca certs ...
	I0829 20:10:31.306522   54721 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:10:31.306702   54721 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:10:31.306753   54721 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:10:31.306765   54721 certs.go:256] generating profile certs ...
	I0829 20:10:31.306833   54721 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/client.key
	I0829 20:10:31.306851   54721 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/client.crt with IP's: []
	I0829 20:10:31.549578   54721 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/client.crt ...
	I0829 20:10:31.549604   54721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/client.crt: {Name:mk3a3b5bb7e552217327c78f9ef9b2062e3087a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:10:31.549758   54721 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/client.key ...
	I0829 20:10:31.549770   54721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/client.key: {Name:mkf9291a020726f24ced706371d1ce7049904bab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:10:31.549848   54721 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/apiserver.key.53a13bde
	I0829 20:10:31.549864   54721 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/apiserver.crt.53a13bde with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.140]
	I0829 20:10:31.688064   54721 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/apiserver.crt.53a13bde ...
	I0829 20:10:31.688090   54721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/apiserver.crt.53a13bde: {Name:mkd138bd21b6f279c1c28cf68aa2bdde8b2034a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:10:31.688223   54721 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/apiserver.key.53a13bde ...
	I0829 20:10:31.688244   54721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/apiserver.key.53a13bde: {Name:mkf7ccd1c089a5f3f869e1c2125f5bf745aa7816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:10:31.688316   54721 certs.go:381] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/apiserver.crt.53a13bde -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/apiserver.crt
	I0829 20:10:31.688391   54721 certs.go:385] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/apiserver.key.53a13bde -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/apiserver.key
	I0829 20:10:31.688442   54721 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/proxy-client.key
	I0829 20:10:31.688456   54721 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/proxy-client.crt with IP's: []
	I0829 20:10:31.927988   54721 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/proxy-client.crt ...
	I0829 20:10:31.928021   54721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/proxy-client.crt: {Name:mk8189314314ab9342817fe5eeaa58b2ab6ca4ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:10:31.928210   54721 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/proxy-client.key ...
	I0829 20:10:31.928229   54721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/proxy-client.key: {Name:mkadb058afb7055666eb901d87b9eabc018c974b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:10:31.928419   54721 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:10:31.928466   54721 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:10:31.928480   54721 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:10:31.928517   54721 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:10:31.928551   54721 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:10:31.928637   54721 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:10:31.928697   54721 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:10:31.929304   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:10:31.955900   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:10:31.981925   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:10:32.007286   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:10:32.033230   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0829 20:10:32.059093   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:10:32.084504   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:10:32.117106   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:10:32.144087   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:10:32.171584   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:10:32.194982   54721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:10:32.226692   54721 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:10:32.243451   54721 ssh_runner.go:195] Run: openssl version
	I0829 20:10:32.249207   54721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:10:32.260312   54721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:10:32.264783   54721 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:10:32.264841   54721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:10:32.270589   54721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:10:32.281454   54721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:10:32.292275   54721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:10:32.296582   54721 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:10:32.296635   54721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:10:32.302081   54721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:10:32.313080   54721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:10:32.323854   54721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:10:32.328411   54721 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:10:32.328470   54721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:10:32.334415   54721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
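Each openssl x509 -hash / ln -fs pair above implements the standard OpenSSL subject-hash lookup: the CA is symlinked as <hash>.0 under /etc/ssl/certs so verification can find it by hash. The generic recipe (sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # here h is b5213941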
	I0829 20:10:32.345172   54721 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:10:32.349193   54721 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 20:10:32.349250   54721 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-714305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-714305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:10:32.349339   54721 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:10:32.349391   54721 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:10:32.387172   54721 cri.go:89] found id: ""
	I0829 20:10:32.387243   54721 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:10:32.397783   54721 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:10:32.407975   54721 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:10:32.418076   54721 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:10:32.418095   54721 kubeadm.go:157] found existing configuration files:
	
	I0829 20:10:32.418149   54721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:10:32.427687   54721 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:10:32.427752   54721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:10:32.439124   54721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:10:32.449788   54721 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:10:32.449853   54721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:10:32.461073   54721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:10:32.471732   54721 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:10:32.471799   54721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:10:32.482856   54721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:10:32.493769   54721 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:10:32.493816   54721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:10:32.505077   54721 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:10:32.770809   54721 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:12:30.252836   54721 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:12:30.252945   54721 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 20:12:30.255061   54721 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:12:30.255139   54721 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:12:30.255228   54721 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:12:30.255354   54721 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:12:30.255506   54721 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:12:30.255603   54721 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:12:30.257431   54721 out.go:235]   - Generating certificates and keys ...
	I0829 20:12:30.257530   54721 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:12:30.257625   54721 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:12:30.257725   54721 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 20:12:30.257806   54721 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 20:12:30.257881   54721 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 20:12:30.257945   54721 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 20:12:30.258016   54721 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 20:12:30.258163   54721 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-714305 localhost] and IPs [192.168.39.140 127.0.0.1 ::1]
	I0829 20:12:30.258229   54721 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 20:12:30.258391   54721 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-714305 localhost] and IPs [192.168.39.140 127.0.0.1 ::1]
	I0829 20:12:30.258489   54721 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 20:12:30.258592   54721 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 20:12:30.258657   54721 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 20:12:30.258729   54721 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:12:30.258816   54721 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:12:30.258883   54721 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:12:30.258973   54721 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:12:30.259042   54721 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:12:30.259168   54721 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:12:30.259285   54721 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:12:30.259336   54721 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:12:30.259432   54721 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:12:30.261207   54721 out.go:235]   - Booting up control plane ...
	I0829 20:12:30.261304   54721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:12:30.261389   54721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:12:30.261503   54721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:12:30.261621   54721 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:12:30.261851   54721 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:12:30.261936   54721 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:12:30.262042   54721 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:12:30.262257   54721 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:12:30.262366   54721 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:12:30.262640   54721 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:12:30.262714   54721 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:12:30.262878   54721 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:12:30.262944   54721 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:12:30.263146   54721 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:12:30.263248   54721 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:12:30.263470   54721 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:12:30.263481   54721 kubeadm.go:310] 
	I0829 20:12:30.263514   54721 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:12:30.263548   54721 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:12:30.263554   54721 kubeadm.go:310] 
	I0829 20:12:30.263585   54721 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:12:30.263635   54721 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:12:30.263747   54721 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:12:30.263755   54721 kubeadm.go:310] 
	I0829 20:12:30.263843   54721 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:12:30.263898   54721 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:12:30.263943   54721 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:12:30.263952   54721 kubeadm.go:310] 
	I0829 20:12:30.264084   54721 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:12:30.264192   54721 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 20:12:30.264202   54721 kubeadm.go:310] 
	I0829 20:12:30.264309   54721 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:12:30.264384   54721 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:12:30.264470   54721 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:12:30.264556   54721 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
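kubeadm's own hints above are the right starting point; on this node, with the CRI-O socket at /var/run/crio/crio.sock, a triage session would be:

    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 50
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then inspect a failing container:
    # sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID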
	W0829 20:12:30.264702   54721 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-714305 localhost] and IPs [192.168.39.140 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-714305 localhost] and IPs [192.168.39.140 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-714305 localhost] and IPs [192.168.39.140 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-714305 localhost] and IPs [192.168.39.140 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0829 20:12:30.264742   54721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:12:30.265012   54721 kubeadm.go:310] 
	I0829 20:12:31.753342   54721 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.488573158s)
	I0829 20:12:31.753428   54721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:12:31.774851   54721 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:12:31.785904   54721 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:12:31.785923   54721 kubeadm.go:157] found existing configuration files:
	
	I0829 20:12:31.785979   54721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:12:31.795941   54721 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:12:31.796017   54721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:12:31.806200   54721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:12:31.816011   54721 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:12:31.816077   54721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:12:31.829551   54721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:12:31.843177   54721 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:12:31.843251   54721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:12:31.858800   54721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:12:31.873355   54721 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:12:31.873440   54721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:12:31.884539   54721 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:12:31.982503   54721 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:12:31.982700   54721 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:12:32.220268   54721 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:12:32.220399   54721 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:12:32.220502   54721 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:12:32.472025   54721 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:12:32.475385   54721 out.go:235]   - Generating certificates and keys ...
	I0829 20:12:32.475491   54721 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:12:32.475574   54721 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:12:32.475674   54721 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:12:32.475753   54721 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:12:32.475842   54721 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:12:32.475918   54721 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:12:32.476000   54721 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:12:32.476252   54721 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:12:32.476357   54721 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:12:32.476576   54721 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:12:32.476748   54721 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:12:32.476824   54721 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:12:32.608287   54721 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:12:33.062039   54721 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:12:33.213709   54721 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:12:33.332917   54721 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:12:33.351589   54721 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:12:33.353211   54721 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:12:33.353647   54721 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:12:33.555275   54721 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:12:33.602657   54721 out.go:235]   - Booting up control plane ...
	I0829 20:12:33.602832   54721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:12:33.602940   54721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:12:33.603028   54721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:12:33.603170   54721 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:12:33.603432   54721 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:13:13.588570   54721 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:13:13.588740   54721 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:13:13.588973   54721 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:13:18.589461   54721 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:13:18.589702   54721 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:13:28.590199   54721 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:13:28.590481   54721 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:13:48.590392   54721 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:13:48.590665   54721 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:14:28.590709   54721 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:14:28.590965   54721 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:14:28.590995   54721 kubeadm.go:310] 
	I0829 20:14:28.591035   54721 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:14:28.591078   54721 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:14:28.591105   54721 kubeadm.go:310] 
	I0829 20:14:28.591167   54721 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:14:28.591222   54721 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:14:28.591373   54721 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:14:28.591383   54721 kubeadm.go:310] 
	I0829 20:14:28.591528   54721 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:14:28.591577   54721 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:14:28.591623   54721 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:14:28.591633   54721 kubeadm.go:310] 
	I0829 20:14:28.591779   54721 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:14:28.591901   54721 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 20:14:28.591910   54721 kubeadm.go:310] 
	I0829 20:14:28.592076   54721 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:14:28.592186   54721 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:14:28.592282   54721 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:14:28.592373   54721 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:14:28.592400   54721 kubeadm.go:310] 
	I0829 20:14:28.593445   54721 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:14:28.593542   54721 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:14:28.593638   54721 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 20:14:28.593699   54721 kubeadm.go:394] duration metric: took 3m56.244453211s to StartCluster
	I0829 20:14:28.593733   54721 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:14:28.593774   54721 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:14:28.638376   54721 cri.go:89] found id: ""
	I0829 20:14:28.638401   54721 logs.go:276] 0 containers: []
	W0829 20:14:28.638410   54721 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:14:28.638418   54721 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:14:28.638477   54721 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:14:28.673561   54721 cri.go:89] found id: ""
	I0829 20:14:28.673590   54721 logs.go:276] 0 containers: []
	W0829 20:14:28.673600   54721 logs.go:278] No container was found matching "etcd"
	I0829 20:14:28.673607   54721 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:14:28.673674   54721 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:14:28.712506   54721 cri.go:89] found id: ""
	I0829 20:14:28.712528   54721 logs.go:276] 0 containers: []
	W0829 20:14:28.712535   54721 logs.go:278] No container was found matching "coredns"
	I0829 20:14:28.712541   54721 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:14:28.712603   54721 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:14:28.758818   54721 cri.go:89] found id: ""
	I0829 20:14:28.758845   54721 logs.go:276] 0 containers: []
	W0829 20:14:28.758857   54721 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:14:28.758882   54721 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:14:28.758943   54721 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:14:28.795818   54721 cri.go:89] found id: ""
	I0829 20:14:28.795852   54721 logs.go:276] 0 containers: []
	W0829 20:14:28.795860   54721 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:14:28.795868   54721 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:14:28.795937   54721 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:14:28.831791   54721 cri.go:89] found id: ""
	I0829 20:14:28.831815   54721 logs.go:276] 0 containers: []
	W0829 20:14:28.831823   54721 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:14:28.831829   54721 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:14:28.831873   54721 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:14:28.871129   54721 cri.go:89] found id: ""
	I0829 20:14:28.871156   54721 logs.go:276] 0 containers: []
	W0829 20:14:28.871166   54721 logs.go:278] No container was found matching "kindnet"
	I0829 20:14:28.871180   54721 logs.go:123] Gathering logs for container status ...
	I0829 20:14:28.871195   54721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:14:28.909687   54721 logs.go:123] Gathering logs for kubelet ...
	I0829 20:14:28.909717   54721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:14:28.963540   54721 logs.go:123] Gathering logs for dmesg ...
	I0829 20:14:28.963576   54721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:14:28.977526   54721 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:14:28.977567   54721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:14:29.110234   54721 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:14:29.110259   54721 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:14:29.110275   54721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0829 20:14:29.222218   54721 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 20:14:29.222286   54721 out.go:270] * 
	W0829 20:14:29.222352   54721 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 20:14:29.222369   54721 out.go:270] * 
	W0829 20:14:29.223307   54721 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 20:14:29.226183   54721 out.go:201] 
	W0829 20:14:29.227520   54721 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 20:14:29.227582   54721 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 20:14:29.227608   54721 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 20:14:29.229177   54721 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-714305 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-714305
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-714305: (1.754053979s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-714305 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-714305 status --format={{.Host}}: exit status 7 (67.831877ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-714305 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-714305 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.931287231s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-714305 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-714305 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-714305 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (91.749312ms)

-- stdout --
	* [kubernetes-upgrade-714305] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-714305
	    minikube start -p kubernetes-upgrade-714305 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7143052 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-714305 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-714305 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-714305 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (34.735783715s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-29 20:16:04.930331267 +0000 UTC m=+4829.100792514
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-714305 -n kubernetes-upgrade-714305
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-714305 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-714305 logs -n 25: (1.827527119s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-714305          | kubernetes-upgrade-714305 | jenkins | v1.33.1 | 29 Aug 24 20:15 UTC | 29 Aug 24 20:16 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| pause   | -p pause-427304                       | pause-427304              | jenkins | v1.33.1 | 29 Aug 24 20:15 UTC | 29 Aug 24 20:15 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| unpause | -p pause-427304                       | pause-427304              | jenkins | v1.33.1 | 29 Aug 24 20:15 UTC | 29 Aug 24 20:15 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| pause   | -p pause-427304                       | pause-427304              | jenkins | v1.33.1 | 29 Aug 24 20:15 UTC | 29 Aug 24 20:15 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-427304                       | pause-427304              | jenkins | v1.33.1 | 29 Aug 24 20:15 UTC | 29 Aug 24 20:15 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-427304                       | pause-427304              | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC | 29 Aug 24 20:16 UTC |
	| ssh     | -p kubenet-801672 sudo cat            | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	|         | /etc/nsswitch.conf                    |                           |         |         |                     |                     |
	| ssh     | -p kubenet-801672 sudo cat            | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	|         | /etc/hosts                            |                           |         |         |                     |                     |
	| ssh     | -p kubenet-801672 sudo cat            | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	|         | /etc/resolv.conf                      |                           |         |         |                     |                     |
	| ssh     | -p kubenet-801672 sudo crictl         | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	|         | pods                                  |                           |         |         |                     |                     |
	| ssh     | -p kubenet-801672 sudo crictl         | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	|         | ps --all                              |                           |         |         |                     |                     |
	| ssh     | -p kubenet-801672 sudo find           | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	|         | /etc/cni -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p kubenet-801672 sudo ip a s         | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	| ssh     | -p kubenet-801672 sudo ip r s         | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	| ssh     | -p kubenet-801672 sudo                | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	|         | iptables-save                         |                           |         |         |                     |                     |
	| ssh     | -p kubenet-801672 sudo                | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	|         | iptables -t nat -L -n -v              |                           |         |         |                     |                     |
	| ssh     | cert-options-323073 ssh               | cert-options-323073       | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC | 29 Aug 24 20:16 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p kubenet-801672 sudo                | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	|         | systemctl status kubelet --all        |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p kubenet-801672 sudo                | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	|         | systemctl cat kubelet                 |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p kubenet-801672 sudo                | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	|         | journalctl -xeu kubelet --all         |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cert-options-323073 -- sudo        | cert-options-323073       | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC | 29 Aug 24 20:16 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| ssh     | -p kubenet-801672 sudo cat            | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf          |                           |         |         |                     |                     |
	| ssh     | -p kubenet-801672 sudo cat            | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	|         | /var/lib/kubelet/config.yaml          |                           |         |         |                     |                     |
	| ssh     | -p kubenet-801672 sudo                | kubenet-801672            | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	|         | systemctl status docker --all         |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| delete  | -p cert-options-323073                | cert-options-323073       | jenkins | v1.33.1 | 29 Aug 24 20:16 UTC |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 20:15:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
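Every entry below carries klog's severity prefix, so warnings and errors can be pulled out of a dump like this with plain grep; a small sketch, assuming the log has been saved locally as minikube.log (a hypothetical filename):

    # The first byte of each entry is its severity: I=info, W=warning,
    # E=error, F=fatal; the four digits after it are mmdd.
    grep -E '^[[:space:]]*[WEF][0-9]{4} ' minikube.log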
	I0829 20:15:30.247331   59725 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:15:30.247424   59725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:15:30.247429   59725 out.go:358] Setting ErrFile to fd 2...
	I0829 20:15:30.247435   59725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:15:30.247665   59725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:15:30.248214   59725 out.go:352] Setting JSON to false
	I0829 20:15:30.249180   59725 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7077,"bootTime":1724955453,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 20:15:30.249238   59725 start.go:139] virtualization: kvm guest
	I0829 20:15:30.251502   59725 out.go:177] * [kubernetes-upgrade-714305] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 20:15:30.253217   59725 notify.go:220] Checking for updates...
	I0829 20:15:30.253880   59725 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 20:15:30.255789   59725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 20:15:30.257292   59725 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:15:30.258715   59725 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:15:30.260024   59725 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 20:15:30.261382   59725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 20:15:30.263316   59725 config.go:182] Loaded profile config "kubernetes-upgrade-714305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:15:30.263949   59725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 20:15:30.264037   59725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:15:30.281198   59725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35749
	I0829 20:15:30.281754   59725 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:15:30.282568   59725 main.go:141] libmachine: Using API Version  1
	I0829 20:15:30.282590   59725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:15:30.282979   59725 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:15:30.283189   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:15:30.283507   59725 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 20:15:30.283920   59725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 20:15:30.283973   59725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:15:30.299453   59725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43459
	I0829 20:15:30.299936   59725 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:15:30.300498   59725 main.go:141] libmachine: Using API Version  1
	I0829 20:15:30.300526   59725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:15:30.300981   59725 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:15:30.301197   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:15:30.341268   59725 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 20:15:30.342665   59725 start.go:297] selected driver: kvm2
	I0829 20:15:30.342685   59725 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-714305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-714305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:15:30.342815   59725 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 20:15:30.343822   59725 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:15:30.343928   59725 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 20:15:30.361081   59725 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 20:15:30.361582   59725 cni.go:84] Creating CNI manager for ""
	I0829 20:15:30.361601   59725 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:15:30.361660   59725 start.go:340] cluster config:
	{Name:kubernetes-upgrade-714305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-714305 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:15:30.361799   59725 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:15:30.363915   59725 out.go:177] * Starting "kubernetes-upgrade-714305" primary control-plane node in "kubernetes-upgrade-714305" cluster
	I0829 20:15:27.525509   59030 main.go:141] libmachine: (pause-427304) Calling .GetIP
	I0829 20:15:27.528613   59030 main.go:141] libmachine: (pause-427304) DBG | domain pause-427304 has defined MAC address 52:54:00:3d:3d:7d in network mk-pause-427304
	I0829 20:15:27.528996   59030 main.go:141] libmachine: (pause-427304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:3d:7d", ip: ""} in network mk-pause-427304: {Iface:virbr4 ExpiryTime:2024-08-29 21:13:33 +0000 UTC Type:0 Mac:52:54:00:3d:3d:7d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:pause-427304 Clientid:01:52:54:00:3d:3d:7d}
	I0829 20:15:27.529030   59030 main.go:141] libmachine: (pause-427304) DBG | domain pause-427304 has defined IP address 192.168.50.229 and MAC address 52:54:00:3d:3d:7d in network mk-pause-427304
	I0829 20:15:27.529308   59030 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 20:15:27.535361   59030 kubeadm.go:883] updating cluster {Name:pause-427304 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:pause-427304 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:15:27.535534   59030 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:15:27.535607   59030 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:15:27.598811   59030 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:15:27.598831   59030 crio.go:433] Images already preloaded, skipping extraction
	I0829 20:15:27.598889   59030 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:15:27.644506   59030 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:15:27.644533   59030 cache_images.go:84] Images are preloaded, skipping loading
	I0829 20:15:27.644542   59030 kubeadm.go:934] updating node { 192.168.50.229 8443 v1.31.0 crio true true} ...
	I0829 20:15:27.644673   59030 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-427304 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-427304 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
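The drop-in rendered above can be checked on the node with the same commands the audit table records; a sketch, using the paths from the scp steps further below:

    # Show the kubelet unit together with the minikube drop-in that
    # overrides its ExecStart line.
    sudo systemctl cat kubelet --no-pager
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    # systemd only re-reads unit files after a daemon-reload.
    sudo systemctl daemon-reload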
	I0829 20:15:27.644753   59030 ssh_runner.go:195] Run: crio config
	I0829 20:15:27.699279   59030 cni.go:84] Creating CNI manager for ""
	I0829 20:15:27.699300   59030 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:15:27.699314   59030 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:15:27.699338   59030 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.229 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-427304 NodeName:pause-427304 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:15:27.699462   59030 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-427304"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:15:27.699515   59030 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:15:27.710079   59030 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:15:27.710154   59030 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:15:27.720290   59030 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0829 20:15:27.743362   59030 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:15:27.766031   59030 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
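The file just copied to /var/tmp/minikube/kubeadm.yaml.new is the config rendered above, and it can be exercised without touching the live cluster; a minimal sketch (the --dry-run invocation is illustrative, the test itself lets minikube drive kubeadm):

    # Render what kubeadm would do, writing manifests to a temporary
    # directory instead of /etc/kubernetes.
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run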
	I0829 20:15:27.786949   59030 ssh_runner.go:195] Run: grep 192.168.50.229	control-plane.minikube.internal$ /etc/hosts
	I0829 20:15:27.792456   59030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:15:27.947499   59030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:15:27.964638   59030 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/pause-427304 for IP: 192.168.50.229
	I0829 20:15:27.964665   59030 certs.go:194] generating shared ca certs ...
	I0829 20:15:27.964696   59030 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:15:27.964867   59030 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:15:27.964949   59030 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:15:27.964966   59030 certs.go:256] generating profile certs ...
	I0829 20:15:27.965069   59030 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/pause-427304/client.key
	I0829 20:15:27.965150   59030 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/pause-427304/apiserver.key.934a0f0a
	I0829 20:15:27.965199   59030 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/pause-427304/proxy-client.key
	I0829 20:15:27.965338   59030 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:15:27.965379   59030 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:15:27.965391   59030 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:15:27.965431   59030 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:15:27.965463   59030 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:15:27.965495   59030 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:15:27.965546   59030 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:15:27.966311   59030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:15:27.995519   59030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:15:28.025178   59030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:15:28.056755   59030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:15:28.083946   59030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/pause-427304/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 20:15:28.111969   59030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/pause-427304/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 20:15:28.137032   59030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/pause-427304/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:15:28.168482   59030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/pause-427304/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 20:15:28.197996   59030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:15:28.226072   59030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:15:28.253776   59030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:15:28.280098   59030 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:15:28.297904   59030 ssh_runner.go:195] Run: openssl version
	I0829 20:15:28.304103   59030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:15:28.315027   59030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:15:28.319817   59030 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:15:28.319883   59030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:15:28.325694   59030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:15:28.335593   59030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:15:28.347002   59030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:15:28.352069   59030 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:15:28.352129   59030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:15:28.358585   59030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:15:28.368841   59030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:15:28.380590   59030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:15:28.385430   59030 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:15:28.385492   59030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:15:28.391284   59030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
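The hash-then-symlink pairs above follow OpenSSL's c_rehash convention: the trust directory is searched by <subject-hash>.0 links, which is why each certificate is hashed before being linked. The same idea for one certificate, as a sketch using the paths already installed above:

    # OpenSSL resolves CAs in /etc/ssl/certs via subject-hash-named symlinks.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"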
	I0829 20:15:28.400481   59030 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:15:28.405693   59030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:15:28.411664   59030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:15:28.417396   59030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:15:28.423851   59030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:15:28.429481   59030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:15:28.435235   59030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
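Each -checkend probe above exits non-zero when the certificate lapses inside the given window, so 86400 asks "still valid in 24 hours?". The same check as a loop over the client certs just tested, as a sketch:

    # A non-zero exit from -checkend 86400 means the cert expires within a day.
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      sudo openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${c}.crt" || echo "${c}.crt expires soon"
    done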
	I0829 20:15:28.440924   59030 kubeadm.go:392] StartCluster: {Name:pause-427304 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:pause-427304 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:15:28.441068   59030 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:15:28.441114   59030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:15:28.482589   59030 cri.go:89] found id: "8a83a3079642c2e4fd7ad78ff0a98703ddce549995b31413c2bcf3c380506ba2"
	I0829 20:15:28.482613   59030 cri.go:89] found id: "fa8e822fca4ea9a76f7b57c813bbc26a1ec38840438d293352474f5aa4a12884"
	I0829 20:15:28.482618   59030 cri.go:89] found id: "3c94f91219e4aa9910a3a9135ec393361b822a4e304d0cf2d59dca99f671c058"
	I0829 20:15:28.482623   59030 cri.go:89] found id: "ea301729a6ea7cbdb49a56ca49c5ae2a5c82ee98bbd428aebf213a22f8d94365"
	I0829 20:15:28.482627   59030 cri.go:89] found id: "f09695cbc259d95513ffe61dde699108d39ac2cbb7a775019d75e9a6431d503b"
	I0829 20:15:28.482631   59030 cri.go:89] found id: "88a1c9c62831872aca25b5867426b43e4e32fccf535646c549e022b78fadaffc"
	I0829 20:15:28.482635   59030 cri.go:89] found id: ""
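The IDs above come from filtering CRI state by pod namespace rather than by asking the API server; the same query runs directly on the node, and any ID it returns can be expanded (the inspect call is illustrative, not part of the test):

    # All kube-system containers the runtime knows about, running or exited.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # Expand one returned ID to its full CRI metadata.
    sudo crictl inspect 8a83a3079642c2e4fd7ad78ff0a98703ddce549995b31413c2bcf3c380506ba2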
	I0829 20:15:28.482681   59030 ssh_runner.go:195] Run: sudo runc list -f json
	I0829 20:15:28.512576   59030 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"078d2151fe88ffe4efcd427dd819d1f19d0f78795ca01550e288b67efeac6031","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/078d2151fe88ffe4efcd427dd819d1f19d0f78795ca01550e288b67efeac6031/userdata","rootfs":"/var/lib/containers/storage/overlay/c9fefe1b6cc8d86b7e942336e9513381d1578c76dff139bab8e7f9caa9b3ec38/merged","created":"2024-08-29T20:13:52.174732343Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"e485deb004208cd0b688e61e58475d33\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.50.229:2379\",\"kubernetes.io/config.seen\":\"2024-08-29T20:13:51.573373151Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pode485deb004208cd0b688e61e58475d33","io.kubernetes.cri-o.ContainerID":"078d2151fe88ffe4efcd427d
d819d1f19d0f78795ca01550e288b67efeac6031","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-427304_kube-system_e485deb004208cd0b688e61e58475d33_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-29T20:13:52.037130471Z","io.kubernetes.cri-o.HostName":"pause-427304","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/078d2151fe88ffe4efcd427dd819d1f19d0f78795ca01550e288b67efeac6031/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"etcd-pause-427304","io.kubernetes.cri-o.Labels":"{\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"e485deb004208cd0b688e61e58475d33\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-427304\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-syste
m_etcd-pause-427304_e485deb004208cd0b688e61e58475d33/078d2151fe88ffe4efcd427dd819d1f19d0f78795ca01550e288b67efeac6031.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-427304\",\"uid\":\"e485deb004208cd0b688e61e58475d33\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c9fefe1b6cc8d86b7e942336e9513381d1578c76dff139bab8e7f9caa9b3ec38/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-427304_kube-system_e485deb004208cd0b688e61e58475d33_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/078d2151fe88ffe4efcd427dd819d1f19d0f78795ca01550e288b67efeac6031/use
rdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"078d2151fe88ffe4efcd427dd819d1f19d0f78795ca01550e288b67efeac6031","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-427304_kube-system_e485deb004208cd0b688e61e58475d33_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/078d2151fe88ffe4efcd427dd819d1f19d0f78795ca01550e288b67efeac6031/userdata/shm","io.kubernetes.pod.name":"etcd-pause-427304","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e485deb004208cd0b688e61e58475d33","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.229:2379","kubernetes.io/config.hash":"e485deb004208cd0b688e61e58475d33","kubernetes.io/config.seen":"2024-08-29T20:13:51.573373151Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0e56a7fd1ab96ecf74f0936adff30590e96b3d2382986b2bd7cd23ee104c7c29","pid":0,"status":"stop
ped","bundle":"/run/containers/storage/overlay-containers/0e56a7fd1ab96ecf74f0936adff30590e96b3d2382986b2bd7cd23ee104c7c29/userdata","rootfs":"/var/lib/containers/storage/overlay/21368c2334966ee6bc6c50e315fd5e0fafd98b43357a1ad420d172b797ef70a9/merged","created":"2024-08-29T20:14:03.438389895Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-08-29T20:14:02.708533036Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"1.0.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"7a:ea:08:6c:c8:f5\"},{\"name\":\"vethf2baf68a\",\"mac\":\"f2:b3:62:2e:9e:c0\"},{\"name\":\"eth0\",\"mac\":\"3a:81:00:3b:dd:dd\",\"sandbox\":\"/var/run/netns/5a2cae3d-227d-45dc-90ec-e6b7ddaee965\"}],\"ips\":[{\"interface\":2,\"address\":\"10.244.0.3/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"/kubep
ods/burstable/pod66abdfbd-c5d8-4753-b6f2-8c6a62504b09","io.kubernetes.cri-o.ContainerID":"0e56a7fd1ab96ecf74f0936adff30590e96b3d2382986b2bd7cd23ee104c7c29","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-6f6b679f8f-2vw8t_kube-system_66abdfbd-c5d8-4753-b6f2-8c6a62504b09_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-29T20:14:03.016209021Z","io.kubernetes.cri-o.HostName":"coredns-6f6b679f8f-2vw8t","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/0e56a7fd1ab96ecf74f0936adff30590e96b3d2382986b2bd7cd23ee104c7c29/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"coredns-6f6b679f8f-2vw8t","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"66abdfbd-c5d8-4753-b6f2-8c6a62504b09\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.nam
e\":\"coredns-6f6b679f8f-2vw8t\",\"pod-template-hash\":\"6f6b679f8f\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6f6b679f8f-2vw8t_66abdfbd-c5d8-4753-b6f2-8c6a62504b09/0e56a7fd1ab96ecf74f0936adff30590e96b3d2382986b2bd7cd23ee104c7c29.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-6f6b679f8f-2vw8t\",\"uid\":\"66abdfbd-c5d8-4753-b6f2-8c6a62504b09\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/21368c2334966ee6bc6c50e315fd5e0fafd98b43357a1ad420d172b797ef70a9/merged","io.kubernetes.cri-o.Name":"k8s_coredns-6f6b679f8f-2vw8t_kube-system_66abdfbd-c5d8-4753-b6f2-8c6a62504b09_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"memory_limit_in_bytes\":178257920,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernete
s.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/0e56a7fd1ab96ecf74f0936adff30590e96b3d2382986b2bd7cd23ee104c7c29/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"0e56a7fd1ab96ecf74f0936adff30590e96b3d2382986b2bd7cd23ee104c7c29","io.kubernetes.cri-o.SandboxName":"k8s_coredns-6f6b679f8f-2vw8t_kube-system_66abdfbd-c5d8-4753-b6f2-8c6a62504b09_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/0e56a7fd1ab96ecf74f0936adff30590e96b3d2382986b2bd7cd23ee104c7c29/userdata/shm","io.kubernetes.pod.name":"coredns-6f6b679f8f-2vw8t","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"66abdfbd-c5d8-4753-b6f2-8c6a62504b09","k8s-app":"kube-dns","kubernetes.io/config.seen":"2024-08-29T20:14:02.708533036Z","kubernetes.io/config.source":"api","pod-template-hash":"6f6b679f8f"},
"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c94f91219e4aa9910a3a9135ec393361b822a4e304d0cf2d59dca99f671c058","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3c94f91219e4aa9910a3a9135ec393361b822a4e304d0cf2d59dca99f671c058/userdata","rootfs":"/var/lib/containers/storage/overlay/7cea056a3933b6f5c6e0bc6abded1e7b2722a7bc4dcf9f0a28f09494990b0726/merged","created":"2024-08-29T20:13:52.449908181Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cdf7d3fa","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cdf7d3fa\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.termina
tionGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3c94f91219e4aa9910a3a9135ec393361b822a4e304d0cf2d59dca99f671c058","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-29T20:13:52.344239987Z","io.kubernetes.cri-o.Image":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri-o.ImageRef":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-427304\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e485deb004208cd0b688e61e58475d33\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-427304_e485deb004208cd0b688e61e58475d33/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7cea056a3933b6f5c6e0bc6abded1e7b2722a7bc4dcf9f0a28
f09494990b0726/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-427304_kube-system_e485deb004208cd0b688e61e58475d33_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/078d2151fe88ffe4efcd427dd819d1f19d0f78795ca01550e288b67efeac6031/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"078d2151fe88ffe4efcd427dd819d1f19d0f78795ca01550e288b67efeac6031","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-427304_kube-system_e485deb004208cd0b688e61e58475d33_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e485deb004208cd0b688e61e58475d33/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e485deb004208cd0
b688e61e58475d33/containers/etcd/a52325ab\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-427304","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e485deb004208cd0b688e61e58475d33","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.229:2379","kubernetes.io/config.hash":"e485deb004208cd0b688e61e58475d33","kubernetes.io/config.seen":"2024-08-29T20:13:51.573373151Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"418c4e31a8108a688c7a15687416272e106b5b1b35b9b5029fada4a7f91e391b","pid":0,"status":"stopped","bundle":"/run/con
tainers/storage/overlay-containers/418c4e31a8108a688c7a15687416272e106b5b1b35b9b5029fada4a7f91e391b/userdata","rootfs":"/var/lib/containers/storage/overlay/5db923d3fe7ad6bd8f6831fcce3dadac160aa639adda299b47632aa5aba5d793/merged","created":"2024-08-29T20:14:02.916844232Z","annotations":{"controller-revision-hash":"5976bc5f75","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-08-29T20:14:02.515121796Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/besteffort/podba344be5-d93e-4221-a2c1-95ef5db9b864","io.kubernetes.cri-o.ContainerID":"418c4e31a8108a688c7a15687416272e106b5b1b35b9b5029fada4a7f91e391b","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-sxfn5_kube-system_ba344be5-d93e-4221-a2c1-95ef5db9b864_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-29T20:14:02.842216081Z","io.kubernetes.cri-o.HostName":"pause-427304","io.kubernetes.cri
-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/418c4e31a8108a688c7a15687416272e106b5b1b35b9b5029fada4a7f91e391b/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-proxy-sxfn5","io.kubernetes.cri-o.Labels":"{\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"5976bc5f75\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"ba344be5-d93e-4221-a2c1-95ef5db9b864\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-sxfn5\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-sxfn5_ba344be5-d93e-4221-a2c1-95ef5db9b864/418c4e31a8108a688c7a15687416272e106b5b1b35b9b5029fada4a7f91e391b.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-sxfn5\",\"uid\":\"ba344be5-d93e-4221-a2c1-95ef5db9b864\",\"namespace\":\"kube-system\"}"
,"io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5db923d3fe7ad6bd8f6831fcce3dadac160aa639adda299b47632aa5aba5d793/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-sxfn5_kube-system_ba344be5-d93e-4221-a2c1-95ef5db9b864_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":2,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/418c4e31a8108a688c7a15687416272e106b5b1b35b9b5029fada4a7f91e391b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"418c4e31a8108a688c7a15687416272e106b5b1b35b9b5029fada4a7f91e391b","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-sxfn5_kube-system_ba344be5-d93e-4221-a2c1-95ef5db9b864
_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/418c4e31a8108a688c7a15687416272e106b5b1b35b9b5029fada4a7f91e391b/userdata/shm","io.kubernetes.pod.name":"kube-proxy-sxfn5","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ba344be5-d93e-4221-a2c1-95ef5db9b864","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2024-08-29T20:14:02.515121796Z","kubernetes.io/config.source":"api","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"831d9b2b6d11983aebc4345a8bd1e8516bffe3273bf1de132c4cc772dfa4f9e8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/831d9b2b6d11983aebc4345a8bd1e8516bffe3273bf1de132c4cc772dfa4f9e8/userdata","rootfs":"/var/lib/containers/storage/overlay/e3c2ddcbca228ae9e87640ccaaf8569229003ce7c8b257b6dd07c3ed8fac33d5/merged","created":"2024-08-29T20:13:52.142152315Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","
io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"9e7e6c8d5772976fbda01c7474d50951\",\"kubernetes.io/config.seen\":\"2024-08-29T20:13:51.573379397Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod9e7e6c8d5772976fbda01c7474d50951","io.kubernetes.cri-o.ContainerID":"831d9b2b6d11983aebc4345a8bd1e8516bffe3273bf1de132c4cc772dfa4f9e8","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-427304_kube-system_9e7e6c8d5772976fbda01c7474d50951_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-29T20:13:52.041548595Z","io.kubernetes.cri-o.HostName":"pause-427304","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/831d9b2b6d11983aebc4345a8bd1e8516bffe3273bf1de132c4cc772dfa4f9e8/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.
k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-427304","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-scheduler-pause-427304\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"9e7e6c8d5772976fbda01c7474d50951\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-427304_9e7e6c8d5772976fbda01c7474d50951/831d9b2b6d11983aebc4345a8bd1e8516bffe3273bf1de132c4cc772dfa4f9e8.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-427304\",\"uid\":\"9e7e6c8d5772976fbda01c7474d50951\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e3c2ddcbca228ae9e87640ccaaf8569229003ce7c8b257b6dd07c3ed8fac33d5/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-427304_kube-system_9e7e6c8d5772976fbda01c7474d50951_0","io.kubernetes.cri-o.Namespace":"kube-system","io.
kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/831d9b2b6d11983aebc4345a8bd1e8516bffe3273bf1de132c4cc772dfa4f9e8/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"831d9b2b6d11983aebc4345a8bd1e8516bffe3273bf1de132c4cc772dfa4f9e8","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-427304_kube-system_9e7e6c8d5772976fbda01c7474d50951_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/831d9b2b6d11983aebc4345a8bd1e8516bffe3273bf1de132c4cc772dfa4f9e8/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-427304","io.kubernetes.p
od.namespace":"kube-system","io.kubernetes.pod.uid":"9e7e6c8d5772976fbda01c7474d50951","kubernetes.io/config.hash":"9e7e6c8d5772976fbda01c7474d50951","kubernetes.io/config.seen":"2024-08-29T20:13:51.573379397Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"88a1c9c62831872aca25b5867426b43e4e32fccf535646c549e022b78fadaffc","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/88a1c9c62831872aca25b5867426b43e4e32fccf535646c549e022b78fadaffc/userdata","rootfs":"/var/lib/containers/storage/overlay/79cd1f4ade78e5f48367963026974ff0462a1ebb458e50f5ab054bf355379e07/merged","created":"2024-08-29T20:13:52.319389837Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f8fb4364","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernet
es.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f8fb4364\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"88a1c9c62831872aca25b5867426b43e4e32fccf535646c549e022b78fadaffc","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-29T20:13:52.209944095Z","io.kubernetes.cri-o.Image":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.31.0","io.kubernetes.cri-o.ImageRef":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-427304\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9e7e6c8d5772976fbda
01c7474d50951\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-427304_9e7e6c8d5772976fbda01c7474d50951/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/79cd1f4ade78e5f48367963026974ff0462a1ebb458e50f5ab054bf355379e07/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-427304_kube-system_9e7e6c8d5772976fbda01c7474d50951_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/831d9b2b6d11983aebc4345a8bd1e8516bffe3273bf1de132c4cc772dfa4f9e8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"831d9b2b6d11983aebc4345a8bd1e8516bffe3273bf1de132c4cc772dfa4f9e8","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-427304_kube-system_9e7e6c8d5772976fbda01c7474d50951_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.c
ri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9e7e6c8d5772976fbda01c7474d50951/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9e7e6c8d5772976fbda01c7474d50951/containers/kube-scheduler/16391496\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-427304","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9e7e6c8d5772976fbda01c7474d50951","kubernetes.io/config.hash":"9e7e6c8d5772976fbda01c7474d50951","kubernetes.io/config.seen":"2024-08-29T20:13:51.573379397Z","kubernetes.io/config.source":"file"
},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8a83a3079642c2e4fd7ad78ff0a98703ddce549995b31413c2bcf3c380506ba2","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8a83a3079642c2e4fd7ad78ff0a98703ddce549995b31413c2bcf3c380506ba2/userdata","rootfs":"/var/lib/containers/storage/overlay/2f3d2486f472e79f72048eafb7147e87e5ffd1b240a574f08cb752478f16f75a/merged","created":"2024-08-29T20:14:03.588737833Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e6f52134","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.containe
r.hash\":\"e6f52134\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8a83a3079642c2e4fd7ad78ff0a98703ddce549995b31413c2bcf3c380506ba2","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-29T20:14:03.525439967Z","io.kubernetes.cri-o.IP.0":"10.244.0.3","io.kubernetes.cri-o.Image":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.11.1","io.kubernetes.cri-o.ImageRef":"cbb01a7bd410dc08b
a382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-6f6b679f8f-2vw8t\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"66abdfbd-c5d8-4753-b6f2-8c6a62504b09\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6f6b679f8f-2vw8t_66abdfbd-c5d8-4753-b6f2-8c6a62504b09/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2f3d2486f472e79f72048eafb7147e87e5ffd1b240a574f08cb752478f16f75a/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-6f6b679f8f-2vw8t_kube-system_66abdfbd-c5d8-4753-b6f2-8c6a62504b09_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/0e56a7fd1ab96ecf74f0936adff30590e96b3d2382986b2bd7cd23ee104c7c29/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0e56a7fd1ab96ecf74f0936adff305
90e96b3d2382986b2bd7cd23ee104c7c29","io.kubernetes.cri-o.SandboxName":"k8s_coredns-6f6b679f8f-2vw8t_kube-system_66abdfbd-c5d8-4753-b6f2-8c6a62504b09_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/66abdfbd-c5d8-4753-b6f2-8c6a62504b09/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/66abdfbd-c5d8-4753-b6f2-8c6a62504b09/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/66abdfbd-c5d8-4753-b6f2-8c6a62504b09/containers/coredns/54a197f8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/
serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/66abdfbd-c5d8-4753-b6f2-8c6a62504b09/volumes/kubernetes.io~projected/kube-api-access-9pkp2\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-6f6b679f8f-2vw8t","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"66abdfbd-c5d8-4753-b6f2-8c6a62504b09","kubernetes.io/config.seen":"2024-08-29T20:14:02.708533036Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c2f2ec6f8deded7919ac3a88dd4a10cdf5b985518f1d190bbf93a320e1eff66","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9c2f2ec6f8deded7919ac3a88dd4a10cdf5b985518f1d190bbf93a320e1eff66/userdata","rootfs":"/var/lib/containers/storage/overlay/57cfe5ff2c640e092ace114af4fd3d21cca0b83b1f8a48d997d71d81abb74af9/merged","created":"2024-08-29T20:13:52.140564303Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o",
"io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.50.229:8443\",\"kubernetes.io/config.seen\":\"2024-08-29T20:13:51.573377020Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"c01c28270ec196cfa2ea1615e35abbf3\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podc01c28270ec196cfa2ea1615e35abbf3","io.kubernetes.cri-o.ContainerID":"9c2f2ec6f8deded7919ac3a88dd4a10cdf5b985518f1d190bbf93a320e1eff66","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-427304_kube-system_c01c28270ec196cfa2ea1615e35abbf3_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-29T20:13:52.051529566Z","io.kubernetes.cri-o.HostName":"pause-427304","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9c2f2ec6f8deded7919ac3a88dd4a10cdf5b985518f1d190bbf93a320e1eff66/userdata/hostname","io.ku
bernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-427304","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"c01c28270ec196cfa2ea1615e35abbf3\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-427304\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-427304_c01c28270ec196cfa2ea1615e35abbf3/9c2f2ec6f8deded7919ac3a88dd4a10cdf5b985518f1d190bbf93a320e1eff66.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-427304\",\"uid\":\"c01c28270ec196cfa2ea1615e35abbf3\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/57cfe5ff2c640e092ace114af4fd3d21cca0b83b1f8a48d997d71d81abb74af9/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-427304_kube-
system_c01c28270ec196cfa2ea1615e35abbf3_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9c2f2ec6f8deded7919ac3a88dd4a10cdf5b985518f1d190bbf93a320e1eff66/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9c2f2ec6f8deded7919ac3a88dd4a10cdf5b985518f1d190bbf93a320e1eff66","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-427304_kube-system_c01c28270ec196cfa2ea1615e35abbf3_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/9c2f2ec6f8deded7919ac3a88dd4a10cdf5b985518f1d190bbf93a320
e1eff66/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-427304","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"c01c28270ec196cfa2ea1615e35abbf3","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.229:8443","kubernetes.io/config.hash":"c01c28270ec196cfa2ea1615e35abbf3","kubernetes.io/config.seen":"2024-08-29T20:13:51.573377020Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bcc34951ad90887280ef11b1ff75bad02ee85efc4dac240f126ef1b5c03f143d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/bcc34951ad90887280ef11b1ff75bad02ee85efc4dac240f126ef1b5c03f143d/userdata","rootfs":"/var/lib/containers/storage/overlay/a6f0a3a57e8edf68b8ad15d29f7239a890699604dcb32602094aa87fbb7b875a/merged","created":"2024-08-29T20:13:52.141309971Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-
o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-08-29T20:13:51.573378296Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"b3291b09c0422698d1f5e365385be346\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podb3291b09c0422698d1f5e365385be346","io.kubernetes.cri-o.ContainerID":"bcc34951ad90887280ef11b1ff75bad02ee85efc4dac240f126ef1b5c03f143d","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-427304_kube-system_b3291b09c0422698d1f5e365385be346_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-29T20:13:52.04929169Z","io.kubernetes.cri-o.HostName":"pause-427304","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/bcc34951ad90887280ef11b1ff75bad02ee85efc4dac240f126ef1b5c03f143d/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeNam
e":"kube-controller-manager-pause-427304","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"b3291b09c0422698d1f5e365385be346\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-427304\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-427304_b3291b09c0422698d1f5e365385be346/bcc34951ad90887280ef11b1ff75bad02ee85efc4dac240f126ef1b5c03f143d.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-427304\",\"uid\":\"b3291b09c0422698d1f5e365385be346\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a6f0a3a57e8edf68b8ad15d29f7239a890699604dcb32602094aa87fbb7b875a/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-427304_kube-system_b3291b09c0422698d1f5e365385be346_0","io.kubernetes.cri-o.Namespace":"kube-syste
m","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/bcc34951ad90887280ef11b1ff75bad02ee85efc4dac240f126ef1b5c03f143d/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"bcc34951ad90887280ef11b1ff75bad02ee85efc4dac240f126ef1b5c03f143d","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-427304_kube-system_b3291b09c0422698d1f5e365385be346_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/bcc34951ad90887280ef11b1ff75bad02ee85efc4dac240f126ef1b5c03f143d/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause
-427304","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b3291b09c0422698d1f5e365385be346","kubernetes.io/config.hash":"b3291b09c0422698d1f5e365385be346","kubernetes.io/config.seen":"2024-08-29T20:13:51.573378296Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ea301729a6ea7cbdb49a56ca49c5ae2a5c82ee98bbd428aebf213a22f8d94365","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/ea301729a6ea7cbdb49a56ca49c5ae2a5c82ee98bbd428aebf213a22f8d94365/userdata","rootfs":"/var/lib/containers/storage/overlay/b0fc0bcffc6798663521135dd9218426cf856a2810331801f13f9ea556f8c583/merged","created":"2024-08-29T20:13:52.327983902Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3994b1a4","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.termination
MessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3994b1a4\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ea301729a6ea7cbdb49a56ca49c5ae2a5c82ee98bbd428aebf213a22f8d94365","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-29T20:13:52.25750636Z","io.kubernetes.cri-o.Image":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.31.0","io.kubernetes.cri-o.ImageRef":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-427304\",\"io.kubernetes.pod.namespace\":\"k
ube-system\",\"io.kubernetes.pod.uid\":\"b3291b09c0422698d1f5e365385be346\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-427304_b3291b09c0422698d1f5e365385be346/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b0fc0bcffc6798663521135dd9218426cf856a2810331801f13f9ea556f8c583/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-427304_kube-system_b3291b09c0422698d1f5e365385be346_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/bcc34951ad90887280ef11b1ff75bad02ee85efc4dac240f126ef1b5c03f143d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"bcc34951ad90887280ef11b1ff75bad02ee85efc4dac240f126ef1b5c03f143d","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-427304_kube-system_b3291b09c0422698d1f5e365385be346
_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b3291b09c0422698d1f5e365385be346/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b3291b09c0422698d1f5e365385be346/containers/kube-controller-manager/35d33edd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificate
s\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-427304","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b3291b09c0422698d1f5e365385be346","kubernetes.io/config.hash":"b3291b09c0422698d1f5e365385be346","kubernetes.io/config.seen":"2024-08-29T20:13:51.573378296Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f09695cbc259d95513ffe61dde699108d39ac2cbb7a775019d75e9a6431d503b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f09695cbc259d95513ffe
61dde699108d39ac2cbb7a775019d75e9a6431d503b/userdata","rootfs":"/var/lib/containers/storage/overlay/6eed5a79232cafa0c40da6bf5ba84d0d045fc8f8c31c3d34551329021ff3fc87/merged","created":"2024-08-29T20:13:52.424730408Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f72d0944","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f72d0944\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f09695cbc259d95513ffe61dde699108d39ac2cbb7a775019d75e9a6431d503b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created"
:"2024-08-29T20:13:52.240238066Z","io.kubernetes.cri-o.Image":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.31.0","io.kubernetes.cri-o.ImageRef":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-427304\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c01c28270ec196cfa2ea1615e35abbf3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-427304_c01c28270ec196cfa2ea1615e35abbf3/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6eed5a79232cafa0c40da6bf5ba84d0d045fc8f8c31c3d34551329021ff3fc87/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-427304_kube-system_c01c28270ec196cfa2ea1615e35abbf3_0",
"io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9c2f2ec6f8deded7919ac3a88dd4a10cdf5b985518f1d190bbf93a320e1eff66/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9c2f2ec6f8deded7919ac3a88dd4a10cdf5b985518f1d190bbf93a320e1eff66","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-427304_kube-system_c01c28270ec196cfa2ea1615e35abbf3_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c01c28270ec196cfa2ea1615e35abbf3/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c01c28270ec196cfa2ea1615e35abbf3/containers/kube-apiserver/2db701f1\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false
},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-427304","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c01c28270ec196cfa2ea1615e35abbf3","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.229:8443","kubernetes.io/config.hash":"c01c28270ec196cfa2ea1615e35abbf3","kubernetes.io/config.seen":"2024-08-29T20:13:51.573377020Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fa8e822fca4ea9a76f7b57c813bbc26a1ec38840438d293352474f5aa4a12884","pi
d":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/fa8e822fca4ea9a76f7b57c813bbc26a1ec38840438d293352474f5aa4a12884/userdata","rootfs":"/var/lib/containers/storage/overlay/011175a37e89cd2bc965ac8ee8d5fd64c699a2da63b5ed444a81e2bd68385eeb/merged","created":"2024-08-29T20:14:03.239968858Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"78ccb3c","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"78ccb3c\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fa8e822fca4ea9a76f7b57c813bbc26a1ec38840438d2933524
74f5aa4a12884","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-29T20:14:03.023233878Z","io.kubernetes.cri-o.Image":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.31.0","io.kubernetes.cri-o.ImageRef":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-sxfn5\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ba344be5-d93e-4221-a2c1-95ef5db9b864\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-sxfn5_ba344be5-d93e-4221-a2c1-95ef5db9b864/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/011175a37e89cd2bc965ac8ee8d5fd64c699a2da63b5ed444a81e2bd68385eeb/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-sxfn5_k
ube-system_ba344be5-d93e-4221-a2c1-95ef5db9b864_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/418c4e31a8108a688c7a15687416272e106b5b1b35b9b5029fada4a7f91e391b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"418c4e31a8108a688c7a15687416272e106b5b1b35b9b5029fada4a7f91e391b","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-sxfn5_kube-system_ba344be5-d93e-4221-a2c1-95ef5db9b864_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ba344be5-d9
3e-4221-a2c1-95ef5db9b864/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ba344be5-d93e-4221-a2c1-95ef5db9b864/containers/kube-proxy/3d0e940f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/ba344be5-d93e-4221-a2c1-95ef5db9b864/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/ba344be5-d93e-4221-a2c1-95ef5db9b864/volumes/kubernetes.io~projected/kube-api-access-b9nn9\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-sxfn5","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ba344be5-d93e-4221-a2c1-95ef5db9b864","kubernetes.i
o/config.seen":"2024-08-29T20:14:02.515121796Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I0829 20:15:28.513192   59030 cri.go:126] list returned 12 containers
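
The runc-style JSON dump above is the raw container list that the cri helper inspects before making the keep/skip decisions logged below. A minimal sketch of decoding just the two fields those decisions act on (id, status), assuming a hypothetical runcContainer type; the real entries carry far more annotation detail than is modeled here:

// container_list.go: illustrative only, not minikube's actual cri package.
package main

import (
	"encoding/json"
	"fmt"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // e.g. "stopped", "paused", "running"
}

func main() {
	// A trimmed stand-in for the JSON array dumped in the log above,
	// using one of the sandbox IDs that actually appears there.
	raw := `[{"id":"418c4e31a8108a688c7a15687416272e106b5b1b35b9b5029fada4a7f91e391b","status":"stopped"}]`

	var containers []runcContainer
	if err := json.Unmarshal([]byte(raw), &containers); err != nil {
		panic(err)
	}
	for _, c := range containers {
		fmt.Printf("container: {ID:%s Status:%s}\n", c.ID, c.Status)
	}
}
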
	I0829 20:15:28.513210   59030 cri.go:129] container: {ID:078d2151fe88ffe4efcd427dd819d1f19d0f78795ca01550e288b67efeac6031 Status:stopped}
	I0829 20:15:28.513247   59030 cri.go:131] skipping 078d2151fe88ffe4efcd427dd819d1f19d0f78795ca01550e288b67efeac6031 - not in ps
	I0829 20:15:28.513258   59030 cri.go:129] container: {ID:0e56a7fd1ab96ecf74f0936adff30590e96b3d2382986b2bd7cd23ee104c7c29 Status:stopped}
	I0829 20:15:28.513265   59030 cri.go:131] skipping 0e56a7fd1ab96ecf74f0936adff30590e96b3d2382986b2bd7cd23ee104c7c29 - not in ps
	I0829 20:15:28.513270   59030 cri.go:129] container: {ID:3c94f91219e4aa9910a3a9135ec393361b822a4e304d0cf2d59dca99f671c058 Status:stopped}
	I0829 20:15:28.513281   59030 cri.go:135] skipping {3c94f91219e4aa9910a3a9135ec393361b822a4e304d0cf2d59dca99f671c058 stopped}: state = "stopped", want "paused"
	I0829 20:15:28.513295   59030 cri.go:129] container: {ID:418c4e31a8108a688c7a15687416272e106b5b1b35b9b5029fada4a7f91e391b Status:stopped}
	I0829 20:15:28.513305   59030 cri.go:131] skipping 418c4e31a8108a688c7a15687416272e106b5b1b35b9b5029fada4a7f91e391b - not in ps
	I0829 20:15:28.513312   59030 cri.go:129] container: {ID:831d9b2b6d11983aebc4345a8bd1e8516bffe3273bf1de132c4cc772dfa4f9e8 Status:stopped}
	I0829 20:15:28.513321   59030 cri.go:131] skipping 831d9b2b6d11983aebc4345a8bd1e8516bffe3273bf1de132c4cc772dfa4f9e8 - not in ps
	I0829 20:15:28.513325   59030 cri.go:129] container: {ID:88a1c9c62831872aca25b5867426b43e4e32fccf535646c549e022b78fadaffc Status:stopped}
	I0829 20:15:28.513334   59030 cri.go:135] skipping {88a1c9c62831872aca25b5867426b43e4e32fccf535646c549e022b78fadaffc stopped}: state = "stopped", want "paused"
	I0829 20:15:28.513345   59030 cri.go:129] container: {ID:8a83a3079642c2e4fd7ad78ff0a98703ddce549995b31413c2bcf3c380506ba2 Status:stopped}
	I0829 20:15:28.513356   59030 cri.go:135] skipping {8a83a3079642c2e4fd7ad78ff0a98703ddce549995b31413c2bcf3c380506ba2 stopped}: state = "stopped", want "paused"
	I0829 20:15:28.513365   59030 cri.go:129] container: {ID:9c2f2ec6f8deded7919ac3a88dd4a10cdf5b985518f1d190bbf93a320e1eff66 Status:stopped}
	I0829 20:15:28.513373   59030 cri.go:131] skipping 9c2f2ec6f8deded7919ac3a88dd4a10cdf5b985518f1d190bbf93a320e1eff66 - not in ps
	I0829 20:15:28.513377   59030 cri.go:129] container: {ID:bcc34951ad90887280ef11b1ff75bad02ee85efc4dac240f126ef1b5c03f143d Status:stopped}
	I0829 20:15:28.513388   59030 cri.go:131] skipping bcc34951ad90887280ef11b1ff75bad02ee85efc4dac240f126ef1b5c03f143d - not in ps
	I0829 20:15:28.513395   59030 cri.go:129] container: {ID:ea301729a6ea7cbdb49a56ca49c5ae2a5c82ee98bbd428aebf213a22f8d94365 Status:stopped}
	I0829 20:15:28.513404   59030 cri.go:135] skipping {ea301729a6ea7cbdb49a56ca49c5ae2a5c82ee98bbd428aebf213a22f8d94365 stopped}: state = "stopped", want "paused"
	I0829 20:15:28.513412   59030 cri.go:129] container: {ID:f09695cbc259d95513ffe61dde699108d39ac2cbb7a775019d75e9a6431d503b Status:stopped}
	I0829 20:15:28.513418   59030 cri.go:135] skipping {f09695cbc259d95513ffe61dde699108d39ac2cbb7a775019d75e9a6431d503b stopped}: state = "stopped", want "paused"
	I0829 20:15:28.513427   59030 cri.go:129] container: {ID:fa8e822fca4ea9a76f7b57c813bbc26a1ec38840438d293352474f5aa4a12884 Status:stopped}
	I0829 20:15:28.513434   59030 cri.go:135] skipping {fa8e822fca4ea9a76f7b57c813bbc26a1ec38840438d293352474f5aa4a12884 stopped}: state = "stopped", want "paused"
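
The skip decisions above reduce to a two-step filter: drop containers whose IDs are absent from the `crictl ps --quiet` result set ("not in ps"), then drop containers whose state differs from the wanted "paused". A self-contained sketch of that logic, with illustrative names:

package main

import "fmt"

type criContainer struct {
	ID     string
	Status string
}

// filterByState keeps a container only if it appears in the ps ID set and
// its state matches the wanted one, mirroring the cri.go lines above.
func filterByState(all []criContainer, inPS map[string]bool, want string) []criContainer {
	var kept []criContainer
	for _, c := range all {
		if !inPS[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		if c.Status != want {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
			continue
		}
		kept = append(kept, c)
	}
	return kept
}

func main() {
	all := []criContainer{{ID: "abc123", Status: "stopped"}} // hypothetical ID
	fmt.Println(filterByState(all, map[string]bool{"abc123": true}, "paused"))
}
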
	I0829 20:15:28.513486   59030 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:15:28.527551   59030 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:15:28.527568   59030 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:15:28.527611   59030 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:15:28.541141   59030 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:15:28.542164   59030 kubeconfig.go:125] found "pause-427304" server: "https://192.168.50.229:8443"
	I0829 20:15:28.543742   59030 kapi.go:59] client config for pause-427304: &rest.Config{Host:"https://192.168.50.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/pause-427304/client.crt", KeyFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/pause-427304/client.key", CAFile:"/home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]st
ring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
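
The rest.Config dump above shows the essential wiring: the host URL plus the client certificate, key, and CA paths from the profile directory (the remaining zero-valued fields are defaults). A sketch of building an equivalent client with client-go, using the paths taken verbatim from the dump:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and TLS material as logged by kapi.go above.
	cfg := &rest.Config{
		Host: "https://192.168.50.229:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/pause-427304/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/19530-11185/.minikube/profiles/pause-427304/client.key",
			CAFile:   "/home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready against %s: %T\n", cfg.Host, clientset)
}
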
	I0829 20:15:28.544454   59030 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:15:28.557175   59030 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.229
	I0829 20:15:28.557226   59030 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:15:28.557243   59030 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:15:28.557305   59030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:15:28.602467   59030 cri.go:89] found id: "8a83a3079642c2e4fd7ad78ff0a98703ddce549995b31413c2bcf3c380506ba2"
	I0829 20:15:28.602487   59030 cri.go:89] found id: "fa8e822fca4ea9a76f7b57c813bbc26a1ec38840438d293352474f5aa4a12884"
	I0829 20:15:28.602490   59030 cri.go:89] found id: "3c94f91219e4aa9910a3a9135ec393361b822a4e304d0cf2d59dca99f671c058"
	I0829 20:15:28.602500   59030 cri.go:89] found id: "ea301729a6ea7cbdb49a56ca49c5ae2a5c82ee98bbd428aebf213a22f8d94365"
	I0829 20:15:28.602503   59030 cri.go:89] found id: "f09695cbc259d95513ffe61dde699108d39ac2cbb7a775019d75e9a6431d503b"
	I0829 20:15:28.602506   59030 cri.go:89] found id: "88a1c9c62831872aca25b5867426b43e4e32fccf535646c549e022b78fadaffc"
	I0829 20:15:28.602509   59030 cri.go:89] found id: ""
	I0829 20:15:28.602514   59030 cri.go:252] Stopping containers: [8a83a3079642c2e4fd7ad78ff0a98703ddce549995b31413c2bcf3c380506ba2 fa8e822fca4ea9a76f7b57c813bbc26a1ec38840438d293352474f5aa4a12884 3c94f91219e4aa9910a3a9135ec393361b822a4e304d0cf2d59dca99f671c058 ea301729a6ea7cbdb49a56ca49c5ae2a5c82ee98bbd428aebf213a22f8d94365 f09695cbc259d95513ffe61dde699108d39ac2cbb7a775019d75e9a6431d503b 88a1c9c62831872aca25b5867426b43e4e32fccf535646c549e022b78fadaffc]
	I0829 20:15:28.602581   59030 ssh_runner.go:195] Run: which crictl
	I0829 20:15:28.607550   59030 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 8a83a3079642c2e4fd7ad78ff0a98703ddce549995b31413c2bcf3c380506ba2 fa8e822fca4ea9a76f7b57c813bbc26a1ec38840438d293352474f5aa4a12884 3c94f91219e4aa9910a3a9135ec393361b822a4e304d0cf2d59dca99f671c058 ea301729a6ea7cbdb49a56ca49c5ae2a5c82ee98bbd428aebf213a22f8d94365 f09695cbc259d95513ffe61dde699108d39ac2cbb7a775019d75e9a6431d503b 88a1c9c62831872aca25b5867426b43e4e32fccf535646c549e022b78fadaffc
	I0829 20:15:28.685510   59030 ssh_runner.go:195] Run: sudo systemctl stop kubelet
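
The stop sequence above is a two-command affair: list every kube-system container ID with crictl, then pass the whole batch to `crictl stop` with a 10-second grace timeout before the kubelet itself is stopped. A sketch of driving the same pair of commands locally (the real code runs them through ssh_runner on the guest):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: collect all kube-system container IDs, running or not.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	// Step 2: stop the whole batch with a 10s timeout, as in the log.
	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
	fmt.Println("stopping:", ids)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		panic(err)
	}
}
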
	I0829 20:15:28.736345   59030 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:15:28.746985   59030 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Aug 29 20:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Aug 29 20:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Aug 29 20:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Aug 29 20:13 /etc/kubernetes/scheduler.conf
	
	I0829 20:15:28.747051   59030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:15:28.756882   59030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:15:28.766634   59030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:15:28.776105   59030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:15:28.776159   59030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:15:28.785168   59030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:15:28.794515   59030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:15:28.794611   59030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
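
The grep-and-remove passes above check each kubeadm-managed kubeconfig for the expected control-plane endpoint and delete any file that no longer references it, so the kubeconfig phase further down can regenerate it. A pure-Go sketch of the same check, using the file paths from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		data, err := os.ReadFile(conf)
		if err != nil {
			continue // missing file: nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			_ = os.Remove(conf) // regenerated by `kubeadm init phase kubeconfig`
		}
	}
}
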
	I0829 20:15:28.805327   59030 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:15:28.814950   59030 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:15:28.881226   59030 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:15:30.016170   59030 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.134909879s)
	I0829 20:15:30.016211   59030 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:15:30.282526   59030 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:15:30.374977   59030 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
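
Rather than a full `kubeadm init`, the restart path replays individual init phases in order, all against the same rendered config file. A sketch of the sequence visible above (certs, kubeconfig, kubelet-start, control-plane, etcd):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The phase order logged above; each reuses the rendered kubeadm.yaml.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		fmt.Println("kubeadm", args)
		if err := exec.Command("kubeadm", args...).Run(); err != nil {
			panic(err)
		}
	}
}
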
	I0829 20:15:30.488144   59030 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:15:30.488234   59030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:15:30.989233   59030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:15:31.488936   59030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:15:31.503315   59030 api_server.go:72] duration metric: took 1.015169764s to wait for apiserver process to appear ...
	I0829 20:15:31.503343   59030 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:15:31.503364   59030 api_server.go:253] Checking apiserver healthz at https://192.168.50.229:8443/healthz ...
	I0829 20:15:31.503825   59030 api_server.go:269] stopped: https://192.168.50.229:8443/healthz: Get "https://192.168.50.229:8443/healthz": dial tcp 192.168.50.229:8443: connect: connection refused
	I0829 20:15:27.161317   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:27.161825   59477 main.go:141] libmachine: (cert-options-323073) DBG | unable to find current IP address of domain cert-options-323073 in network mk-cert-options-323073
	I0829 20:15:27.161846   59477 main.go:141] libmachine: (cert-options-323073) DBG | I0829 20:15:27.161778   59529 retry.go:31] will retry after 1.022984129s: waiting for machine to come up
	I0829 20:15:28.186666   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:28.187171   59477 main.go:141] libmachine: (cert-options-323073) DBG | unable to find current IP address of domain cert-options-323073 in network mk-cert-options-323073
	I0829 20:15:28.187202   59477 main.go:141] libmachine: (cert-options-323073) DBG | I0829 20:15:28.187143   59529 retry.go:31] will retry after 1.020021032s: waiting for machine to come up
	I0829 20:15:29.208815   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:29.209294   59477 main.go:141] libmachine: (cert-options-323073) DBG | unable to find current IP address of domain cert-options-323073 in network mk-cert-options-323073
	I0829 20:15:29.209327   59477 main.go:141] libmachine: (cert-options-323073) DBG | I0829 20:15:29.209258   59529 retry.go:31] will retry after 1.574515814s: waiting for machine to come up
	I0829 20:15:30.784938   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:30.785349   59477 main.go:141] libmachine: (cert-options-323073) DBG | unable to find current IP address of domain cert-options-323073 in network mk-cert-options-323073
	I0829 20:15:30.785382   59477 main.go:141] libmachine: (cert-options-323073) DBG | I0829 20:15:30.785336   59529 retry.go:31] will retry after 2.146460463s: waiting for machine to come up
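
The libmachine lines above poll for the domain's IP with a growing, jittered delay between attempts (roughly 1s, 1s, 1.6s, 2.1s here). A sketch of that retry shape, assuming a hypothetical lookup callback and a documentation-range IP:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup, sleeping a growing, jittered interval between
// attempts, in the spirit of the retry.go lines above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay += delay / 2
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	attempt := 0
	ip, err := waitForIP(func() (string, error) {
		attempt++
		if attempt < 3 { // hypothetical: the domain reports an IP on the third poll
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.0.2.10", nil // illustrative address (TEST-NET-1)
	}, 10)
	fmt.Println(ip, err)
}
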
	I0829 20:15:30.365356   59725 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:15:30.365421   59725 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 20:15:30.365433   59725 cache.go:56] Caching tarball of preloaded images
	I0829 20:15:30.365541   59725 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 20:15:30.365554   59725 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 20:15:30.365679   59725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/config.json ...
	I0829 20:15:30.365943   59725 start.go:360] acquireMachinesLock for kubernetes-upgrade-714305: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:15:32.003518   59030 api_server.go:253] Checking apiserver healthz at https://192.168.50.229:8443/healthz ...
	I0829 20:15:34.354386   59030 api_server.go:279] https://192.168.50.229:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:15:34.354423   59030 api_server.go:103] status: https://192.168.50.229:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:15:34.354441   59030 api_server.go:253] Checking apiserver healthz at https://192.168.50.229:8443/healthz ...
	I0829 20:15:34.417034   59030 api_server.go:279] https://192.168.50.229:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:15:34.417067   59030 api_server.go:103] status: https://192.168.50.229:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:15:34.504173   59030 api_server.go:253] Checking apiserver healthz at https://192.168.50.229:8443/healthz ...
	I0829 20:15:34.520445   59030 api_server.go:279] https://192.168.50.229:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:15:34.520482   59030 api_server.go:103] status: https://192.168.50.229:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:15:35.004262   59030 api_server.go:253] Checking apiserver healthz at https://192.168.50.229:8443/healthz ...
	I0829 20:15:35.018101   59030 api_server.go:279] https://192.168.50.229:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:15:35.018303   59030 api_server.go:103] status: https://192.168.50.229:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:15:35.503527   59030 api_server.go:253] Checking apiserver healthz at https://192.168.50.229:8443/healthz ...
	I0829 20:15:35.513708   59030 api_server.go:279] https://192.168.50.229:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:15:35.513736   59030 api_server.go:103] status: https://192.168.50.229:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:15:36.004439   59030 api_server.go:253] Checking apiserver healthz at https://192.168.50.229:8443/healthz ...
	I0829 20:15:36.008917   59030 api_server.go:279] https://192.168.50.229:8443/healthz returned 200:
	ok
	I0829 20:15:36.016011   59030 api_server.go:141] control plane version: v1.31.0
	I0829 20:15:36.016042   59030 api_server.go:131] duration metric: took 4.512691439s to wait for apiserver health ...
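Context for the retries above: minikube polls the apiserver's verbose healthz report, where each [+]/[-] line is an individual check. The 500s persist only while the [-] post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still reseeding their objects after the restart; once every hook reports ok the endpoint flips to 200, as it did here after ~4.5s. The same report can be pulled by hand; a sketch, with the context name inferred from the pod names later in this log:

    kubectl --context pause-427304 get --raw '/healthz?verbose'
    # or anonymously, relying on the system:public-info-viewer role
    # granting GET on /healthz to unauthenticated users:
    curl -k 'https://192.168.50.229:8443/healthz?verbose'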
	I0829 20:15:36.016052   59030 cni.go:84] Creating CNI manager for ""
	I0829 20:15:36.016061   59030 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:15:36.018084   59030 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:15:36.019564   59030 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:15:36.032037   59030 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
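The 496-byte 1-k8s.conflist copied here is minikube's bridge CNI configuration. Its exact contents are not in the log; a minimal conflist of the shape the bridge and portmap plugins accept looks roughly like this (all values illustrative, not the actual payload):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF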
	I0829 20:15:36.050362   59030 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:15:36.050440   59030 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0829 20:15:36.050460   59030 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0829 20:15:36.062874   59030 system_pods.go:59] 6 kube-system pods found
	I0829 20:15:36.062911   59030 system_pods.go:61] "coredns-6f6b679f8f-2vw8t" [66abdfbd-c5d8-4753-b6f2-8c6a62504b09] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:15:36.062922   59030 system_pods.go:61] "etcd-pause-427304" [670ae940-1a17-48f3-82ef-12d93d949f0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:15:36.062932   59030 system_pods.go:61] "kube-apiserver-pause-427304" [11ac4e41-9614-4187-bfed-82b255e968d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:15:36.062941   59030 system_pods.go:61] "kube-controller-manager-pause-427304" [673d5c06-1d07-436a-8c20-427f31543d16] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:15:36.062951   59030 system_pods.go:61] "kube-proxy-sxfn5" [ba344be5-d93e-4221-a2c1-95ef5db9b864] Running
	I0829 20:15:36.062961   59030 system_pods.go:61] "kube-scheduler-pause-427304" [01b9aeec-bf9d-4e4b-bbf0-c39e33bdc239] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:15:36.062968   59030 system_pods.go:74] duration metric: took 12.586877ms to wait for pod list to return data ...
	I0829 20:15:36.062981   59030 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:15:36.068937   59030 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:15:36.068966   59030 node_conditions.go:123] node cpu capacity is 2
	I0829 20:15:36.068977   59030 node_conditions.go:105] duration metric: took 5.991107ms to run NodePressure ...
	I0829 20:15:36.068994   59030 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:15:36.329619   59030 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:15:36.333925   59030 kubeadm.go:739] kubelet initialised
	I0829 20:15:36.333950   59030 kubeadm.go:740] duration metric: took 4.298857ms waiting for restarted kubelet to initialise ...
	I0829 20:15:36.333961   59030 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:15:36.338323   59030 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-2vw8t" in "kube-system" namespace to be "Ready" ...
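pod_ready.go polls the pod's Ready condition until it flips or the 4m0s budget expires. The equivalent one-liner, assuming the pause-427304 kubeconfig context:

    kubectl --context pause-427304 -n kube-system \
      wait --for=condition=Ready pod/coredns-6f6b679f8f-2vw8t --timeout=4m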
	I0829 20:15:32.933943   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:32.934454   59477 main.go:141] libmachine: (cert-options-323073) DBG | unable to find current IP address of domain cert-options-323073 in network mk-cert-options-323073
	I0829 20:15:32.934505   59477 main.go:141] libmachine: (cert-options-323073) DBG | I0829 20:15:32.934429   59529 retry.go:31] will retry after 2.700967343s: waiting for machine to come up
	I0829 20:15:35.637195   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:35.637573   59477 main.go:141] libmachine: (cert-options-323073) DBG | unable to find current IP address of domain cert-options-323073 in network mk-cert-options-323073
	I0829 20:15:35.637596   59477 main.go:141] libmachine: (cert-options-323073) DBG | I0829 20:15:35.637559   59529 retry.go:31] will retry after 3.314233361s: waiting for machine to come up
	I0829 20:15:38.349032   59030 pod_ready.go:103] pod "coredns-6f6b679f8f-2vw8t" in "kube-system" namespace has status "Ready":"False"
	I0829 20:15:40.844894   59030 pod_ready.go:103] pod "coredns-6f6b679f8f-2vw8t" in "kube-system" namespace has status "Ready":"False"
	I0829 20:15:38.952967   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:38.953453   59477 main.go:141] libmachine: (cert-options-323073) DBG | unable to find current IP address of domain cert-options-323073 in network mk-cert-options-323073
	I0829 20:15:38.953468   59477 main.go:141] libmachine: (cert-options-323073) DBG | I0829 20:15:38.953406   59529 retry.go:31] will retry after 4.430424333s: waiting for machine to come up
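The retry loop above is the kvm2 driver waiting for libvirt to hand the new VM a DHCP lease on its MAC, backing off a little longer on each attempt. The lease table it is polling can be inspected on the host (network name and URI from the log):

    virsh --connect qemu:///system net-dhcp-leases mk-cert-options-323073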
	I0829 20:15:44.875767   59725 start.go:364] duration metric: took 14.509789803s to acquireMachinesLock for "kubernetes-upgrade-714305"
	I0829 20:15:44.875823   59725 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:15:44.875831   59725 fix.go:54] fixHost starting: 
	I0829 20:15:44.876153   59725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 20:15:44.876197   59725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:15:44.894361   59725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36171
	I0829 20:15:44.894788   59725 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:15:44.895273   59725 main.go:141] libmachine: Using API Version  1
	I0829 20:15:44.895305   59725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:15:44.895659   59725 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:15:44.895870   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:15:44.896040   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetState
	I0829 20:15:44.897553   59725 fix.go:112] recreateIfNeeded on kubernetes-upgrade-714305: state=Running err=<nil>
	W0829 20:15:44.897571   59725 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:15:44.899479   59725 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-714305" VM ...
	I0829 20:15:44.900977   59725 machine.go:93] provisionDockerMachine start ...
	I0829 20:15:44.901001   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:15:44.901231   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:15:44.903732   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:44.904119   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:15:07 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:15:44.904150   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:44.904290   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:15:44.904490   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:15:44.904642   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:15:44.904783   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:15:44.904929   59725 main.go:141] libmachine: Using SSH client type: native
	I0829 20:15:44.905180   59725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0829 20:15:44.905196   59725 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:15:45.011554   59725 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-714305
	
	I0829 20:15:45.011585   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetMachineName
	I0829 20:15:45.011840   59725 buildroot.go:166] provisioning hostname "kubernetes-upgrade-714305"
	I0829 20:15:45.011869   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetMachineName
	I0829 20:15:45.012066   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:15:45.014877   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:45.015328   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:15:07 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:15:45.015353   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:45.015531   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:15:45.015695   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:15:45.015847   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:15:45.016015   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:15:45.016247   59725 main.go:141] libmachine: Using SSH client type: native
	I0829 20:15:45.016433   59725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0829 20:15:45.016446   59725 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-714305 && echo "kubernetes-upgrade-714305" | sudo tee /etc/hostname
	I0829 20:15:45.136949   59725 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-714305
	
	I0829 20:15:45.136977   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:15:45.139917   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:45.140313   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:15:07 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:15:45.140339   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:45.140463   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:15:45.140653   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:15:45.140794   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:15:45.140945   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:15:45.141113   59725 main.go:141] libmachine: Using SSH client type: native
	I0829 20:15:45.141300   59725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0829 20:15:45.141324   59725 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-714305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-714305/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-714305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:15:45.243543   59725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
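The hosts script above is idempotent: it rewrites or appends the 127.0.1.1 entry only when /etc/hosts does not already map the new hostname, which is why re-provisioning this already-running machine produced empty output. A quick check on the guest:

    grep -E '^127\.0\.1\.1\s' /etc/hosts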
	I0829 20:15:45.243572   59725 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:15:45.243615   59725 buildroot.go:174] setting up certificates
	I0829 20:15:45.243633   59725 provision.go:84] configureAuth start
	I0829 20:15:45.243650   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetMachineName
	I0829 20:15:45.243943   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetIP
	I0829 20:15:45.246928   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:45.247351   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:15:07 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:15:45.247395   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:43.385728   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:43.386156   59477 main.go:141] libmachine: (cert-options-323073) Found IP for machine: 192.168.83.47
	I0829 20:15:43.386170   59477 main.go:141] libmachine: (cert-options-323073) Reserving static IP address...
	I0829 20:15:43.386190   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has current primary IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:43.386590   59477 main.go:141] libmachine: (cert-options-323073) DBG | unable to find host DHCP lease matching {name: "cert-options-323073", mac: "52:54:00:b0:5e:2d", ip: "192.168.83.47"} in network mk-cert-options-323073
	I0829 20:15:43.462069   59477 main.go:141] libmachine: (cert-options-323073) DBG | Getting to WaitForSSH function...
	I0829 20:15:43.462110   59477 main.go:141] libmachine: (cert-options-323073) Reserved static IP address: 192.168.83.47
	I0829 20:15:43.462146   59477 main.go:141] libmachine: (cert-options-323073) Waiting for SSH to be available...
	I0829 20:15:43.464512   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:43.464832   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:43.464849   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:43.465008   59477 main.go:141] libmachine: (cert-options-323073) DBG | Using SSH client type: external
	I0829 20:15:43.465022   59477 main.go:141] libmachine: (cert-options-323073) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/cert-options-323073/id_rsa (-rw-------)
	I0829 20:15:43.465040   59477 main.go:141] libmachine: (cert-options-323073) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/cert-options-323073/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:15:43.465045   59477 main.go:141] libmachine: (cert-options-323073) DBG | About to run SSH command:
	I0829 20:15:43.465054   59477 main.go:141] libmachine: (cert-options-323073) DBG | exit 0
	I0829 20:15:43.598793   59477 main.go:141] libmachine: (cert-options-323073) DBG | SSH cmd err, output: <nil>: 
	I0829 20:15:43.599097   59477 main.go:141] libmachine: (cert-options-323073) KVM machine creation complete!
	I0829 20:15:43.599398   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetConfigRaw
	I0829 20:15:43.599917   59477 main.go:141] libmachine: (cert-options-323073) Calling .DriverName
	I0829 20:15:43.600126   59477 main.go:141] libmachine: (cert-options-323073) Calling .DriverName
	I0829 20:15:43.600279   59477 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 20:15:43.600286   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetState
	I0829 20:15:43.601432   59477 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 20:15:43.601439   59477 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 20:15:43.601443   59477 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 20:15:43.601448   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHHostname
	I0829 20:15:43.604042   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:43.604408   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:43.604417   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:43.604554   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHPort
	I0829 20:15:43.604738   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:43.604900   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:43.605066   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHUsername
	I0829 20:15:43.605233   59477 main.go:141] libmachine: Using SSH client type: native
	I0829 20:15:43.605422   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.47 22 <nil> <nil>}
	I0829 20:15:43.605426   59477 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 20:15:43.718012   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:15:43.718023   59477 main.go:141] libmachine: Detecting the provisioner...
	I0829 20:15:43.718029   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHHostname
	I0829 20:15:43.721002   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:43.721287   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:43.721312   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:43.721461   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHPort
	I0829 20:15:43.721659   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:43.721803   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:43.721940   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHUsername
	I0829 20:15:43.722141   59477 main.go:141] libmachine: Using SSH client type: native
	I0829 20:15:43.722299   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.47 22 <nil> <nil>}
	I0829 20:15:43.722304   59477 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 20:15:43.835506   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 20:15:43.835568   59477 main.go:141] libmachine: found compatible host: buildroot
	I0829 20:15:43.835573   59477 main.go:141] libmachine: Provisioning with buildroot...
	I0829 20:15:43.835580   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetMachineName
	I0829 20:15:43.835829   59477 buildroot.go:166] provisioning hostname "cert-options-323073"
	I0829 20:15:43.835845   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetMachineName
	I0829 20:15:43.836036   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHHostname
	I0829 20:15:43.838759   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:43.839136   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:43.839153   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:43.839257   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHPort
	I0829 20:15:43.839432   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:43.839555   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:43.839656   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHUsername
	I0829 20:15:43.839755   59477 main.go:141] libmachine: Using SSH client type: native
	I0829 20:15:43.839951   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.47 22 <nil> <nil>}
	I0829 20:15:43.839958   59477 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-323073 && echo "cert-options-323073" | sudo tee /etc/hostname
	I0829 20:15:43.969447   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-323073
	
	I0829 20:15:43.969461   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHHostname
	I0829 20:15:43.972401   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:43.972741   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:43.972768   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:43.972952   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHPort
	I0829 20:15:43.973126   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:43.973261   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:43.973362   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHUsername
	I0829 20:15:43.973514   59477 main.go:141] libmachine: Using SSH client type: native
	I0829 20:15:43.973681   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.47 22 <nil> <nil>}
	I0829 20:15:43.973692   59477 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-323073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-323073/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-323073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:15:44.091526   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:15:44.091546   59477 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:15:44.091598   59477 buildroot.go:174] setting up certificates
	I0829 20:15:44.091610   59477 provision.go:84] configureAuth start
	I0829 20:15:44.091620   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetMachineName
	I0829 20:15:44.091908   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetIP
	I0829 20:15:44.094792   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.095149   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:44.095170   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.095386   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHHostname
	I0829 20:15:44.097554   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.097835   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:44.097854   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.098057   59477 provision.go:143] copyHostCerts
	I0829 20:15:44.098103   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:15:44.098114   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:15:44.098172   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:15:44.098297   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:15:44.098303   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:15:44.098334   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:15:44.098400   59477 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:15:44.098403   59477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:15:44.098421   59477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:15:44.098496   59477 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.cert-options-323073 san=[127.0.0.1 192.168.83.47 cert-options-323073 localhost minikube]
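configureAuth mints a fresh server certificate whose SANs cover loopback, the VM IP and the hostname aliases listed in san=[...] above. The SAN list can be confirmed from the workspace (path from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'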
	I0829 20:15:44.193037   59477 provision.go:177] copyRemoteCerts
	I0829 20:15:44.193075   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:15:44.193094   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHHostname
	I0829 20:15:44.195920   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.196280   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:44.196312   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.196508   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHPort
	I0829 20:15:44.196694   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:44.196821   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHUsername
	I0829 20:15:44.196942   59477 sshutil.go:53] new ssh client: &{IP:192.168.83.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/cert-options-323073/id_rsa Username:docker}
	I0829 20:15:44.285870   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:15:44.314301   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 20:15:44.342270   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 20:15:44.368507   59477 provision.go:87] duration metric: took 276.886439ms to configureAuth
	I0829 20:15:44.368532   59477 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:15:44.368742   59477 config.go:182] Loaded profile config "cert-options-323073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:15:44.368827   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHHostname
	I0829 20:15:44.371592   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.371988   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:44.372011   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.372217   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHPort
	I0829 20:15:44.372390   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:44.372578   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:44.372722   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHUsername
	I0829 20:15:44.372920   59477 main.go:141] libmachine: Using SSH client type: native
	I0829 20:15:44.373089   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.47 22 <nil> <nil>}
	I0829 20:15:44.373098   59477 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:15:44.616225   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
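The drop-in written above marks the whole service CIDR (10.96.0.0/12) as an insecure registry, so CRI-O will pull from in-cluster registries such as the registry addon's ClusterIP service over plain HTTP. To confirm the option is picked up after the restart (assuming the crio unit sources /etc/sysconfig/crio.minikube as an environment file, which is how minikube's ISO wires it):

    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i environment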
	
	I0829 20:15:44.616239   59477 main.go:141] libmachine: Checking connection to Docker...
	I0829 20:15:44.616245   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetURL
	I0829 20:15:44.617731   59477 main.go:141] libmachine: (cert-options-323073) DBG | Using libvirt version 6000000
	I0829 20:15:44.619943   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.620312   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:44.620335   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.620442   59477 main.go:141] libmachine: Docker is up and running!
	I0829 20:15:44.620449   59477 main.go:141] libmachine: Reticulating splines...
	I0829 20:15:44.620454   59477 client.go:171] duration metric: took 22.930312767s to LocalClient.Create
	I0829 20:15:44.620479   59477 start.go:167] duration metric: took 22.930374833s to libmachine.API.Create "cert-options-323073"
	I0829 20:15:44.620485   59477 start.go:293] postStartSetup for "cert-options-323073" (driver="kvm2")
	I0829 20:15:44.620493   59477 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:15:44.620505   59477 main.go:141] libmachine: (cert-options-323073) Calling .DriverName
	I0829 20:15:44.620725   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:15:44.620741   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHHostname
	I0829 20:15:44.622685   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.623184   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:44.623217   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.623319   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHPort
	I0829 20:15:44.623471   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:44.623612   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHUsername
	I0829 20:15:44.623708   59477 sshutil.go:53] new ssh client: &{IP:192.168.83.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/cert-options-323073/id_rsa Username:docker}
	I0829 20:15:44.709638   59477 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:15:44.714155   59477 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:15:44.714172   59477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:15:44.714246   59477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:15:44.714319   59477 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:15:44.714400   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:15:44.724207   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:15:44.753103   59477 start.go:296] duration metric: took 132.608103ms for postStartSetup
	I0829 20:15:44.753139   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetConfigRaw
	I0829 20:15:44.753745   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetIP
	I0829 20:15:44.756896   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.757172   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:44.757191   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.757475   59477 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/config.json ...
	I0829 20:15:44.757649   59477 start.go:128] duration metric: took 23.365913148s to createHost
	I0829 20:15:44.757664   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHHostname
	I0829 20:15:44.759746   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.760007   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:44.760040   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.760146   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHPort
	I0829 20:15:44.760308   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:44.760424   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:44.760540   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHUsername
	I0829 20:15:44.760680   59477 main.go:141] libmachine: Using SSH client type: native
	I0829 20:15:44.760856   59477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.47 22 <nil> <nil>}
	I0829 20:15:44.760863   59477 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:15:44.875662   59477 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724962544.834250480
	
	I0829 20:15:44.875674   59477 fix.go:216] guest clock: 1724962544.834250480
	I0829 20:15:44.875679   59477 fix.go:229] Guest: 2024-08-29 20:15:44.83425048 +0000 UTC Remote: 2024-08-29 20:15:44.757654647 +0000 UTC m=+28.138285732 (delta=76.595833ms)
	I0829 20:15:44.875706   59477 fix.go:200] guest clock delta is within tolerance: 76.595833ms
	I0829 20:15:44.875710   59477 start.go:83] releasing machines lock for "cert-options-323073", held for 23.484174967s
	I0829 20:15:44.875732   59477 main.go:141] libmachine: (cert-options-323073) Calling .DriverName
	I0829 20:15:44.875993   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetIP
	I0829 20:15:44.878983   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.879398   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:44.879415   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.879625   59477 main.go:141] libmachine: (cert-options-323073) Calling .DriverName
	I0829 20:15:44.880198   59477 main.go:141] libmachine: (cert-options-323073) Calling .DriverName
	I0829 20:15:44.880394   59477 main.go:141] libmachine: (cert-options-323073) Calling .DriverName
	I0829 20:15:44.880496   59477 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:15:44.880527   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHHostname
	I0829 20:15:44.880568   59477 ssh_runner.go:195] Run: cat /version.json
	I0829 20:15:44.880583   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHHostname
	I0829 20:15:44.883432   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.883697   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:44.883722   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.883879   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.883905   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHPort
	I0829 20:15:44.884104   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:44.884228   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHUsername
	I0829 20:15:44.884369   59477 sshutil.go:53] new ssh client: &{IP:192.168.83.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/cert-options-323073/id_rsa Username:docker}
	I0829 20:15:44.884427   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:44.884447   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:44.884567   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHPort
	I0829 20:15:44.884705   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHKeyPath
	I0829 20:15:44.884846   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetSSHUsername
	I0829 20:15:44.884974   59477 sshutil.go:53] new ssh client: &{IP:192.168.83.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/cert-options-323073/id_rsa Username:docker}
	I0829 20:15:44.996290   59477 ssh_runner.go:195] Run: systemctl --version
	I0829 20:15:45.002754   59477 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:15:45.172051   59477 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:15:45.179965   59477 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:15:45.180130   59477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:15:45.197397   59477 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:15:45.197408   59477 start.go:495] detecting cgroup driver to use...
	I0829 20:15:45.197459   59477 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:15:45.217898   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:15:45.232498   59477 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:15:45.232538   59477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:15:45.249157   59477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:15:45.264237   59477 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:15:45.390768   59477 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:15:45.558462   59477 docker.go:233] disabling docker service ...
	I0829 20:15:45.558526   59477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:15:45.575666   59477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:15:45.589159   59477 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:15:45.723148   59477 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:15:45.864643   59477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:15:45.880768   59477 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:15:45.901411   59477 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:15:45.901465   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:15:45.911821   59477 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:15:45.911882   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:15:45.921965   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:15:45.932040   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:15:45.942516   59477 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:15:45.952564   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:15:45.963235   59477 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:15:45.982037   59477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
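Taken together, the sed edits above pin the pause image, select cgroupfs as the cgroup manager, move conmon into the pod cgroup, and open the unprivileged port range. The drop-in should end up roughly like this (section layout reconstructed from the commands, not copied from the guest):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"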
	I0829 20:15:45.992800   59477 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:15:46.001813   59477 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:15:46.001851   59477 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:15:46.015803   59477 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
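The status-255 sysctl probe above fails only because br_netfilter is not loaded yet; once modprobe succeeds the bridge sysctls exist, and the echo switches on IPv4 forwarding. Verifying on the guest:

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward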
	I0829 20:15:46.025853   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:15:46.147363   59477 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:15:46.242428   59477 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:15:46.242495   59477 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:15:46.247194   59477 start.go:563] Will wait 60s for crictl version
	I0829 20:15:46.247234   59477 ssh_runner.go:195] Run: which crictl
	I0829 20:15:46.250937   59477 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:15:46.291986   59477 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
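With the socket up, crictl can talk to CRI-O directly, which is useful before kubelet is running; for example:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images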
	I0829 20:15:46.292048   59477 ssh_runner.go:195] Run: crio --version
	I0829 20:15:46.325043   59477 ssh_runner.go:195] Run: crio --version
	I0829 20:15:46.356931   59477 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:15:42.845296   59030 pod_ready.go:103] pod "coredns-6f6b679f8f-2vw8t" in "kube-system" namespace has status "Ready":"False"
	I0829 20:15:43.344129   59030 pod_ready.go:93] pod "coredns-6f6b679f8f-2vw8t" in "kube-system" namespace has status "Ready":"True"
	I0829 20:15:43.344150   59030 pod_ready.go:82] duration metric: took 7.005805432s for pod "coredns-6f6b679f8f-2vw8t" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:43.344160   59030 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:45.350811   59030 pod_ready.go:103] pod "etcd-pause-427304" in "kube-system" namespace has status "Ready":"False"
	I0829 20:15:46.358284   59477 main.go:141] libmachine: (cert-options-323073) Calling .GetIP
	I0829 20:15:46.361130   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:46.361476   59477 main.go:141] libmachine: (cert-options-323073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:5e:2d", ip: ""} in network mk-cert-options-323073: {Iface:virbr3 ExpiryTime:2024-08-29 21:15:38 +0000 UTC Type:0 Mac:52:54:00:b0:5e:2d Iaid: IPaddr:192.168.83.47 Prefix:24 Hostname:cert-options-323073 Clientid:01:52:54:00:b0:5e:2d}
	I0829 20:15:46.361532   59477 main.go:141] libmachine: (cert-options-323073) DBG | domain cert-options-323073 has defined IP address 192.168.83.47 and MAC address 52:54:00:b0:5e:2d in network mk-cert-options-323073
	I0829 20:15:46.361720   59477 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0829 20:15:46.365831   59477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:15:46.379048   59477 kubeadm.go:883] updating cluster {Name:cert-options-323073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:cert-options-323073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.47 Port:8555 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:15:46.379149   59477 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:15:46.379188   59477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:15:46.415054   59477 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:15:46.415108   59477 ssh_runner.go:195] Run: which lz4
	I0829 20:15:46.419247   59477 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:15:46.423502   59477 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:15:46.423527   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 20:15:47.351384   59030 pod_ready.go:103] pod "etcd-pause-427304" in "kube-system" namespace has status "Ready":"False"
	I0829 20:15:49.351541   59030 pod_ready.go:93] pod "etcd-pause-427304" in "kube-system" namespace has status "Ready":"True"
	I0829 20:15:49.351573   59030 pod_ready.go:82] duration metric: took 6.007405455s for pod "etcd-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:49.351586   59030 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:49.356638   59030 pod_ready.go:93] pod "kube-apiserver-pause-427304" in "kube-system" namespace has status "Ready":"True"
	I0829 20:15:49.356660   59030 pod_ready.go:82] duration metric: took 5.0652ms for pod "kube-apiserver-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:49.356672   59030 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:49.361529   59030 pod_ready.go:93] pod "kube-controller-manager-pause-427304" in "kube-system" namespace has status "Ready":"True"
	I0829 20:15:49.361550   59030 pod_ready.go:82] duration metric: took 4.87103ms for pod "kube-controller-manager-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:49.361562   59030 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sxfn5" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:49.366364   59030 pod_ready.go:93] pod "kube-proxy-sxfn5" in "kube-system" namespace has status "Ready":"True"
	I0829 20:15:49.366385   59030 pod_ready.go:82] duration metric: took 4.815611ms for pod "kube-proxy-sxfn5" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:49.366396   59030 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:49.371733   59030 pod_ready.go:93] pod "kube-scheduler-pause-427304" in "kube-system" namespace has status "Ready":"True"
	I0829 20:15:49.371760   59030 pod_ready.go:82] duration metric: took 5.355366ms for pod "kube-scheduler-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:49.371770   59030 pod_ready.go:39] duration metric: took 13.03779799s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:15:49.371790   59030 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:15:49.386975   59030 ops.go:34] apiserver oom_adj: -16
	I0829 20:15:49.386998   59030 kubeadm.go:597] duration metric: took 20.859422789s to restartPrimaryControlPlane
	I0829 20:15:49.387029   59030 kubeadm.go:394] duration metric: took 20.946091675s to StartCluster
	I0829 20:15:49.387049   59030 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:15:49.387137   59030 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:15:49.388819   59030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:15:49.389151   59030 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:15:49.389265   59030 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:15:49.389398   59030 config.go:182] Loaded profile config "pause-427304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:15:49.391974   59030 out.go:177] * Enabled addons: 
	I0829 20:15:49.391988   59030 out.go:177] * Verifying Kubernetes components...
	I0829 20:15:45.247561   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:15:45.250589   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:45.250933   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:15:07 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:15:45.250959   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:45.251138   59725 provision.go:143] copyHostCerts
	I0829 20:15:45.251183   59725 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:15:45.251192   59725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:15:45.251239   59725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:15:45.251332   59725 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:15:45.251340   59725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:15:45.251360   59725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:15:45.251423   59725 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:15:45.251430   59725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:15:45.251447   59725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:15:45.251509   59725 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-714305 san=[127.0.0.1 192.168.39.140 kubernetes-upgrade-714305 localhost minikube]
	I0829 20:15:45.357620   59725 provision.go:177] copyRemoteCerts
	I0829 20:15:45.357678   59725 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:15:45.357700   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:15:45.360571   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:45.360904   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:15:07 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:15:45.360932   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:45.361097   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:15:45.361282   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:15:45.361471   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:15:45.361604   59725 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/id_rsa Username:docker}
	I0829 20:15:45.452590   59725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:15:45.478776   59725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0829 20:15:45.508441   59725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:15:45.537750   59725 provision.go:87] duration metric: took 294.100934ms to configureAuth
	I0829 20:15:45.537778   59725 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:15:45.537995   59725 config.go:182] Loaded profile config "kubernetes-upgrade-714305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:15:45.538108   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:15:45.540767   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:45.541134   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:15:07 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:15:45.541167   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:45.541371   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:15:45.541549   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:15:45.541730   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:15:45.541908   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:15:45.542081   59725 main.go:141] libmachine: Using SSH client type: native
	I0829 20:15:45.542237   59725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0829 20:15:45.542252   59725 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:15:49.393277   59030 addons.go:510] duration metric: took 4.013919ms for enable addons: enabled=[]
	I0829 20:15:49.393324   59030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:15:49.579104   59030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:15:49.597196   59030 node_ready.go:35] waiting up to 6m0s for node "pause-427304" to be "Ready" ...
	I0829 20:15:49.600287   59030 node_ready.go:49] node "pause-427304" has status "Ready":"True"
	I0829 20:15:49.600308   59030 node_ready.go:38] duration metric: took 3.079663ms for node "pause-427304" to be "Ready" ...
	I0829 20:15:49.600320   59030 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:15:49.752226   59030 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2vw8t" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:50.149370   59030 pod_ready.go:93] pod "coredns-6f6b679f8f-2vw8t" in "kube-system" namespace has status "Ready":"True"
	I0829 20:15:50.149390   59030 pod_ready.go:82] duration metric: took 397.131589ms for pod "coredns-6f6b679f8f-2vw8t" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:50.149400   59030 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:50.548681   59030 pod_ready.go:93] pod "etcd-pause-427304" in "kube-system" namespace has status "Ready":"True"
	I0829 20:15:50.548701   59030 pod_ready.go:82] duration metric: took 399.294702ms for pod "etcd-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:50.548710   59030 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:50.948917   59030 pod_ready.go:93] pod "kube-apiserver-pause-427304" in "kube-system" namespace has status "Ready":"True"
	I0829 20:15:50.948948   59030 pod_ready.go:82] duration metric: took 400.230256ms for pod "kube-apiserver-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:50.948963   59030 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:51.349675   59030 pod_ready.go:93] pod "kube-controller-manager-pause-427304" in "kube-system" namespace has status "Ready":"True"
	I0829 20:15:51.349702   59030 pod_ready.go:82] duration metric: took 400.730726ms for pod "kube-controller-manager-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:51.349716   59030 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sxfn5" in "kube-system" namespace to be "Ready" ...
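
The pod_ready waits interleaved above all follow one pattern: poll the pod's Ready condition at a fixed interval until it flips to True or the deadline (4m0s here, 6m0s after the restart) expires, and record the elapsed time as a duration metric. A compact sketch of that loop, with an inline stand-in for the apiserver query (the helper names are illustrative, not minikube's pod_ready.go API):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // waitReady polls check every interval until it reports true, the
    // context deadline passes, or check returns a hard error.
    func waitReady(ctx context.Context, interval time.Duration, check func() (bool, error)) (time.Duration, error) {
        start := time.Now()
        tick := time.NewTicker(interval)
        defer tick.Stop()
        for {
            ok, err := check()
            if err != nil {
                return time.Since(start), err
            }
            if ok {
                return time.Since(start), nil
            }
            select {
            case <-ctx.Done():
                return time.Since(start), errors.New("timed out waiting for readiness")
            case <-tick.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        polls := 0
        took, err := waitReady(ctx, 500*time.Millisecond, func() (bool, error) {
            polls++ // stand-in for querying the pod's Ready condition via the apiserver
            return polls >= 3, nil
        })
        fmt.Println(took, err)
    }
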
	I0829 20:15:47.756946   59477 crio.go:462] duration metric: took 1.337723724s to copy over tarball
	I0829 20:15:47.757047   59477 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:15:49.913361   59477 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.15626361s)
	I0829 20:15:49.913381   59477 crio.go:469] duration metric: took 2.156415742s to extract the tarball
	I0829 20:15:49.913389   59477 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 20:15:49.950654   59477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:15:50.000202   59477 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:15:50.000215   59477 cache_images.go:84] Images are preloaded, skipping loading
	I0829 20:15:50.000222   59477 kubeadm.go:934] updating node { 192.168.83.47 8555 v1.31.0 crio true true} ...
	I0829 20:15:50.000315   59477 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-options-323073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:cert-options-323073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
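
In the kubelet unit rendered above, the bare ExecStart= line is the standard systemd override idiom: an empty assignment clears the ExecStart inherited from kubelet.service before the next line installs minikube's invocation with --hostname-override and --node-ip pinned to this node. A sketch of rendering such a drop-in (the template mirrors the log; writing it to 10-kubeadm.conf matches the scp destination shown a little further down):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const tmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/%[1]s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%[2]s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%[3]s

    [Install]
    `
        unit := fmt.Sprintf(tmpl, "v1.31.0", "cert-options-323073", "192.168.83.47")
        // the real file lands in /etc/systemd/system/kubelet.service.d/
        if err := os.WriteFile("10-kubeadm.conf", []byte(unit), 0644); err != nil {
            fmt.Println(err)
        }
    }
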
	I0829 20:15:50.000376   59477 ssh_runner.go:195] Run: crio config
	I0829 20:15:50.052423   59477 cni.go:84] Creating CNI manager for ""
	I0829 20:15:50.052445   59477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:15:50.052482   59477 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:15:50.052509   59477 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.47 APIServerPort:8555 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-323073 NodeName:cert-options-323073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:15:50.052758   59477 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.47
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-options-323073"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:15:50.052828   59477 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:15:50.063664   59477 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:15:50.063714   59477 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:15:50.073232   59477 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0829 20:15:50.090482   59477 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:15:50.107454   59477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0829 20:15:50.124147   59477 ssh_runner.go:195] Run: grep 192.168.83.47	control-plane.minikube.internal$ /etc/hosts
	I0829 20:15:50.128222   59477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
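
The bash one-liner above is an idempotent hosts-file update: drop any existing line ending in the tab-separated name, append the fresh mapping, and install the result with sudo cp (copying over the file keeps its inode, which matters when /etc/hosts is a mount). The same logic in Go, as a sketch; the rename-based install is a simplification of the cp step.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites path so exactly one line maps name to ip.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // drop any prior mapping for this name (tab-separated, as in the log)
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }

    func main() {
        if err := ensureHostsEntry("hosts", "192.168.83.47", "control-plane.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }
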
	I0829 20:15:50.141809   59477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:15:50.279839   59477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:15:50.299524   59477 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073 for IP: 192.168.83.47
	I0829 20:15:50.299534   59477 certs.go:194] generating shared ca certs ...
	I0829 20:15:50.299552   59477 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:15:50.299712   59477 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:15:50.299766   59477 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:15:50.299774   59477 certs.go:256] generating profile certs ...
	I0829 20:15:50.299863   59477 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/client.key
	I0829 20:15:50.299884   59477 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/client.crt with IP's: []
	I0829 20:15:50.632728   59477 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/client.crt ...
	I0829 20:15:50.632743   59477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/client.crt: {Name:mkcdf2029457ae4d4656656537724793962eabf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:15:50.632912   59477 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/client.key ...
	I0829 20:15:50.632918   59477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/client.key: {Name:mk2fc38283af3483a6463048182e90db12beed53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:15:50.632993   59477 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/apiserver.key.d3e1d476
	I0829 20:15:50.633004   59477 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/apiserver.crt.d3e1d476 with IP's: [127.0.0.1 192.168.15.15 10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.47]
	I0829 20:15:50.860068   59477 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/apiserver.crt.d3e1d476 ...
	I0829 20:15:50.860081   59477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/apiserver.crt.d3e1d476: {Name:mkad1eca50afe9980025420214a26c4163e77a59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:15:50.860229   59477 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/apiserver.key.d3e1d476 ...
	I0829 20:15:50.860236   59477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/apiserver.key.d3e1d476: {Name:mk93a6f0ded976fea772ba3e549712ef01a67e96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:15:50.860300   59477 certs.go:381] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/apiserver.crt.d3e1d476 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/apiserver.crt
	I0829 20:15:50.860387   59477 certs.go:385] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/apiserver.key.d3e1d476 -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/apiserver.key
	I0829 20:15:50.860435   59477 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/proxy-client.key
	I0829 20:15:50.860445   59477 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/proxy-client.crt with IP's: []
	I0829 20:15:51.342988   59477 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/proxy-client.crt ...
	I0829 20:15:51.343006   59477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/proxy-client.crt: {Name:mk0a59883351bf89f1ffe86cba0b84bae651baf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:15:51.343172   59477 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/proxy-client.key ...
	I0829 20:15:51.343181   59477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/proxy-client.key: {Name:mk401f448a2d0e5ef0ea2648132ddd51c5e9c116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:15:51.343339   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:15:51.343369   59477 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:15:51.343374   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:15:51.343394   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:15:51.343413   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:15:51.343428   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:15:51.343459   59477 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:15:51.343990   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:15:51.383376   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:15:51.412757   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:15:51.437734   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:15:51.465293   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1480 bytes)
	I0829 20:15:51.491886   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 20:15:51.519194   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:15:51.547958   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:15:51.578748   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:15:51.604890   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:15:51.631765   59477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:15:51.659083   59477 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:15:51.678831   59477 ssh_runner.go:195] Run: openssl version
	I0829 20:15:51.684759   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:15:51.697726   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:15:51.703800   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:15:51.703849   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:15:51.710191   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:15:51.722118   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:15:51.733373   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:15:51.738598   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:15:51.738655   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:15:51.745348   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:15:51.757204   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:15:51.768723   59477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:15:51.773797   59477 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:15:51.773839   59477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:15:51.779841   59477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
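
The openssl x509 -hash runs above compute each certificate's subject hash, because OpenSSL resolves trusted CAs in /etc/ssl/certs through <hash>.0 symlinks; that is why minikubeCA.pem gets the b5213941.0 link while the test certs get 3ec20f2e.0 and 51391683.0. A sketch of producing such a link (assumes openssl on PATH; the cert path is one from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        // `openssl x509 -hash -noout` prints the subject hash OpenSSL uses
        // to look up trusted CAs inside a certs directory.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // os.Symlink fails if the link exists; the log guards with test -L || ln -fs.
        if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
            fmt.Println(err)
            return
        }
        fmt.Println("trust link:", link)
    }
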
	I0829 20:15:51.791338   59477 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:15:51.796049   59477 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 20:15:51.796099   59477 kubeadm.go:392] StartCluster: {Name:cert-options-323073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:cert-options-323073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.47 Port:8555 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:15:51.796162   59477 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:15:51.796203   59477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:15:51.840314   59477 cri.go:89] found id: ""
	I0829 20:15:51.840373   59477 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:15:51.850314   59477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:15:51.859773   59477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:15:51.870025   59477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:15:51.870034   59477 kubeadm.go:157] found existing configuration files:
	
	I0829 20:15:51.870071   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf
	I0829 20:15:51.878831   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:15:51.878887   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:15:51.889009   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf
	I0829 20:15:51.898904   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:15:51.898954   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:15:51.909036   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf
	I0829 20:15:51.918829   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:15:51.918875   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:15:51.928487   59477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf
	I0829 20:15:51.941167   59477 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:15:51.941206   59477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:15:51.953844   59477 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:15:52.067066   59477 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 20:15:52.067221   59477 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:15:52.193175   59477 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:15:52.193263   59477 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:15:52.193390   59477 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:15:52.202694   59477 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
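
The init invocation above puts /var/lib/minikube/binaries/v1.31.0 first on PATH so the cached kubeadm binary is used, and suppresses exactly the preflight checks a reusable test VM is expected to trip: pre-existing manifest and data directories, an occupied port 10250, swap, and low CPU or memory. A sketch of assembling that command line (the helper name is illustrative, and the ignore list below is abbreviated from the log):

    package main

    import (
        "fmt"
        "strings"
    )

    // kubeadmInitCmd builds the bash -c command handed to ssh_runner:
    // cached binaries first on PATH, the rendered config, and the
    // comma-joined list of preflight errors to ignore.
    func kubeadmInitCmd(version, config string, ignored []string) string {
        return fmt.Sprintf(
            `sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init --config %s --ignore-preflight-errors=%s`,
            version, config, strings.Join(ignored, ","))
    }

    func main() {
        ignored := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
            "Port-10250", "Swap", "NumCPU", "Mem",
        }
        fmt.Println(kubeadmInitCmd("v1.31.0", "/var/tmp/minikube/kubeadm.yaml", ignored))
    }
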
	I0829 20:15:51.750815   59030 pod_ready.go:93] pod "kube-proxy-sxfn5" in "kube-system" namespace has status "Ready":"True"
	I0829 20:15:51.750836   59030 pod_ready.go:82] duration metric: took 401.112423ms for pod "kube-proxy-sxfn5" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:51.750847   59030 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:52.149718   59030 pod_ready.go:93] pod "kube-scheduler-pause-427304" in "kube-system" namespace has status "Ready":"True"
	I0829 20:15:52.149746   59030 pod_ready.go:82] duration metric: took 398.892501ms for pod "kube-scheduler-pause-427304" in "kube-system" namespace to be "Ready" ...
	I0829 20:15:52.149756   59030 pod_ready.go:39] duration metric: took 2.549424298s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:15:52.149773   59030 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:15:52.149832   59030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:15:52.166480   59030 api_server.go:72] duration metric: took 2.777289299s to wait for apiserver process to appear ...
	I0829 20:15:52.166513   59030 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:15:52.166586   59030 api_server.go:253] Checking apiserver healthz at https://192.168.50.229:8443/healthz ...
	I0829 20:15:52.172265   59030 api_server.go:279] https://192.168.50.229:8443/healthz returned 200:
	ok
	I0829 20:15:52.173472   59030 api_server.go:141] control plane version: v1.31.0
	I0829 20:15:52.173500   59030 api_server.go:131] duration metric: took 6.975705ms to wait for apiserver health ...
	I0829 20:15:52.173510   59030 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:15:52.353706   59030 system_pods.go:59] 6 kube-system pods found
	I0829 20:15:52.353739   59030 system_pods.go:61] "coredns-6f6b679f8f-2vw8t" [66abdfbd-c5d8-4753-b6f2-8c6a62504b09] Running
	I0829 20:15:52.353747   59030 system_pods.go:61] "etcd-pause-427304" [670ae940-1a17-48f3-82ef-12d93d949f0b] Running
	I0829 20:15:52.353752   59030 system_pods.go:61] "kube-apiserver-pause-427304" [11ac4e41-9614-4187-bfed-82b255e968d5] Running
	I0829 20:15:52.353758   59030 system_pods.go:61] "kube-controller-manager-pause-427304" [673d5c06-1d07-436a-8c20-427f31543d16] Running
	I0829 20:15:52.353763   59030 system_pods.go:61] "kube-proxy-sxfn5" [ba344be5-d93e-4221-a2c1-95ef5db9b864] Running
	I0829 20:15:52.353768   59030 system_pods.go:61] "kube-scheduler-pause-427304" [01b9aeec-bf9d-4e4b-bbf0-c39e33bdc239] Running
	I0829 20:15:52.353775   59030 system_pods.go:74] duration metric: took 180.257876ms to wait for pod list to return data ...
	I0829 20:15:52.353785   59030 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:15:52.548807   59030 default_sa.go:45] found service account: "default"
	I0829 20:15:52.548838   59030 default_sa.go:55] duration metric: took 195.045746ms for default service account to be created ...
	I0829 20:15:52.548851   59030 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:15:52.751163   59030 system_pods.go:86] 6 kube-system pods found
	I0829 20:15:52.751193   59030 system_pods.go:89] "coredns-6f6b679f8f-2vw8t" [66abdfbd-c5d8-4753-b6f2-8c6a62504b09] Running
	I0829 20:15:52.751201   59030 system_pods.go:89] "etcd-pause-427304" [670ae940-1a17-48f3-82ef-12d93d949f0b] Running
	I0829 20:15:52.751208   59030 system_pods.go:89] "kube-apiserver-pause-427304" [11ac4e41-9614-4187-bfed-82b255e968d5] Running
	I0829 20:15:52.751213   59030 system_pods.go:89] "kube-controller-manager-pause-427304" [673d5c06-1d07-436a-8c20-427f31543d16] Running
	I0829 20:15:52.751217   59030 system_pods.go:89] "kube-proxy-sxfn5" [ba344be5-d93e-4221-a2c1-95ef5db9b864] Running
	I0829 20:15:52.751222   59030 system_pods.go:89] "kube-scheduler-pause-427304" [01b9aeec-bf9d-4e4b-bbf0-c39e33bdc239] Running
	I0829 20:15:52.751231   59030 system_pods.go:126] duration metric: took 202.373394ms to wait for k8s-apps to be running ...
	I0829 20:15:52.751241   59030 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:15:52.751298   59030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:15:52.771446   59030 system_svc.go:56] duration metric: took 20.196212ms WaitForService to wait for kubelet
	I0829 20:15:52.771481   59030 kubeadm.go:582] duration metric: took 3.382293184s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:15:52.771505   59030 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:15:52.950080   59030 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:15:52.950102   59030 node_conditions.go:123] node cpu capacity is 2
	I0829 20:15:52.950114   59030 node_conditions.go:105] duration metric: took 178.603103ms to run NodePressure ...
	I0829 20:15:52.950127   59030 start.go:241] waiting for startup goroutines ...
	I0829 20:15:52.950136   59030 start.go:246] waiting for cluster config update ...
	I0829 20:15:52.950147   59030 start.go:255] writing updated cluster config ...
	I0829 20:15:53.005248   59030 ssh_runner.go:195] Run: rm -f paused
	I0829 20:15:53.070912   59030 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:15:53.264797   59030 out.go:177] * Done! kubectl is now configured to use "pause-427304" cluster and "default" namespace by default
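
The kubectl line just above reports the minor-version skew between the local kubectl (1.31.0) and the cluster (1.31.0); kubectl is only supported within one minor version of the apiserver, so a skew beyond that would draw a warning here instead of a clean "Done!". A sketch of the comparison, with version parsing simplified to major.minor:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor versions
    // of two "major.minor.patch" strings.
    func minorSkew(a, b string) (int, error) {
        ma, err := minor(a)
        if err != nil {
            return 0, err
        }
        mb, err := minor(b)
        if err != nil {
            return 0, err
        }
        if ma > mb {
            return ma - mb, nil
        }
        return mb - ma, nil
    }

    func minor(v string) (int, error) {
        parts := strings.Split(v, ".")
        if len(parts) < 2 {
            return 0, fmt.Errorf("bad version %q", v)
        }
        return strconv.Atoi(parts[1])
    }

    func main() {
        skew, _ := minorSkew("1.31.0", "1.31.0")
        fmt.Println("minor skew:", skew) // 0, as in the log
    }
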
	I0829 20:15:51.555553   59725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:15:51.555583   59725 machine.go:96] duration metric: took 6.654590562s to provisionDockerMachine
	I0829 20:15:51.555597   59725 start.go:293] postStartSetup for "kubernetes-upgrade-714305" (driver="kvm2")
	I0829 20:15:51.555611   59725 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:15:51.555657   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:15:51.556120   59725 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:15:51.556153   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:15:51.559200   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:51.559691   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:15:07 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:15:51.559720   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:51.559920   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:15:51.560133   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:15:51.560302   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:15:51.560480   59725 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/id_rsa Username:docker}
	I0829 20:15:51.649602   59725 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:15:51.655859   59725 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:15:51.655887   59725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:15:51.655945   59725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:15:51.656029   59725 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:15:51.656130   59725 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:15:51.668833   59725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:15:51.696536   59725 start.go:296] duration metric: took 140.925961ms for postStartSetup
	I0829 20:15:51.696581   59725 fix.go:56] duration metric: took 6.820748691s for fixHost
	I0829 20:15:51.696605   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:15:51.699473   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:51.699813   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:15:07 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:15:51.699846   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:51.700013   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:15:51.700224   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:15:51.700414   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:15:51.700568   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:15:51.700716   59725 main.go:141] libmachine: Using SSH client type: native
	I0829 20:15:51.700876   59725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0829 20:15:51.700888   59725 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:15:51.803638   59725 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724962551.759553742
	
	I0829 20:15:51.803666   59725 fix.go:216] guest clock: 1724962551.759553742
	I0829 20:15:51.803673   59725 fix.go:229] Guest: 2024-08-29 20:15:51.759553742 +0000 UTC Remote: 2024-08-29 20:15:51.696585802 +0000 UTC m=+21.497958032 (delta=62.96794ms)
	I0829 20:15:51.803693   59725 fix.go:200] guest clock delta is within tolerance: 62.96794ms
	I0829 20:15:51.803700   59725 start.go:83] releasing machines lock for "kubernetes-upgrade-714305", held for 6.927896353s
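
The clock check above runs date +%s.%N on the guest, converts the output to a timestamp, and compares it against the host-side reference; the 62.96794ms delta is inside tolerance, so no resync is needed before the machines lock is released. Parsing that output and computing the delta looks roughly like this (a sketch; the one-second tolerance is an assumption, not minikube's actual constant):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // parseDate turns `date +%s.%N` output into a time.Time (float64
    // parsing loses a little nanosecond precision, fine for a skew check).
    func parseDate(s string) (time.Time, error) {
        f, err := strconv.ParseFloat(s, 64)
        if err != nil {
            return time.Time{}, err
        }
        sec := int64(f)
        nsec := int64((f - float64(sec)) * 1e9)
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseDate("1724962551.759553742") // value from the log
        if err != nil {
            fmt.Println(err)
            return
        }
        remote := time.Date(2024, 8, 29, 20, 15, 51, 696585802, time.UTC)
        delta := guest.Sub(remote)
        const tolerance = time.Second // assumed bound, not minikube's constant
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
        }
    }
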
	I0829 20:15:51.803723   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:15:51.804020   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetIP
	I0829 20:15:51.807026   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:51.807379   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:15:07 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:15:51.807407   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:51.807565   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:15:51.808226   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:15:51.808416   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .DriverName
	I0829 20:15:51.808501   59725 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:15:51.808541   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:15:51.808658   59725 ssh_runner.go:195] Run: cat /version.json
	I0829 20:15:51.808682   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHHostname
	I0829 20:15:51.811343   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:51.811432   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:51.811710   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:15:07 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:15:51.811763   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:15:07 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:15:51.811802   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:51.811823   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:51.811991   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:15:51.812187   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:15:51.812217   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHPort
	I0829 20:15:51.812401   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHKeyPath
	I0829 20:15:51.812406   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:15:51.812542   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetSSHUsername
	I0829 20:15:51.812558   59725 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/id_rsa Username:docker}
	I0829 20:15:51.812676   59725 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/kubernetes-upgrade-714305/id_rsa Username:docker}
	I0829 20:15:51.888290   59725 ssh_runner.go:195] Run: systemctl --version
	I0829 20:15:51.915519   59725 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:15:52.083166   59725 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:15:52.091216   59725 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:15:52.091286   59725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:15:52.103220   59725 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
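
Before picking a CNI, the runner renames any bridge or podman configs under /etc/cni/net.d so they stop taking effect; here none are found. A rough local equivalent of that find/mv pass, using the same `.mk_disabled` suffix seen in the log (a sketch, not the ssh_runner-based original):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman CNI config files in dir by
    // appending ".mk_disabled", mirroring the find/mv pipeline in the log.
    func disableBridgeCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var renamed []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return renamed, err
                }
                renamed = append(renamed, src)
            }
        }
        return renamed, nil
    }

    func main() {
        renamed, err := disableBridgeCNI("/etc/cni/net.d")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        if len(renamed) == 0 {
            fmt.Println("no active bridge cni configs found - nothing to disable")
        }
        for _, f := range renamed {
            fmt.Println("disabled:", f)
        }
    }
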
	I0829 20:15:52.103251   59725 start.go:495] detecting cgroup driver to use...
	I0829 20:15:52.103319   59725 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:15:52.121864   59725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:15:52.137699   59725 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:15:52.137762   59725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:15:52.153489   59725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:15:52.170763   59725 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:15:52.320192   59725 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:15:52.465806   59725 docker.go:233] disabling docker service ...
	I0829 20:15:52.465881   59725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:15:52.483500   59725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:15:52.498835   59725 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:15:52.676738   59725 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:15:52.865925   59725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
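
Disabling cri-docker and docker above is an ordered series of systemctl calls: stop the socket and service first, then disable and mask so the change survives reboots. A condensed sketch of that sequence, assuming passwordless sudo on the target; failures on missing units are merely reported:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same order as the log: stop first, then make it persistent.
        steps := [][]string{
            {"systemctl", "stop", "-f", "docker.socket"},
            {"systemctl", "stop", "-f", "docker.service"},
            {"systemctl", "disable", "docker.socket"},
            {"systemctl", "mask", "docker.service"},
        }
        for _, s := range steps {
            if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
                // A unit may not exist on every image; report and continue.
                fmt.Printf("%v: %v (%s)\n", s, err, out)
            }
        }
    }
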
	I0829 20:15:52.893849   59725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:15:52.947610   59725 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:15:52.947676   59725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:15:52.960769   59725 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:15:52.960828   59725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:15:52.971820   59725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:15:52.985785   59725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:15:52.999621   59725 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:15:53.013717   59725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:15:53.029677   59725 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:15:53.045160   59725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:15:53.058430   59725 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:15:53.071069   59725 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 20:15:53.082056   59725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:15:53.263988   59725 ssh_runner.go:195] Run: sudo systemctl restart crio
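
The block above rewrites two keys in /etc/crio/crio.conf.d/02-crio.conf via sed (the pause image and the cgroup manager) before reloading systemd and restarting crio. A simplified Go version of that line-oriented rewrite, with the same drop-in path; the real flow shells out to sed over SSH rather than editing the file directly:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setKey rewrites every line assigning key in the crio drop-in, like
    // the `sed -i 's|^.*key = .*$|key = "value"|'` calls in the log.
    func setKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        if err := setKey(conf, "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
            fmt.Println(err)
        }
        if err := setKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
            fmt.Println(err)
        }
        // `systemctl daemon-reload` and `systemctl restart crio` follow,
        // as in the log.
    }
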
	I0829 20:15:53.731020   59725 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:15:53.731097   59725 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:15:53.737412   59725 start.go:563] Will wait 60s for crictl version
	I0829 20:15:53.737477   59725 ssh_runner.go:195] Run: which crictl
	I0829 20:15:53.742472   59725 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:15:53.784836   59725 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
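
The "Will wait 60s for crictl version" step polls the freshly restarted runtime until it answers. A sketch of such a retry loop, assuming crictl at the path used in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(60 * time.Second)
        for {
            out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
            if err == nil {
                fmt.Printf("%s", out) // e.g. RuntimeName: cri-o, RuntimeVersion: 1.29.1
                return
            }
            if time.Now().After(deadline) {
                fmt.Println("timed out waiting for crictl version:", err)
                return
            }
            time.Sleep(time.Second)
        }
    }
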
	I0829 20:15:53.784968   59725 ssh_runner.go:195] Run: crio --version
	I0829 20:15:53.816121   59725 ssh_runner.go:195] Run: crio --version
	I0829 20:15:53.851258   59725 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:15:52.387233   59477 out.go:235]   - Generating certificates and keys ...
	I0829 20:15:52.387356   59477 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:15:52.387462   59477 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:15:52.465871   59477 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 20:15:52.620454   59477 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 20:15:52.678686   59477 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 20:15:52.896409   59477 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 20:15:53.032691   59477 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 20:15:53.033062   59477 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [cert-options-323073 localhost] and IPs [192.168.83.47 127.0.0.1 ::1]
	I0829 20:15:53.172811   59477 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 20:15:53.173117   59477 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [cert-options-323073 localhost] and IPs [192.168.83.47 127.0.0.1 ::1]
	I0829 20:15:53.427857   59477 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 20:15:53.516583   59477 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 20:15:53.898002   59477 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 20:15:53.898301   59477 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:15:54.279900   59477 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:15:54.577468   59477 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 20:15:54.730516   59477 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:15:54.857980   59477 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:15:55.116562   59477 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:15:55.117596   59477 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:15:55.126142   59477 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:15:53.852699   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) Calling .GetIP
	I0829 20:15:53.855531   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:53.855884   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:3b:44", ip: ""} in network mk-kubernetes-upgrade-714305: {Iface:virbr1 ExpiryTime:2024-08-29 21:15:07 +0000 UTC Type:0 Mac:52:54:00:23:3b:44 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:kubernetes-upgrade-714305 Clientid:01:52:54:00:23:3b:44}
	I0829 20:15:53.855918   59725 main.go:141] libmachine: (kubernetes-upgrade-714305) DBG | domain kubernetes-upgrade-714305 has defined IP address 192.168.39.140 and MAC address 52:54:00:23:3b:44 in network mk-kubernetes-upgrade-714305
	I0829 20:15:53.856264   59725 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 20:15:53.860605   59725 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-714305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-714305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:15:53.860741   59725 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:15:53.860805   59725 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:15:53.918780   59725 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:15:53.918806   59725 crio.go:433] Images already preloaded, skipping extraction
	I0829 20:15:53.918872   59725 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:15:53.961819   59725 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:15:53.961849   59725 cache_images.go:84] Images are preloaded, skipping loading
	I0829 20:15:53.961858   59725 kubeadm.go:934] updating node { 192.168.39.140 8443 v1.31.0 crio true true} ...
	I0829 20:15:53.961978   59725 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-714305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-714305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:15:53.962077   59725 ssh_runner.go:195] Run: crio config
	I0829 20:15:54.016085   59725 cni.go:84] Creating CNI manager for ""
	I0829 20:15:54.016107   59725 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:15:54.016129   59725 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:15:54.016157   59725 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.140 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-714305 NodeName:kubernetes-upgrade-714305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:15:54.016316   59725 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-714305"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
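The kubeadm config above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, later copied to /var/tmp/minikube/kubeadm.yaml.new. A tiny stdlib-only helper that splits such a stream and lists the document kinds; the input path is an assumption for this sketch:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy
        if err != nil {
            fmt.Println(err)
            return
        }
        for i, doc := range strings.Split(string(data), "\n---\n") {
            kind := "unknown"
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind:") {
                    kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
                    break
                }
            }
            fmt.Printf("document %d: %s\n", i+1, kind)
        }
    }

Recent kubeadm releases can also sanity-check such a file directly with `kubeadm config validate --config <file>`.
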
	I0829 20:15:54.016386   59725 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:15:54.028670   59725 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:15:54.028764   59725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:15:54.040135   59725 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0829 20:15:54.063716   59725 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:15:54.083508   59725 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0829 20:15:54.112038   59725 ssh_runner.go:195] Run: grep 192.168.39.140	control-plane.minikube.internal$ /etc/hosts
	I0829 20:15:54.117382   59725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:15:54.274131   59725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:15:54.291174   59725 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305 for IP: 192.168.39.140
	I0829 20:15:54.291200   59725 certs.go:194] generating shared ca certs ...
	I0829 20:15:54.291221   59725 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:15:54.291367   59725 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:15:54.291424   59725 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:15:54.291440   59725 certs.go:256] generating profile certs ...
	I0829 20:15:54.291576   59725 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/client.key
	I0829 20:15:54.291647   59725 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/apiserver.key.53a13bde
	I0829 20:15:54.291698   59725 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/proxy-client.key
	I0829 20:15:54.291853   59725 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:15:54.291895   59725 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:15:54.291908   59725 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:15:54.291940   59725 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:15:54.291974   59725 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:15:54.292009   59725 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:15:54.292073   59725 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:15:54.292839   59725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:15:54.322788   59725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:15:54.356351   59725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:15:54.384807   59725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:15:54.411701   59725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0829 20:15:54.435787   59725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:15:54.461265   59725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:15:54.493163   59725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:15:54.518596   59725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:15:54.545685   59725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:15:54.571813   59725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:15:54.600356   59725 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:15:54.621004   59725 ssh_runner.go:195] Run: openssl version
	I0829 20:15:54.627715   59725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:15:54.639544   59725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:15:54.644996   59725 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:15:54.645041   59725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:15:54.651202   59725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:15:54.663699   59725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:15:54.675275   59725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:15:54.680137   59725 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:15:54.680194   59725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:15:54.686206   59725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:15:54.696065   59725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:15:54.708746   59725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:15:54.714134   59725 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:15:54.714184   59725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:15:54.720422   59725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
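
Each CA certificate above is installed twice: copied under /usr/share/ca-certificates, then symlinked into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0). A sketch of producing one such link by shelling out to openssl, mirroring the ln -fs calls in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCert symlinks pem into /etc/ssl/certs as <subject-hash>.0 so
    // OpenSSL-based clients can find it by hash.
    func installCert(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := "/etc/ssl/certs/" + hash + ".0"
        _ = os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
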
	I0829 20:15:54.731601   59725 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:15:54.736513   59725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:15:54.742646   59725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:15:54.748437   59725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:15:54.754375   59725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:15:54.760307   59725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:15:54.766101   59725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
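
The six `openssl x509 -checkend 86400` runs above each ask whether a certificate will still be valid 24 hours from now, so near-expired control-plane certs can be regenerated before kubeadm starts. The equivalent check in pure Go, standard library only:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, the test `openssl x509 -checkend <seconds>` performs.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
        } {
            soon, err := expiresWithin(p, 24*time.Hour)
            if err != nil {
                fmt.Println(err)
                continue
            }
            fmt.Printf("%s expires within 24h: %v\n", p, soon)
        }
    }
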
	I0829 20:15:54.772019   59725 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-714305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-714305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:15:54.772120   59725 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:15:54.772164   59725 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:15:54.811442   59725 cri.go:89] found id: "262cfa5441e80223ac15083c39d2ae6cf9fcf3bfdc3529f511d3c1d644896481"
	I0829 20:15:54.811466   59725 cri.go:89] found id: "8c7494e24795d12be457a4f6e2d8e328c8a9fbd393a4dd1f05bccef1f8830906"
	I0829 20:15:54.811471   59725 cri.go:89] found id: "43e87813da79efca3141b75f04586f5ddfd7529c690ee0fc6718ceb88ed79c02"
	I0829 20:15:54.811476   59725 cri.go:89] found id: "7f9545f04af49642850f679fc33fa10f744705ffb3df622985360374d23a0fbc"
	I0829 20:15:54.811495   59725 cri.go:89] found id: "520bd13d85244d6c73becdc2c9f79a405c3cf3f1399b0a1cd2fefdaab10335df"
	I0829 20:15:54.811499   59725 cri.go:89] found id: "feb04dfcf677b69c7944d54ff5a82340f203e2d83d573d32c9daa6b44286796f"
	I0829 20:15:54.811503   59725 cri.go:89] found id: "db1a7f19936e46981f95033a6bb93d9aa8077acb584f2b799fd04ffb9f02a9f2"
	I0829 20:15:54.811507   59725 cri.go:89] found id: "4effe6e868fdc78d1812be87825356899940838b4806bc44909bb83f9bcddb8d"
	I0829 20:15:54.811510   59725 cri.go:89] found id: ""
	I0829 20:15:54.811564   59725 ssh_runner.go:195] Run: sudo runc list -f json
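
StartCluster begins by enumerating the existing kube-system containers with a label-filtered `crictl ps -a --quiet`, which yields the eight IDs above (the final empty id is the trailing-newline artifact of splitting the output). A sketch of the same enumeration; sudo and the crictl flags match the logged command:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // --quiet prints one container ID per line; the label narrows the
        // listing to kube-system pods, as in the log.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        fmt.Printf("found %d kube-system containers\n", len(ids))
    }
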
	
	
	==> CRI-O <==
	Aug 29 20:16:05 kubernetes-upgrade-714305 crio[2345]: time="2024-08-29 20:16:05.778532328Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724962565778441992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c104666b-2cd7-47a2-83b8-bd1a9a2acfc0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:16:05 kubernetes-upgrade-714305 crio[2345]: time="2024-08-29 20:16:05.779425068Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d94f9da7-f7b9-4a15-887c-81dd99add449 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:16:05 kubernetes-upgrade-714305 crio[2345]: time="2024-08-29 20:16:05.779519793Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d94f9da7-f7b9-4a15-887c-81dd99add449 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:16:05 kubernetes-upgrade-714305 crio[2345]: time="2024-08-29 20:16:05.779891896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2521102d484df5bab414cd05c6aec319d870d6b1dd6a2af9b0781a30a5fb7fe9,PodSandboxId:646262202a68b12297179a3c19a204c532044941bd955a38fb79620e8d043eb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724962562080662760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10d159db-80b4-45b9-95e8-17baf27c094d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458f49079510f6b409a2d30eef1ed195d3414196930b30309bcbc4cac116f8b7,PodSandboxId:2d6cd20e4c8634fe865542580107adfcc31a3c3af531f33d213dc6ae51986c00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724962562018556833,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xqg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c11e0ab2-7c49-4129-bc9a-0a6c1a0f0db4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172f4841d1889c1dfc3b34845674feb199f96870f148e2f29b75ac59c26903f0,PodSandboxId:21df3935302e80ce929b11115a123845cc346e1692a9d04daa1c58faccf0a5d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724962561711296832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gkrdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff181af-1972-44f2-b931-ef456940a043,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f729d4f9245ee1f31590babd0e28f98c30ac9655cdf42f8717fa74d9ac5cc09f,PodSandboxId:cfe4d70cf3c98b16b66bab83efb8d27b8bb93f1431f70bbdd4e6fec9b0125ca9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724962561707009408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-cg5tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2b186e0-1e4e-4799-a732-8b4
8655c078b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f89612ef0204d1f494113410422bf805fde005d3b3254f4d9bda1f002d758678,PodSandboxId:5cb0463c999b07159f31f57b1333340bdcc2e073f4f9c3c87dabc3c6dcc3d6d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724962558153211867,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12f5f6ac40f07c7fe8634061cbb46ee4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e348e1896424cc97edada10b250707e21f898afbe0d0f7904487f37211194418,PodSandboxId:c9ea2c2147db63a62f8f295ffc441d44c1550ffee8cff40d0f745f5f4b7e607b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724962557984545170,Labels
:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33a9fa163f1170a805725975cfdb8dde,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e923d3e9ad170220d10771bc8b8a9df4ed95e1c35c6fdae5d3353ab755612b1,PodSandboxId:d3a2d8c305f7b8d832d03a3f6db5e5234761ab038434681f3fbadc870518c5bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724962555472347254,Labels:map[string]str
ing{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2c79e7272f7cd574285da1e6d86e39,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dacf45f4c2f938aadcf2ea4dec7c1a3ee420a330540f4a73eb123a74a45fdde,PodSandboxId:4f305d622396ae48f3474e04489ccea1108bdffcf02cd67ce3319150c82ac520,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724962555272966794,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efbf3217a0ebe469f4609fe2e695342,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:155341d527463dcba6d61a26912bbbe9170726deaf94af0666f8e5da9dbee20b,PodSandboxId:c9ea2c2147db63a62f8f295ffc441d44c1550ffee8cff40d0f745f5f4b7e607b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_CREATED,CreatedAt:1724962555113537227,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33a9fa163f1170a805725975cfdb8dde,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262cfa5441e80223ac15083c39d2ae6cf9fcf3bfdc3529f511d3c1d644896481,PodSandboxId:2cabf16f83d1578cde9f72bf904179569149157764a9831a42544bcaccecc9bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724962535103198366,Labels:map[string]string{io.kubernete
s.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gkrdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff181af-1972-44f2-b931-ef456940a043,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c7494e24795d12be457a4f6e2d8e328c8a9fbd393a4dd1f05bccef1f8830906,PodSandboxId:ee9f1c6dd8376f94b1178af461833422b21ceb669d130379e6ec427cb9a84e6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724962535010392400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-cg5tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2b186e0-1e4e-4799-a732-8b48655c078b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e87813da79efca3141b75f04586f5ddfd7529c690ee0fc6718ceb88ed79c02,PodSandboxId:b216896df20cbc940c39c03e4cbb1657f614bc253a7e6b756c41
d4d0b88e01e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724962534437002041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10d159db-80b4-45b9-95e8-17baf27c094d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f9545f04af49642850f679fc33fa10f744705ffb3df622985360374d23a0fbc,PodSandboxId:8851ae71f2d0ad49916103efeea823b3d705f5fc488353056ced98958cce4391,
Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724962534310097548,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xqg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c11e0ab2-7c49-4129-bc9a-0a6c1a0f0db4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:520bd13d85244d6c73becdc2c9f79a405c3cf3f1399b0a1cd2fefdaab10335df,PodSandboxId:31366f8c5046976223c7bc55483445be80ad1cb404a08188cb118a21a7bdf1b8,Metadata:&ContainerMetadata{Name:et
cd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724962523850872117,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2c79e7272f7cd574285da1e6d86e39,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feb04dfcf677b69c7944d54ff5a82340f203e2d83d573d32c9daa6b44286796f,PodSandboxId:69321c913bb3d404f693ad92dd2ca1e2ffae651c87d12ff25e4dd4d263e2c13e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&Imag
eSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724962523830342040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12f5f6ac40f07c7fe8634061cbb46ee4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4effe6e868fdc78d1812be87825356899940838b4806bc44909bb83f9bcddb8d,PodSandboxId:58795206186c94196d0aa3e012b6b4d7a7053f497eef49486faf384e7b5dbd36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&I
mageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724962523803669784,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efbf3217a0ebe469f4609fe2e695342,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d94f9da7-f7b9-4a15-887c-81dd99add449 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:16:05 kubernetes-upgrade-714305 crio[2345]: time="2024-08-29 20:16:05.826280443Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3b5a10a-d6b3-4d6f-869a-218fadeb8c19 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:16:05 kubernetes-upgrade-714305 crio[2345]: time="2024-08-29 20:16:05.826444315Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3b5a10a-d6b3-4d6f-869a-218fadeb8c19 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:16:05 kubernetes-upgrade-714305 crio[2345]: time="2024-08-29 20:16:05.827977481Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64ecf29e-05bc-46ce-be80-e28c834078b4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:16:05 kubernetes-upgrade-714305 crio[2345]: time="2024-08-29 20:16:05.828616516Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724962565828576095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64ecf29e-05bc-46ce-be80-e28c834078b4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:16:05 kubernetes-upgrade-714305 crio[2345]: time="2024-08-29 20:16:05.829252563Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67584a10-d8a8-4717-bf4a-1c8b230d4ea6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:16:05 kubernetes-upgrade-714305 crio[2345]: time="2024-08-29 20:16:05.829364947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67584a10-d8a8-4717-bf4a-1c8b230d4ea6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:16:05 kubernetes-upgrade-714305 crio[2345]: time="2024-08-29 20:16:05.829983376Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2521102d484df5bab414cd05c6aec319d870d6b1dd6a2af9b0781a30a5fb7fe9,PodSandboxId:646262202a68b12297179a3c19a204c532044941bd955a38fb79620e8d043eb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724962562080662760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10d159db-80b4-45b9-95e8-17baf27c094d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458f49079510f6b409a2d30eef1ed195d3414196930b30309bcbc4cac116f8b7,PodSandboxId:2d6cd20e4c8634fe865542580107adfcc31a3c3af531f33d213dc6ae51986c00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724962562018556833,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xqg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c11e0ab2-7c49-4129-bc9a-0a6c1a0f0db4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172f4841d1889c1dfc3b34845674feb199f96870f148e2f29b75ac59c26903f0,PodSandboxId:21df3935302e80ce929b11115a123845cc346e1692a9d04daa1c58faccf0a5d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724962561711296832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gkrdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff181af-1972-44f2-b931-ef456940a043,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f729d4f9245ee1f31590babd0e28f98c30ac9655cdf42f8717fa74d9ac5cc09f,PodSandboxId:cfe4d70cf3c98b16b66bab83efb8d27b8bb93f1431f70bbdd4e6fec9b0125ca9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724962561707009408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-cg5tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2b186e0-1e4e-4799-a732-8b4
8655c078b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f89612ef0204d1f494113410422bf805fde005d3b3254f4d9bda1f002d758678,PodSandboxId:5cb0463c999b07159f31f57b1333340bdcc2e073f4f9c3c87dabc3c6dcc3d6d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724962558153211867,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12f5f6ac40f07c7fe8634061cbb46ee4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e348e1896424cc97edada10b250707e21f898afbe0d0f7904487f37211194418,PodSandboxId:c9ea2c2147db63a62f8f295ffc441d44c1550ffee8cff40d0f745f5f4b7e607b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724962557984545170,Labels
:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33a9fa163f1170a805725975cfdb8dde,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e923d3e9ad170220d10771bc8b8a9df4ed95e1c35c6fdae5d3353ab755612b1,PodSandboxId:d3a2d8c305f7b8d832d03a3f6db5e5234761ab038434681f3fbadc870518c5bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724962555472347254,Labels:map[string]str
ing{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2c79e7272f7cd574285da1e6d86e39,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dacf45f4c2f938aadcf2ea4dec7c1a3ee420a330540f4a73eb123a74a45fdde,PodSandboxId:4f305d622396ae48f3474e04489ccea1108bdffcf02cd67ce3319150c82ac520,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724962555272966794,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efbf3217a0ebe469f4609fe2e695342,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:155341d527463dcba6d61a26912bbbe9170726deaf94af0666f8e5da9dbee20b,PodSandboxId:c9ea2c2147db63a62f8f295ffc441d44c1550ffee8cff40d0f745f5f4b7e607b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_CREATED,CreatedAt:1724962555113537227,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33a9fa163f1170a805725975cfdb8dde,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262cfa5441e80223ac15083c39d2ae6cf9fcf3bfdc3529f511d3c1d644896481,PodSandboxId:2cabf16f83d1578cde9f72bf904179569149157764a9831a42544bcaccecc9bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724962535103198366,Labels:map[string]string{io.kubernete
s.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gkrdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff181af-1972-44f2-b931-ef456940a043,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c7494e24795d12be457a4f6e2d8e328c8a9fbd393a4dd1f05bccef1f8830906,PodSandboxId:ee9f1c6dd8376f94b1178af461833422b21ceb669d130379e6ec427cb9a84e6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724962535010392400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-cg5tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2b186e0-1e4e-4799-a732-8b48655c078b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e87813da79efca3141b75f04586f5ddfd7529c690ee0fc6718ceb88ed79c02,PodSandboxId:b216896df20cbc940c39c03e4cbb1657f614bc253a7e6b756c41
d4d0b88e01e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724962534437002041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10d159db-80b4-45b9-95e8-17baf27c094d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f9545f04af49642850f679fc33fa10f744705ffb3df622985360374d23a0fbc,PodSandboxId:8851ae71f2d0ad49916103efeea823b3d705f5fc488353056ced98958cce4391,
Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724962534310097548,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5xqg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c11e0ab2-7c49-4129-bc9a-0a6c1a0f0db4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:520bd13d85244d6c73becdc2c9f79a405c3cf3f1399b0a1cd2fefdaab10335df,PodSandboxId:31366f8c5046976223c7bc55483445be80ad1cb404a08188cb118a21a7bdf1b8,Metadata:&ContainerMetadata{Name:et
cd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724962523850872117,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2c79e7272f7cd574285da1e6d86e39,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feb04dfcf677b69c7944d54ff5a82340f203e2d83d573d32c9daa6b44286796f,PodSandboxId:69321c913bb3d404f693ad92dd2ca1e2ffae651c87d12ff25e4dd4d263e2c13e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&Imag
eSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724962523830342040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12f5f6ac40f07c7fe8634061cbb46ee4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4effe6e868fdc78d1812be87825356899940838b4806bc44909bb83f9bcddb8d,PodSandboxId:58795206186c94196d0aa3e012b6b4d7a7053f497eef49486faf384e7b5dbd36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&I
mageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724962523803669784,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-714305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efbf3217a0ebe469f4609fe2e695342,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67584a10-d8a8-4717-bf4a-1c8b230d4ea6 name=/runtime.v1.RuntimeService/ListContainers
	[two further debug polling cycles at 20:16:05.879 and 20:16:05.921 omitted as duplicates: each repeated the Version, ImageFsInfo and ListContainers round-trip (request IDs b603183c…/07d724cd…/ee2d3f73… and 5460eafa…/30069678…/500c1ca1…) and returned the same runtime version (cri-o 1.29.1), the same image-fs usage (125204 bytes, 57 inodes on /var/lib/containers/storage/overlay-images), and a container list byte-identical to the ListContainersResponse above]
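	
	Side note: the Version, ImageFsInfo and ListContainers requests traced above are the standard CRI gRPC calls that the kubelet (and crictl) issue against the runtime socket. A minimal sketch of replaying the same three RPCs by hand from the node, assuming CRI-O's default socket path and that crictl is on the guest's PATH (both assumptions, though they match minikube's usual layout):
	
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # /runtime.v1.RuntimeService/Version
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # /runtime.v1.ImageService/ImageFsInfo
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # /runtime.v1.RuntimeService/ListContainers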
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2521102d484df       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       1                   646262202a68b       storage-provisioner
	458f49079510f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   3 seconds ago       Running             kube-proxy                1                   2d6cd20e4c863       kube-proxy-5xqg5
	172f4841d1889       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago       Running             coredns                   1                   21df3935302e8       coredns-6f6b679f8f-gkrdh
	f729d4f9245ee       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago       Running             coredns                   1                   cfe4d70cf3c98       coredns-6f6b679f8f-cg5tt
	f89612ef0204d       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   7 seconds ago       Running             kube-apiserver            1                   5cb0463c999b0       kube-apiserver-kubernetes-upgrade-714305
	e348e1896424c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   8 seconds ago       Running             kube-scheduler            2                   c9ea2c2147db6       kube-scheduler-kubernetes-upgrade-714305
	4e923d3e9ad17       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   10 seconds ago      Running             etcd                      1                   d3a2d8c305f7b       etcd-kubernetes-upgrade-714305
	7dacf45f4c2f9       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   10 seconds ago      Running             kube-controller-manager   1                   4f305d622396a       kube-controller-manager-kubernetes-upgrade-714305
	155341d527463       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   10 seconds ago      Created             kube-scheduler            1                   c9ea2c2147db6       kube-scheduler-kubernetes-upgrade-714305
	262cfa5441e80       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   30 seconds ago      Exited              coredns                   0                   2cabf16f83d15       coredns-6f6b679f8f-gkrdh
	8c7494e24795d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   30 seconds ago      Exited              coredns                   0                   ee9f1c6dd8376       coredns-6f6b679f8f-cg5tt
	43e87813da79e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   31 seconds ago      Exited              storage-provisioner       0                   b216896df20cb       storage-provisioner
	7f9545f04af49       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   31 seconds ago      Exited              kube-proxy                0                   8851ae71f2d0a       kube-proxy-5xqg5
	520bd13d85244       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   42 seconds ago      Exited              etcd                      0                   31366f8c50469       etcd-kubernetes-upgrade-714305
	feb04dfcf677b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   42 seconds ago      Exited              kube-apiserver            0                   69321c913bb3d       kube-apiserver-kubernetes-upgrade-714305
	4effe6e868fdc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   42 seconds ago      Exited              kube-controller-manager   0                   58795206186c9       kube-controller-manager-kubernetes-upgrade-714305
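	
	This table is the human-readable rendering of the ListContainersResponse dump above: every control-plane component restarted once across the upgrade, and the attempt-1 kube-scheduler (155341d527463) stalled in Created before attempt 2 replaced it. A sketch of how one might regenerate the table and inspect the stalled container (profile name taken from this test; commands assume the minikube guest has crictl, which is not part of the test's own tooling):
	
	  $ minikube ssh -p kubernetes-upgrade-714305 -- sudo crictl ps -a             # list all containers, including exited
	  $ minikube ssh -p kubernetes-upgrade-714305 -- sudo crictl inspect 155341d527463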
	
	
	==> coredns [172f4841d1889c1dfc3b34845674feb199f96870f148e2f29b75ac59c26903f0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [262cfa5441e80223ac15083c39d2ae6cf9fcf3bfdc3529f511d3c1d644896481] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[11100885]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Aug-2024 20:15:35.392) (total time: 10244ms):
	Trace[11100885]: [10.244123196s] [10.244123196s] END
	[INFO] plugin/kubernetes: Trace[1660997827]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Aug-2024 20:15:35.392) (total time: 10244ms):
	Trace[1660997827]: [10.244482861s] [10.244482861s] END
	[INFO] plugin/kubernetes: Trace[723284015]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Aug-2024 20:15:35.394) (total time: 10247ms):
	Trace[723284015]: [10.247544574s] [10.247544574s] END
	
	
	==> coredns [8c7494e24795d12be457a4f6e2d8e328c8a9fbd393a4dd1f05bccef1f8830906] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1005273845]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Aug-2024 20:15:35.396) (total time: 10242ms):
	Trace[1005273845]: [10.242053708s] [10.242053708s] END
	[INFO] plugin/kubernetes: Trace[839573942]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Aug-2024 20:15:35.396) (total time: 10242ms):
	Trace[839573942]: [10.242485059s] [10.242485059s] END
	[INFO] plugin/kubernetes: Trace[683495465]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Aug-2024 20:15:35.395) (total time: 10242ms):
	Trace[683495465]: [10.242930291s] [10.242930291s] END
	
	
	==> coredns [f729d4f9245ee1f31590babd0e28f98c30ac9655cdf42f8717fa74d9ac5cc09f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-714305
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-714305
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 20:15:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-714305
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 20:16:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 20:16:00 +0000   Thu, 29 Aug 2024 20:15:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 20:16:00 +0000   Thu, 29 Aug 2024 20:15:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 20:16:00 +0000   Thu, 29 Aug 2024 20:15:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 20:16:00 +0000   Thu, 29 Aug 2024 20:15:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.140
	  Hostname:    kubernetes-upgrade-714305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f5874967db64c108260aa95bcf30c19
	  System UUID:                8f587496-7db6-4c10-8260-aa95bcf30c19
	  Boot ID:                    4df5628c-26c6-48a0-99f7-551c37aeb18d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-cg5tt                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     32s
	  kube-system                 coredns-6f6b679f8f-gkrdh                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     32s
	  kube-system                 etcd-kubernetes-upgrade-714305                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         31s
	  kube-system                 kube-apiserver-kubernetes-upgrade-714305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-714305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-5xqg5                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-kubernetes-upgrade-714305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 31s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  43s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    43s (x8 over 44s)  kubelet          Node kubernetes-upgrade-714305 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x7 over 44s)  kubelet          Node kubernetes-upgrade-714305 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  43s (x8 over 44s)  kubelet          Node kubernetes-upgrade-714305 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           33s                node-controller  Node kubernetes-upgrade-714305 event: Registered Node kubernetes-upgrade-714305 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-714305 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-714305 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-714305 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-714305 event: Registered Node kubernetes-upgrade-714305 in Controller
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.287419] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.065568] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062773] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.231818] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.162346] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.359227] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +4.287608] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +0.058279] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.980145] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +6.946877] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[  +0.095392] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.094108] kauditd_printk_skb: 23 callbacks suppressed
	[ +17.823825] systemd-fstab-generator[2172]: Ignoring "noauto" option for root device
	[  +0.092955] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.063245] systemd-fstab-generator[2184]: Ignoring "noauto" option for root device
	[  +0.181291] systemd-fstab-generator[2198]: Ignoring "noauto" option for root device
	[  +0.184652] systemd-fstab-generator[2210]: Ignoring "noauto" option for root device
	[  +0.416861] systemd-fstab-generator[2325]: Ignoring "noauto" option for root device
	[  +1.015243] systemd-fstab-generator[2442]: Ignoring "noauto" option for root device
	[  +2.909351] systemd-fstab-generator[2914]: Ignoring "noauto" option for root device
	[  +0.342813] kauditd_printk_skb: 215 callbacks suppressed
	[Aug29 20:16] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.849851] systemd-fstab-generator[3469]: Ignoring "noauto" option for root device
	
	
	==> etcd [4e923d3e9ad170220d10771bc8b8a9df4ed95e1c35c6fdae5d3353ab755612b1] <==
	{"level":"info","ts":"2024-08-29T20:15:58.114176Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","added-peer-id":"d94bec2e0ded43ac","added-peer-peer-urls":["https://192.168.39.140:2380"]}
	{"level":"info","ts":"2024-08-29T20:15:58.114288Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:15:58.114329Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:15:58.120708Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T20:15:58.125154Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T20:15:58.125311Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-08-29T20:15:58.127588Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-08-29T20:15:58.125440Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d94bec2e0ded43ac","initial-advertise-peer-urls":["https://192.168.39.140:2380"],"listen-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.140:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T20:15:58.127681Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T20:15:59.086514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-29T20:15:59.086565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-29T20:15:59.086601Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgPreVoteResp from d94bec2e0ded43ac at term 2"}
	{"level":"info","ts":"2024-08-29T20:15:59.086616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became candidate at term 3"}
	{"level":"info","ts":"2024-08-29T20:15:59.086621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgVoteResp from d94bec2e0ded43ac at term 3"}
	{"level":"info","ts":"2024-08-29T20:15:59.086629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became leader at term 3"}
	{"level":"info","ts":"2024-08-29T20:15:59.086637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d94bec2e0ded43ac elected leader d94bec2e0ded43ac at term 3"}
	{"level":"info","ts":"2024-08-29T20:15:59.094764Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d94bec2e0ded43ac","local-member-attributes":"{Name:kubernetes-upgrade-714305 ClientURLs:[https://192.168.39.140:2379]}","request-path":"/0/members/d94bec2e0ded43ac/attributes","cluster-id":"e5cf977c4e262fb4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T20:15:59.094924Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T20:15:59.095293Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T20:15:59.096036Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T20:15:59.098910Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.140:2379"}
	{"level":"info","ts":"2024-08-29T20:15:59.098986Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T20:15:59.099026Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T20:15:59.101656Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T20:15:59.102444Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [520bd13d85244d6c73becdc2c9f79a405c3cf3f1399b0a1cd2fefdaab10335df] <==
	{"level":"info","ts":"2024-08-29T20:15:25.026705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became leader at term 2"}
	{"level":"info","ts":"2024-08-29T20:15:25.026812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d94bec2e0ded43ac elected leader d94bec2e0ded43ac at term 2"}
	{"level":"info","ts":"2024-08-29T20:15:25.028977Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:15:25.029832Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d94bec2e0ded43ac","local-member-attributes":"{Name:kubernetes-upgrade-714305 ClientURLs:[https://192.168.39.140:2379]}","request-path":"/0/members/d94bec2e0ded43ac/attributes","cluster-id":"e5cf977c4e262fb4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T20:15:25.029909Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T20:15:25.030602Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T20:15:25.031553Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:15:25.031710Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:15:25.031731Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:15:25.032525Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T20:15:25.032552Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T20:15:25.033014Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T20:15:25.034698Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T20:15:25.035276Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T20:15:25.035982Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.140:2379"}
	{"level":"info","ts":"2024-08-29T20:15:45.642728Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-29T20:15:45.642867Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-714305","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"]}
	{"level":"warn","ts":"2024-08-29T20:15:45.642984Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T20:15:45.643135Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T20:15:45.723450Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.140:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T20:15:45.723596Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.140:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-29T20:15:45.723683Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d94bec2e0ded43ac","current-leader-member-id":"d94bec2e0ded43ac"}
	{"level":"info","ts":"2024-08-29T20:15:45.726384Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-08-29T20:15:45.726644Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-08-29T20:15:45.726691Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-714305","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"]}
	
	
	==> kernel <==
	 20:16:06 up 1 min,  0 users,  load average: 1.34, 0.37, 0.13
	Linux kubernetes-upgrade-714305 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f89612ef0204d1f494113410422bf805fde005d3b3254f4d9bda1f002d758678] <==
	I0829 20:16:00.613748       1 policy_source.go:224] refreshing policies
	I0829 20:16:00.625810       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0829 20:16:00.648105       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0829 20:16:00.648185       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0829 20:16:00.648380       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0829 20:16:00.648811       1 shared_informer.go:320] Caches are synced for configmaps
	I0829 20:16:00.648987       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0829 20:16:00.649023       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0829 20:16:00.654014       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0829 20:16:00.660589       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0829 20:16:00.666719       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0829 20:16:00.682307       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0829 20:16:00.682417       1 aggregator.go:171] initial CRD sync complete...
	I0829 20:16:00.682444       1 autoregister_controller.go:144] Starting autoregister controller
	I0829 20:16:00.682571       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0829 20:16:00.682597       1 cache.go:39] Caches are synced for autoregister controller
	I0829 20:16:00.700894       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0829 20:16:01.557402       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0829 20:16:02.309753       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 20:16:03.108279       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0829 20:16:03.141739       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 20:16:03.206984       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 20:16:03.272993       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0829 20:16:03.286357       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0829 20:16:04.287034       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [feb04dfcf677b69c7944d54ff5a82340f203e2d83d573d32c9daa6b44286796f] <==
	I0829 20:15:28.310939       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 20:15:28.316064       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0829 20:15:28.417001       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0829 20:15:28.959976       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 20:15:28.985034       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0829 20:15:29.000174       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 20:15:33.667930       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0829 20:15:33.715008       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0829 20:15:45.637658       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0829 20:15:45.658597       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:15:45.658722       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:15:45.658777       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:15:45.658875       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:15:45.658927       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:15:45.659030       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:15:45.659093       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:15:45.659132       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:15:45.659319       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:15:45.664417       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0829 20:15:45.665224       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0829 20:15:45.666281       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:15:45.666760       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:15:45.666803       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:15:45.666895       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:15:45.666960       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4effe6e868fdc78d1812be87825356899940838b4806bc44909bb83f9bcddb8d] <==
	I0829 20:15:33.266835       1 shared_informer.go:320] Caches are synced for PV protection
	I0829 20:15:33.292735       1 shared_informer.go:320] Caches are synced for taint
	I0829 20:15:33.292878       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0829 20:15:33.293009       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-714305"
	I0829 20:15:33.293097       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0829 20:15:33.309378       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-714305"
	I0829 20:15:33.317319       1 shared_informer.go:320] Caches are synced for crt configmap
	I0829 20:15:33.365107       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0829 20:15:33.366569       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0829 20:15:33.380853       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 20:15:33.414900       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0829 20:15:33.415170       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-714305"
	I0829 20:15:33.416509       1 shared_informer.go:320] Caches are synced for endpoint
	I0829 20:15:33.422850       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 20:15:33.577926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-714305"
	I0829 20:15:33.815657       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 20:15:33.868432       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 20:15:33.868628       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0829 20:15:34.136875       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="461.243375ms"
	I0829 20:15:34.168983       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="32.034417ms"
	I0829 20:15:34.258559       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="89.498981ms"
	I0829 20:15:34.259090       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="316.47µs"
	I0829 20:15:35.736078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="46.05µs"
	I0829 20:15:35.752857       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="50.56µs"
	I0829 20:15:36.787767       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-714305"
	
	
	==> kube-controller-manager [7dacf45f4c2f938aadcf2ea4dec7c1a3ee420a330540f4a73eb123a74a45fdde] <==
	I0829 20:16:04.087572       1 shared_informer.go:320] Caches are synced for deployment
	I0829 20:16:04.096737       1 shared_informer.go:320] Caches are synced for HPA
	I0829 20:16:04.099655       1 shared_informer.go:320] Caches are synced for taint
	I0829 20:16:04.099902       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0829 20:16:04.100021       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-714305"
	I0829 20:16:04.100094       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0829 20:16:04.103045       1 shared_informer.go:320] Caches are synced for persistent volume
	I0829 20:16:04.105841       1 shared_informer.go:320] Caches are synced for endpoint
	I0829 20:16:04.111748       1 shared_informer.go:320] Caches are synced for job
	I0829 20:16:04.119676       1 shared_informer.go:320] Caches are synced for stateful set
	I0829 20:16:04.123629       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0829 20:16:04.123777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="71.834µs"
	I0829 20:16:04.127516       1 shared_informer.go:320] Caches are synced for cronjob
	I0829 20:16:04.136686       1 shared_informer.go:320] Caches are synced for service account
	I0829 20:16:04.142650       1 shared_informer.go:320] Caches are synced for namespace
	I0829 20:16:04.177965       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0829 20:16:04.178145       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-714305"
	I0829 20:16:04.205324       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 20:16:04.209112       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0829 20:16:04.228675       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 20:16:04.611180       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 20:16:04.680772       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 20:16:04.680802       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0829 20:16:05.243882       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="64.162087ms"
	I0829 20:16:05.244010       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="90.031µs"
	
	
	==> kube-proxy [458f49079510f6b409a2d30eef1ed195d3414196930b30309bcbc4cac116f8b7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 20:16:02.387538       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 20:16:02.400310       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.140"]
	E0829 20:16:02.400434       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 20:16:02.435765       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 20:16:02.435792       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 20:16:02.435812       1 server_linux.go:169] "Using iptables Proxier"
	I0829 20:16:02.438430       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 20:16:02.438868       1 server.go:483] "Version info" version="v1.31.0"
	I0829 20:16:02.438908       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 20:16:02.440104       1 config.go:197] "Starting service config controller"
	I0829 20:16:02.440304       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 20:16:02.440355       1 config.go:104] "Starting endpoint slice config controller"
	I0829 20:16:02.440409       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 20:16:02.440984       1 config.go:326] "Starting node config controller"
	I0829 20:16:02.441020       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 20:16:02.540801       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 20:16:02.540873       1 shared_informer.go:320] Caches are synced for service config
	I0829 20:16:02.541104       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7f9545f04af49642850f679fc33fa10f744705ffb3df622985360374d23a0fbc] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 20:15:34.867741       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 20:15:34.903930       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.140"]
	E0829 20:15:34.904095       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 20:15:35.088836       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 20:15:35.089729       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 20:15:35.089851       1 server_linux.go:169] "Using iptables Proxier"
	I0829 20:15:35.130518       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 20:15:35.131984       1 server.go:483] "Version info" version="v1.31.0"
	I0829 20:15:35.132407       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 20:15:35.151202       1 config.go:197] "Starting service config controller"
	I0829 20:15:35.157698       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 20:15:35.171236       1 config.go:326] "Starting node config controller"
	I0829 20:15:35.171357       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 20:15:35.154803       1 config.go:104] "Starting endpoint slice config controller"
	I0829 20:15:35.206799       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 20:15:35.207909       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 20:15:35.259280       1 shared_informer.go:320] Caches are synced for service config
	I0829 20:15:35.271577       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [155341d527463dcba6d61a26912bbbe9170726deaf94af0666f8e5da9dbee20b] <==
	
	
	==> kube-scheduler [e348e1896424cc97edada10b250707e21f898afbe0d0f7904487f37211194418] <==
	I0829 20:15:58.966683       1 serving.go:386] Generated self-signed cert in-memory
	W0829 20:16:00.596356       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0829 20:16:00.596443       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0829 20:16:00.596453       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0829 20:16:00.596541       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0829 20:16:00.632346       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0829 20:16:00.632422       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 20:16:00.636950       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0829 20:16:00.637177       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0829 20:16:00.637272       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 20:16:00.637338       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0829 20:16:00.737619       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 20:15:57 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:15:57.734669    2921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9efbf3217a0ebe469f4609fe2e695342-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-714305\" (UID: \"9efbf3217a0ebe469f4609fe2e695342\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-714305"
	Aug 29 20:15:57 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:15:57.734689    2921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9efbf3217a0ebe469f4609fe2e695342-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-714305\" (UID: \"9efbf3217a0ebe469f4609fe2e695342\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-714305"
	Aug 29 20:15:57 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:15:57.734705    2921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9efbf3217a0ebe469f4609fe2e695342-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-714305\" (UID: \"9efbf3217a0ebe469f4609fe2e695342\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-714305"
	Aug 29 20:15:57 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:15:57.752918    2921 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-714305"
	Aug 29 20:15:57 kubernetes-upgrade-714305 kubelet[2921]: E0829 20:15:57.753797    2921 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.140:8443: connect: connection refused" node="kubernetes-upgrade-714305"
	Aug 29 20:15:57 kubernetes-upgrade-714305 kubelet[2921]: E0829 20:15:57.920951    2921 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-714305?timeout=10s\": dial tcp 192.168.39.140:8443: connect: connection refused" interval="800ms"
	Aug 29 20:15:57 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:15:57.930637    2921 scope.go:117] "RemoveContainer" containerID="520bd13d85244d6c73becdc2c9f79a405c3cf3f1399b0a1cd2fefdaab10335df"
	Aug 29 20:15:57 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:15:57.932870    2921 scope.go:117] "RemoveContainer" containerID="4effe6e868fdc78d1812be87825356899940838b4806bc44909bb83f9bcddb8d"
	Aug 29 20:15:57 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:15:57.934015    2921 scope.go:117] "RemoveContainer" containerID="155341d527463dcba6d61a26912bbbe9170726deaf94af0666f8e5da9dbee20b"
	Aug 29 20:15:58 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:15:58.154980    2921 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-714305"
	Aug 29 20:15:58 kubernetes-upgrade-714305 kubelet[2921]: E0829 20:15:58.156000    2921 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.140:8443: connect: connection refused" node="kubernetes-upgrade-714305"
	Aug 29 20:15:58 kubernetes-upgrade-714305 kubelet[2921]: W0829 20:15:58.350570    2921 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	Aug 29 20:15:58 kubernetes-upgrade-714305 kubelet[2921]: E0829 20:15:58.350657    2921 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.39.140:8443: connect: connection refused" logger="UnhandledError"
	Aug 29 20:15:58 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:15:58.957523    2921 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-714305"
	Aug 29 20:16:00 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:16:00.701087    2921 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-714305"
	Aug 29 20:16:00 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:16:00.701181    2921 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-714305"
	Aug 29 20:16:00 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:16:00.701203    2921 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 29 20:16:00 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:16:00.702073    2921 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 29 20:16:01 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:16:01.295933    2921 apiserver.go:52] "Watching apiserver"
	Aug 29 20:16:01 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:16:01.328247    2921 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 29 20:16:01 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:16:01.413391    2921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c11e0ab2-7c49-4129-bc9a-0a6c1a0f0db4-xtables-lock\") pod \"kube-proxy-5xqg5\" (UID: \"c11e0ab2-7c49-4129-bc9a-0a6c1a0f0db4\") " pod="kube-system/kube-proxy-5xqg5"
	Aug 29 20:16:01 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:16:01.413445    2921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c11e0ab2-7c49-4129-bc9a-0a6c1a0f0db4-lib-modules\") pod \"kube-proxy-5xqg5\" (UID: \"c11e0ab2-7c49-4129-bc9a-0a6c1a0f0db4\") " pod="kube-system/kube-proxy-5xqg5"
	Aug 29 20:16:01 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:16:01.413548    2921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/10d159db-80b4-45b9-95e8-17baf27c094d-tmp\") pod \"storage-provisioner\" (UID: \"10d159db-80b4-45b9-95e8-17baf27c094d\") " pod="kube-system/storage-provisioner"
	Aug 29 20:16:01 kubernetes-upgrade-714305 kubelet[2921]: E0829 20:16:01.509237    2921 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-714305\" already exists" pod="kube-system/etcd-kubernetes-upgrade-714305"
	Aug 29 20:16:05 kubernetes-upgrade-714305 kubelet[2921]: I0829 20:16:05.155237    2921 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [2521102d484df5bab414cd05c6aec319d870d6b1dd6a2af9b0781a30a5fb7fe9] <==
	I0829 20:16:02.274727       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 20:16:02.295628       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 20:16:02.295708       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 20:16:02.326341       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 20:16:02.327008       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-714305_ff3edf3f-3896-418b-9320-37f2e27bc1e5!
	I0829 20:16:02.327674       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6772f6c-4727-4e87-8f17-71067e2da013", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-714305_ff3edf3f-3896-418b-9320-37f2e27bc1e5 became leader
	I0829 20:16:02.428114       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-714305_ff3edf3f-3896-418b-9320-37f2e27bc1e5!
	
	
	==> storage-provisioner [43e87813da79efca3141b75f04586f5ddfd7529c690ee0fc6718ceb88ed79c02] <==
	I0829 20:15:34.744762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
** stderr ** 
	E0829 20:16:05.386914   60867 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19530-11185/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-714305 -n kubernetes-upgrade-714305
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-714305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-714305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-714305
--- FAIL: TestKubernetesUpgrade (370.04s)

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (274.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-032002 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-032002 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m34.3491842s)

-- stdout --
	* [old-k8s-version-032002] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Downloading driver docker-machine-driver-kvm2:
	* Starting "old-k8s-version-032002" primary control-plane node in "old-k8s-version-032002" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0829 20:16:08.572330   61858 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:16:08.572834   61858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:16:08.572846   61858 out.go:358] Setting ErrFile to fd 2...
	I0829 20:16:08.572853   61858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:16:08.573382   61858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:16:08.574409   61858 out.go:352] Setting JSON to false
	I0829 20:16:08.575315   61858 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7116,"bootTime":1724955453,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 20:16:08.575378   61858 start.go:139] virtualization: kvm guest
	I0829 20:16:08.577168   61858 out.go:177] * [old-k8s-version-032002] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 20:16:08.578832   61858 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 20:16:08.578877   61858 notify.go:220] Checking for updates...
	I0829 20:16:08.581194   61858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 20:16:08.582597   61858 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:16:08.583964   61858 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:16:08.585302   61858 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 20:16:08.586568   61858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 20:16:08.588218   61858 config.go:182] Loaded profile config "cert-expiration-621378": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:16:08.588330   61858 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 20:16:08.628167   61858 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 20:16:08.629600   61858 start.go:297] selected driver: kvm2
	I0829 20:16:08.629620   61858 start.go:901] validating driver "kvm2" against <nil>
	I0829 20:16:08.629635   61858 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 20:16:08.630640   61858 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:16:09.830039   61858 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 20:16:09.861509   61858 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
	W0829 20:16:09.861560   61858 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.33.1
	I0829 20:16:09.863542   61858 out.go:177] * Downloading driver docker-machine-driver-kvm2:
	I0829 20:16:09.864910   61858 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.33.1/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.33.1/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
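
	The upgrade sequence above is a simple version gate: run the installed plugin with its "version" subcommand, compare against the release this minikube was built for, and re-download on mismatch. A minimal sketch of that check, assuming the "version" subcommand output and the regex-based parsing (minikube's real install.go logic is more involved):

	package main

	import (
		"fmt"
		"os/exec"
		"regexp"
	)

	var verRE = regexp.MustCompile(`\d+\.\d+\.\d+`)

	// needsUpgrade reports whether the installed driver binary fails to run
	// or reports a version other than the one minikube wants.
	func needsUpgrade(driverPath, want string) bool {
		out, err := exec.Command(driverPath, "version").CombinedOutput()
		if err != nil {
			return true // missing or broken binary: fetch a fresh one
		}
		return verRE.FindString(string(out)) != want
	}

	func main() {
		// "version is 1.1.1, want 1.33.1" in the log would make this return true.
		if needsUpgrade("docker-machine-driver-kvm2", "1.33.1") {
			fmt.Println("driver out of date; downloading matching release")
		}
	}
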
	I0829 20:16:11.140867   61858 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 20:16:11.141181   61858 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:16:11.141255   61858 cni.go:84] Creating CNI manager for ""
	I0829 20:16:11.141274   61858 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:16:11.141290   61858 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
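
	The bridge-CNI choice is driven purely by the driver/runtime pair: a VM driver with a non-Docker runtime gets the bridge CNI and NetworkPlugin=cni. A reduced sketch of that decision, covering only the combination visible in this log (every other branch is deliberately elided, not minikube's full table):

	package main

	import "fmt"

	// chooseCNI reduces the cni.go decision to the single case in this log:
	// the kvm2 driver paired with the crio runtime gets the bridge CNI.
	func chooseCNI(driver, runtime string) (cni, plugin string) {
		if driver == "kvm2" && runtime == "crio" {
			return "bridge", "cni"
		}
		return "", "" // other combinations elided
	}

	func main() {
		cni, plugin := chooseCNI("kvm2", "crio")
		fmt.Printf("CNI=%s NetworkPlugin=%s\n", cni, plugin)
	}
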
	I0829 20:16:11.141381   61858 start.go:340] cluster config:
	{Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:16:11.141501   61858 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:16:11.143384   61858 out.go:177] * Starting "old-k8s-version-032002" primary control-plane node in "old-k8s-version-032002" cluster
	I0829 20:16:11.144522   61858 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 20:16:11.144562   61858 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0829 20:16:11.144571   61858 cache.go:56] Caching tarball of preloaded images
	I0829 20:16:11.144650   61858 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 20:16:11.144664   61858 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0829 20:16:11.144752   61858 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/config.json ...
	I0829 20:16:11.144769   61858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/config.json: {Name:mk9e46fc58052eda77f734f2fc6c86275a741969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:16:11.144917   61858 start.go:360] acquireMachinesLock for old-k8s-version-032002: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:16:11.144959   61858 start.go:364] duration metric: took 22.494µs to acquireMachinesLock for "old-k8s-version-032002"
	I0829 20:16:11.144979   61858 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:16:11.145070   61858 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 20:16:11.146795   61858 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 20:16:11.147012   61858 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:16:11.147066   61858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:16:11.163870   61858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35529
	I0829 20:16:11.164443   61858 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:16:11.165139   61858 main.go:141] libmachine: Using API Version  1
	I0829 20:16:11.165162   61858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:16:11.165685   61858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:16:11.165969   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:16:11.166640   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:16:11.166872   61858 start.go:159] libmachine.API.Create for "old-k8s-version-032002" (driver="kvm2")
	I0829 20:16:11.166904   61858 client.go:168] LocalClient.Create starting
	I0829 20:16:11.166934   61858 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem
	I0829 20:16:11.166976   61858 main.go:141] libmachine: Decoding PEM data...
	I0829 20:16:11.166996   61858 main.go:141] libmachine: Parsing certificate...
	I0829 20:16:11.167056   61858 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem
	I0829 20:16:11.167080   61858 main.go:141] libmachine: Decoding PEM data...
	I0829 20:16:11.167092   61858 main.go:141] libmachine: Parsing certificate...
	I0829 20:16:11.167113   61858 main.go:141] libmachine: Running pre-create checks...
	I0829 20:16:11.167123   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .PreCreateCheck
	I0829 20:16:11.167545   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetConfigRaw
	I0829 20:16:11.167983   61858 main.go:141] libmachine: Creating machine...
	I0829 20:16:11.167997   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .Create
	I0829 20:16:11.168141   61858 main.go:141] libmachine: (old-k8s-version-032002) Creating KVM machine...
	I0829 20:16:11.170130   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found existing default KVM network
	I0829 20:16:11.171373   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:11.171104   62590 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012db50}
	I0829 20:16:11.171423   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | created network xml: 
	I0829 20:16:11.171443   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | <network>
	I0829 20:16:11.171452   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG |   <name>mk-old-k8s-version-032002</name>
	I0829 20:16:11.171465   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG |   <dns enable='no'/>
	I0829 20:16:11.171495   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG |   
	I0829 20:16:11.171520   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0829 20:16:11.171544   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG |     <dhcp>
	I0829 20:16:11.171557   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0829 20:16:11.171570   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG |     </dhcp>
	I0829 20:16:11.171582   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG |   </ip>
	I0829 20:16:11.171594   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG |   
	I0829 20:16:11.171618   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | </network>
	I0829 20:16:11.171633   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | 
	I0829 20:16:11.177671   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | trying to create private KVM network mk-old-k8s-version-032002 192.168.39.0/24...
	I0829 20:16:11.255253   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | private KVM network mk-old-k8s-version-032002 192.168.39.0/24 created
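
	Creating the private network amounts to handing the XML above to libvirt: define it persistently, then start it. A minimal sketch using the libvirt-go bindings (which the kvm2 driver wraps), assuming qemu:///system is reachable; error handling is compressed to panics for brevity:

	package main

	import libvirt "libvirt.org/go/libvirt"

	const networkXML = `<network>
	  <name>mk-old-k8s-version-032002</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// Persist the network definition, then start it (creates the bridge
		// and the dnsmasq instance that hands out 192.168.39.x leases).
		net, err := conn.NetworkDefineXML(networkXML)
		if err != nil {
			panic(err)
		}
		defer net.Free()
		if err := net.Create(); err != nil {
			panic(err)
		}
	}
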
	I0829 20:16:11.255281   61858 main.go:141] libmachine: (old-k8s-version-032002) Setting up store path in /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002 ...
	I0829 20:16:11.255294   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:11.255235   62590 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:16:11.255321   61858 main.go:141] libmachine: (old-k8s-version-032002) Building disk image from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0829 20:16:11.255489   61858 main.go:141] libmachine: (old-k8s-version-032002) Downloading /home/jenkins/minikube-integration/19530-11185/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0829 20:16:11.524278   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:11.524168   62590 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa...
	I0829 20:16:11.603669   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:11.603569   62590 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/old-k8s-version-032002.rawdisk...
	I0829 20:16:11.603751   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Writing magic tar header
	I0829 20:16:11.603777   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Writing SSH key tar header
	I0829 20:16:11.603793   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:11.603758   62590 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002 ...
	I0829 20:16:11.603906   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002
	I0829 20:16:11.603942   61858 main.go:141] libmachine: (old-k8s-version-032002) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002 (perms=drwx------)
	I0829 20:16:11.603955   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines
	I0829 20:16:11.603986   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:16:11.603998   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185
	I0829 20:16:11.604013   61858 main.go:141] libmachine: (old-k8s-version-032002) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines (perms=drwxr-xr-x)
	I0829 20:16:11.604030   61858 main.go:141] libmachine: (old-k8s-version-032002) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube (perms=drwxr-xr-x)
	I0829 20:16:11.604045   61858 main.go:141] libmachine: (old-k8s-version-032002) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185 (perms=drwxrwxr-x)
	I0829 20:16:11.604061   61858 main.go:141] libmachine: (old-k8s-version-032002) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 20:16:11.604080   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 20:16:11.604099   61858 main.go:141] libmachine: (old-k8s-version-032002) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 20:16:11.604109   61858 main.go:141] libmachine: (old-k8s-version-032002) Creating domain...
	I0829 20:16:11.604119   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Checking permissions on dir: /home/jenkins
	I0829 20:16:11.604133   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Checking permissions on dir: /home
	I0829 20:16:11.604144   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Skipping /home - not owner
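
	The permission pass above walks from the machine directory upward, adding the owner search bit on each directory the current user owns and skipping the rest (hence "Skipping /home - not owner"). A simplified, Unix-only sketch of that walk; the ownership check via syscall.Stat_t is a platform assumption, not portable API:

	package main

	import (
		"os"
		"path/filepath"
		"syscall"
	)

	// fixPermissions adds the owner execute (search) bit on every ancestor
	// directory the current user owns, keeping the store path traversable.
	func fixPermissions(dir string) error {
		for ; dir != "/"; dir = filepath.Dir(dir) {
			info, err := os.Stat(dir)
			if err != nil {
				return err
			}
			if st, ok := info.Sys().(*syscall.Stat_t); !ok || int(st.Uid) != os.Getuid() {
				continue // "Skipping /home - not owner"
			}
			if err := os.Chmod(dir, info.Mode()|0o100); err != nil {
				return err
			}
		}
		return nil
	}

	func main() {
		_ = fixPermissions("/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002")
	}
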
	I0829 20:16:11.605442   61858 main.go:141] libmachine: (old-k8s-version-032002) define libvirt domain using xml: 
	I0829 20:16:11.605467   61858 main.go:141] libmachine: (old-k8s-version-032002) <domain type='kvm'>
	I0829 20:16:11.605477   61858 main.go:141] libmachine: (old-k8s-version-032002)   <name>old-k8s-version-032002</name>
	I0829 20:16:11.605492   61858 main.go:141] libmachine: (old-k8s-version-032002)   <memory unit='MiB'>2200</memory>
	I0829 20:16:11.605505   61858 main.go:141] libmachine: (old-k8s-version-032002)   <vcpu>2</vcpu>
	I0829 20:16:11.605512   61858 main.go:141] libmachine: (old-k8s-version-032002)   <features>
	I0829 20:16:11.605525   61858 main.go:141] libmachine: (old-k8s-version-032002)     <acpi/>
	I0829 20:16:11.605535   61858 main.go:141] libmachine: (old-k8s-version-032002)     <apic/>
	I0829 20:16:11.605544   61858 main.go:141] libmachine: (old-k8s-version-032002)     <pae/>
	I0829 20:16:11.605554   61858 main.go:141] libmachine: (old-k8s-version-032002)     
	I0829 20:16:11.605566   61858 main.go:141] libmachine: (old-k8s-version-032002)   </features>
	I0829 20:16:11.605578   61858 main.go:141] libmachine: (old-k8s-version-032002)   <cpu mode='host-passthrough'>
	I0829 20:16:11.605591   61858 main.go:141] libmachine: (old-k8s-version-032002)   
	I0829 20:16:11.605598   61858 main.go:141] libmachine: (old-k8s-version-032002)   </cpu>
	I0829 20:16:11.605609   61858 main.go:141] libmachine: (old-k8s-version-032002)   <os>
	I0829 20:16:11.605619   61858 main.go:141] libmachine: (old-k8s-version-032002)     <type>hvm</type>
	I0829 20:16:11.605627   61858 main.go:141] libmachine: (old-k8s-version-032002)     <boot dev='cdrom'/>
	I0829 20:16:11.605643   61858 main.go:141] libmachine: (old-k8s-version-032002)     <boot dev='hd'/>
	I0829 20:16:11.605678   61858 main.go:141] libmachine: (old-k8s-version-032002)     <bootmenu enable='no'/>
	I0829 20:16:11.605701   61858 main.go:141] libmachine: (old-k8s-version-032002)   </os>
	I0829 20:16:11.605715   61858 main.go:141] libmachine: (old-k8s-version-032002)   <devices>
	I0829 20:16:11.605727   61858 main.go:141] libmachine: (old-k8s-version-032002)     <disk type='file' device='cdrom'>
	I0829 20:16:11.605773   61858 main.go:141] libmachine: (old-k8s-version-032002)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/boot2docker.iso'/>
	I0829 20:16:11.605798   61858 main.go:141] libmachine: (old-k8s-version-032002)       <target dev='hdc' bus='scsi'/>
	I0829 20:16:11.605813   61858 main.go:141] libmachine: (old-k8s-version-032002)       <readonly/>
	I0829 20:16:11.605825   61858 main.go:141] libmachine: (old-k8s-version-032002)     </disk>
	I0829 20:16:11.605836   61858 main.go:141] libmachine: (old-k8s-version-032002)     <disk type='file' device='disk'>
	I0829 20:16:11.605855   61858 main.go:141] libmachine: (old-k8s-version-032002)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 20:16:11.605877   61858 main.go:141] libmachine: (old-k8s-version-032002)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/old-k8s-version-032002.rawdisk'/>
	I0829 20:16:11.605889   61858 main.go:141] libmachine: (old-k8s-version-032002)       <target dev='hda' bus='virtio'/>
	I0829 20:16:11.605899   61858 main.go:141] libmachine: (old-k8s-version-032002)     </disk>
	I0829 20:16:11.605914   61858 main.go:141] libmachine: (old-k8s-version-032002)     <interface type='network'>
	I0829 20:16:11.605942   61858 main.go:141] libmachine: (old-k8s-version-032002)       <source network='mk-old-k8s-version-032002'/>
	I0829 20:16:11.606041   61858 main.go:141] libmachine: (old-k8s-version-032002)       <model type='virtio'/>
	I0829 20:16:11.606058   61858 main.go:141] libmachine: (old-k8s-version-032002)     </interface>
	I0829 20:16:11.606072   61858 main.go:141] libmachine: (old-k8s-version-032002)     <interface type='network'>
	I0829 20:16:11.606084   61858 main.go:141] libmachine: (old-k8s-version-032002)       <source network='default'/>
	I0829 20:16:11.606094   61858 main.go:141] libmachine: (old-k8s-version-032002)       <model type='virtio'/>
	I0829 20:16:11.606101   61858 main.go:141] libmachine: (old-k8s-version-032002)     </interface>
	I0829 20:16:11.606125   61858 main.go:141] libmachine: (old-k8s-version-032002)     <serial type='pty'>
	I0829 20:16:11.606146   61858 main.go:141] libmachine: (old-k8s-version-032002)       <target port='0'/>
	I0829 20:16:11.606158   61858 main.go:141] libmachine: (old-k8s-version-032002)     </serial>
	I0829 20:16:11.606168   61858 main.go:141] libmachine: (old-k8s-version-032002)     <console type='pty'>
	I0829 20:16:11.606181   61858 main.go:141] libmachine: (old-k8s-version-032002)       <target type='serial' port='0'/>
	I0829 20:16:11.606191   61858 main.go:141] libmachine: (old-k8s-version-032002)     </console>
	I0829 20:16:11.606205   61858 main.go:141] libmachine: (old-k8s-version-032002)     <rng model='virtio'>
	I0829 20:16:11.606215   61858 main.go:141] libmachine: (old-k8s-version-032002)       <backend model='random'>/dev/random</backend>
	I0829 20:16:11.606224   61858 main.go:141] libmachine: (old-k8s-version-032002)     </rng>
	I0829 20:16:11.606236   61858 main.go:141] libmachine: (old-k8s-version-032002)     
	I0829 20:16:11.606245   61858 main.go:141] libmachine: (old-k8s-version-032002)     
	I0829 20:16:11.606253   61858 main.go:141] libmachine: (old-k8s-version-032002)   </devices>
	I0829 20:16:11.606261   61858 main.go:141] libmachine: (old-k8s-version-032002) </domain>
	I0829 20:16:11.606291   61858 main.go:141] libmachine: (old-k8s-version-032002) 
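
	With the domain XML assembled, bringing the VM up is two libvirt calls: persist the definition, then boot it. A minimal sketch with the same bindings as above, where domainXML stands in for the full document printed in the log:

	package kvmsketch

	import libvirt "libvirt.org/go/libvirt"

	// defineAndStart persists a domain definition and boots it, mirroring the
	// "define libvirt domain using xml" / "Creating domain..." steps above.
	func defineAndStart(conn *libvirt.Connect, domainXML string) error {
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()
		return dom.Create() // boots the defined VM
	}
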
	I0829 20:16:11.610796   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:96:4b:bd in network default
	I0829 20:16:11.611660   61858 main.go:141] libmachine: (old-k8s-version-032002) Ensuring networks are active...
	I0829 20:16:11.611681   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:11.612667   61858 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network default is active
	I0829 20:16:11.613083   61858 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network mk-old-k8s-version-032002 is active
	I0829 20:16:11.613614   61858 main.go:141] libmachine: (old-k8s-version-032002) Getting domain xml...
	I0829 20:16:11.614376   61858 main.go:141] libmachine: (old-k8s-version-032002) Creating domain...
	I0829 20:16:13.116331   61858 main.go:141] libmachine: (old-k8s-version-032002) Waiting to get IP...
	I0829 20:16:13.117236   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:13.117679   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:16:13.117740   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:13.117666   62590 retry.go:31] will retry after 198.951949ms: waiting for machine to come up
	I0829 20:16:13.318339   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:13.318964   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:16:13.318994   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:13.318900   62590 retry.go:31] will retry after 237.314194ms: waiting for machine to come up
	I0829 20:16:13.557489   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:13.558011   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:16:13.558042   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:13.557967   62590 retry.go:31] will retry after 440.132537ms: waiting for machine to come up
	I0829 20:16:13.999570   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:14.000080   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:16:14.000119   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:14.000057   62590 retry.go:31] will retry after 518.427431ms: waiting for machine to come up
	I0829 20:16:14.519658   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:14.520175   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:16:14.520202   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:14.520134   62590 retry.go:31] will retry after 543.654878ms: waiting for machine to come up
	I0829 20:16:15.066130   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:15.066739   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:16:15.066778   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:15.066714   62590 retry.go:31] will retry after 632.425588ms: waiting for machine to come up
	I0829 20:16:15.700481   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:15.701005   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:16:15.701046   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:15.700939   62590 retry.go:31] will retry after 865.615308ms: waiting for machine to come up
	I0829 20:16:16.568584   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:16.569004   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:16:16.569031   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:16.568959   62590 retry.go:31] will retry after 1.165821145s: waiting for machine to come up
	I0829 20:16:17.736470   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:17.736971   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:16:17.737000   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:17.736922   62590 retry.go:31] will retry after 1.764094552s: waiting for machine to come up
	I0829 20:16:19.503962   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:19.504422   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:16:19.504444   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:19.504402   62590 retry.go:31] will retry after 1.889654337s: waiting for machine to come up
	I0829 20:16:21.395973   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:21.396532   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:16:21.396583   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:21.396485   62590 retry.go:31] will retry after 2.725787985s: waiting for machine to come up
	I0829 20:16:24.125379   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:24.125829   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:16:24.125855   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:24.125786   62590 retry.go:31] will retry after 3.209715166s: waiting for machine to come up
	I0829 20:16:27.337363   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:27.337751   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:16:27.337778   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:27.337714   62590 retry.go:31] will retry after 3.442723049s: waiting for machine to come up
	I0829 20:16:30.784226   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:30.784600   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:16:30.784630   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:16:30.784551   62590 retry.go:31] will retry after 3.497524979s: waiting for machine to come up
	I0829 20:16:34.285704   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:34.286193   61858 main.go:141] libmachine: (old-k8s-version-032002) Found IP for machine: 192.168.39.116
	I0829 20:16:34.286220   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has current primary IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:34.286229   61858 main.go:141] libmachine: (old-k8s-version-032002) Reserving static IP address...
	I0829 20:16:34.286578   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"} in network mk-old-k8s-version-032002
	I0829 20:16:34.363024   61858 main.go:141] libmachine: (old-k8s-version-032002) Reserved static IP address: 192.168.39.116
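
	The "Waiting to get IP" loop above polls the network's DHCP leases with a jittered, growing delay (198ms, 237ms, 440ms, ... in the log) until an address appears for the VM's MAC. A hedged sketch of that retry pattern; lookupLease is an assumed callback standing in for the lease query:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls a lease lookup with a jittered, growing backoff until
	// it yields an address or the deadline passes.
	func waitForIP(lookupLease func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupLease(); err == nil && ip != "" {
				return ip, nil
			}
			// Grow the wait with jitter, like the retry.go intervals in the log.
			time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
			backoff = backoff * 3 / 2
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		ip, err := waitForIP(func() (string, error) { return "192.168.39.116", nil }, time.Minute)
		fmt.Println(ip, err)
	}
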
	I0829 20:16:34.363052   61858 main.go:141] libmachine: (old-k8s-version-032002) Waiting for SSH to be available...
	I0829 20:16:34.363062   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Getting to WaitForSSH function...
	I0829 20:16:34.365578   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:34.365943   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002
	I0829 20:16:34.365985   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find defined IP address of network mk-old-k8s-version-032002 interface with MAC address 52:54:00:a8:ca:96
	I0829 20:16:34.366166   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH client type: external
	I0829 20:16:34.366192   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa (-rw-------)
	I0829 20:16:34.366241   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:16:34.366262   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | About to run SSH command:
	I0829 20:16:34.366279   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | exit 0
	I0829 20:16:34.369995   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | SSH cmd err, output: exit status 255: 
	I0829 20:16:34.370023   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0829 20:16:34.370034   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | command : exit 0
	I0829 20:16:34.370047   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | err     : exit status 255
	I0829 20:16:34.370061   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | output  : 
	I0829 20:16:37.370414   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Getting to WaitForSSH function...
	I0829 20:16:37.374169   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:37.374679   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:37.374703   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:37.374860   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH client type: external
	I0829 20:16:37.374889   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa (-rw-------)
	I0829 20:16:37.374914   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:16:37.374929   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | About to run SSH command:
	I0829 20:16:37.374946   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | exit 0
	I0829 20:16:37.502457   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | SSH cmd err, output: <nil>: 
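
	WaitForSSH is nothing more than running "exit 0" through the system ssh binary with host-key checking disabled and retrying until it exits cleanly; the earlier exit status 255 simply means sshd inside the guest was not up yet. A sketch using the same flags visible in the log (the key path below is a placeholder):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady probes the guest the way the log does: run "exit 0" through
	// the system ssh; a clean exit means sshd is accepting connections.
	func sshReady(ip, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+ip, "exit 0")
		return cmd.Run() == nil
	}

	func main() {
		for !sshReady("192.168.39.116", "/path/to/id_rsa") {
			time.Sleep(3 * time.Second) // matches the ~3s gap between attempts above
		}
		fmt.Println("SSH available")
	}
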
	I0829 20:16:37.502780   61858 main.go:141] libmachine: (old-k8s-version-032002) KVM machine creation complete!
	I0829 20:16:37.503128   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetConfigRaw
	I0829 20:16:37.503650   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:16:37.503845   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:16:37.504011   61858 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 20:16:37.504026   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetState
	I0829 20:16:37.505112   61858 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 20:16:37.505128   61858 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 20:16:37.505133   61858 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 20:16:37.505141   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:16:37.507215   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:37.507511   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:37.507538   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:37.507688   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:16:37.507864   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:37.508009   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:37.508142   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:16:37.508296   61858 main.go:141] libmachine: Using SSH client type: native
	I0829 20:16:37.508500   61858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:16:37.508512   61858 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 20:16:37.617799   61858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:16:37.617827   61858 main.go:141] libmachine: Detecting the provisioner...
	I0829 20:16:37.617837   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:16:37.620591   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:37.621013   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:37.621053   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:37.621212   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:16:37.621382   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:37.621559   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:37.621731   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:16:37.621927   61858 main.go:141] libmachine: Using SSH client type: native
	I0829 20:16:37.622133   61858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:16:37.622149   61858 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 20:16:37.731073   61858 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 20:16:37.731177   61858 main.go:141] libmachine: found compatible host: buildroot
	I0829 20:16:37.731191   61858 main.go:141] libmachine: Provisioning with buildroot...
	I0829 20:16:37.731204   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:16:37.731444   61858 buildroot.go:166] provisioning hostname "old-k8s-version-032002"
	I0829 20:16:37.731476   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:16:37.731658   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:16:37.734710   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:37.735133   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:37.735162   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:37.735281   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:16:37.735472   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:37.735657   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:37.735802   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:16:37.735976   61858 main.go:141] libmachine: Using SSH client type: native
	I0829 20:16:37.736189   61858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:16:37.736208   61858 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-032002 && echo "old-k8s-version-032002" | sudo tee /etc/hostname
	I0829 20:16:37.862417   61858 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-032002
	
	I0829 20:16:37.862440   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:16:37.865152   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:37.865524   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:37.865555   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:37.865701   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:16:37.865908   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:37.866121   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:37.866321   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:16:37.866529   61858 main.go:141] libmachine: Using SSH client type: native
	I0829 20:16:37.866794   61858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:16:37.866816   61858 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-032002' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-032002/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-032002' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:16:37.987712   61858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:16:37.987772   61858 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:16:37.987804   61858 buildroot.go:174] setting up certificates
	I0829 20:16:37.987819   61858 provision.go:84] configureAuth start
	I0829 20:16:37.987835   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:16:37.988103   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:16:37.990810   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:37.991169   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:37.991203   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:37.991294   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:16:37.993281   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:37.993580   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:37.993617   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:37.993722   61858 provision.go:143] copyHostCerts
	I0829 20:16:37.993780   61858 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:16:37.993799   61858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:16:37.993889   61858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:16:37.994054   61858 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:16:37.994067   61858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:16:37.994108   61858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:16:37.994199   61858 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:16:37.994209   61858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:16:37.994260   61858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:16:37.994340   61858 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-032002 san=[127.0.0.1 192.168.39.116 localhost minikube old-k8s-version-032002]
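
	The server certificate is minted with the machine's identities as SANs: the loopback and guest IPs plus the hostname aliases listed above. A sketch of building such a template with crypto/x509; the self-signing at the end only keeps the example self-contained, whereas minikube signs against its CA key:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-032002"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-032002"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.116")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		fmt.Println(len(der), err)
	}
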
	I0829 20:16:38.255026   61858 provision.go:177] copyRemoteCerts
	I0829 20:16:38.255094   61858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:16:38.255115   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:16:38.258117   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.258477   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:38.258508   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.258668   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:16:38.258866   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:38.259009   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:16:38.259117   61858 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:16:38.344747   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 20:16:38.369409   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:16:38.394132   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 20:16:38.418123   61858 provision.go:87] duration metric: took 430.286361ms to configureAuth
	I0829 20:16:38.418156   61858 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:16:38.418317   61858 config.go:182] Loaded profile config "old-k8s-version-032002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 20:16:38.418394   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:16:38.421019   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.421337   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:38.421374   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.421600   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:16:38.421827   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:38.421976   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:38.422093   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:16:38.422238   61858 main.go:141] libmachine: Using SSH client type: native
	I0829 20:16:38.422404   61858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:16:38.422418   61858 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:16:38.648990   61858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:16:38.649012   61858 main.go:141] libmachine: Checking connection to Docker...
	I0829 20:16:38.649026   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetURL
	I0829 20:16:38.650243   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using libvirt version 6000000
	I0829 20:16:38.652502   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.652864   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:38.652884   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.653031   61858 main.go:141] libmachine: Docker is up and running!
	I0829 20:16:38.653045   61858 main.go:141] libmachine: Reticulating splines...
	I0829 20:16:38.653059   61858 client.go:171] duration metric: took 27.486147084s to LocalClient.Create
	I0829 20:16:38.653090   61858 start.go:167] duration metric: took 27.486219082s to libmachine.API.Create "old-k8s-version-032002"
	I0829 20:16:38.653105   61858 start.go:293] postStartSetup for "old-k8s-version-032002" (driver="kvm2")
	I0829 20:16:38.653119   61858 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:16:38.653143   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:16:38.653401   61858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:16:38.653425   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:16:38.655587   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.655934   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:38.655955   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.656102   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:16:38.656264   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:38.656380   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:16:38.656526   61858 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:16:38.741231   61858 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:16:38.745199   61858 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:16:38.745226   61858 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:16:38.745283   61858 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:16:38.745352   61858 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:16:38.745436   61858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:16:38.754642   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:16:38.779595   61858 start.go:296] duration metric: took 126.475668ms for postStartSetup
	I0829 20:16:38.779665   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetConfigRaw
	I0829 20:16:38.780244   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:16:38.782860   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.783296   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:38.783330   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.783546   61858 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/config.json ...
	I0829 20:16:38.783755   61858 start.go:128] duration metric: took 27.638674971s to createHost
	I0829 20:16:38.783784   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:16:38.786008   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.786296   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:38.786323   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.786445   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:16:38.786664   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:38.786820   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:38.786959   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:16:38.787142   61858 main.go:141] libmachine: Using SSH client type: native
	I0829 20:16:38.787301   61858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:16:38.787312   61858 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:16:38.903094   61858 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724962598.883732954
	
	I0829 20:16:38.903125   61858 fix.go:216] guest clock: 1724962598.883732954
	I0829 20:16:38.903138   61858 fix.go:229] Guest: 2024-08-29 20:16:38.883732954 +0000 UTC Remote: 2024-08-29 20:16:38.783771912 +0000 UTC m=+30.250334788 (delta=99.961042ms)
	I0829 20:16:38.903190   61858 fix.go:200] guest clock delta is within tolerance: 99.961042ms
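The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it with the host clock, and only resynchronize when the drift exceeds a tolerance; here the ~100ms delta passes. A rough sketch of that comparison, with the one-second threshold chosen for illustration (the real tolerance is defined in minikube's fix.go):

```go
package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock is within tolerance of
// the host clock. The 1s tolerance below is an assumption for this
// sketch; the logged run accepted a delta of ~99.96ms.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(99961042 * time.Nanosecond) // delta observed in the log
	delta, ok := clockDeltaOK(guest, host, time.Second)
	fmt.Printf("delta=%v, within tolerance: %v\n", delta, ok)
}
```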
	I0829 20:16:38.903197   61858 start.go:83] releasing machines lock for "old-k8s-version-032002", held for 27.758227721s
	I0829 20:16:38.903229   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:16:38.903535   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:16:38.906562   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.907029   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:38.907068   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.907214   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:16:38.907694   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:16:38.907933   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:16:38.908040   61858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:16:38.908085   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:16:38.908134   61858 ssh_runner.go:195] Run: cat /version.json
	I0829 20:16:38.908161   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:16:38.910921   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.911103   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.911309   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:38.911344   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.911497   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:16:38.911609   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:38.911641   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:38.911672   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:38.911828   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:16:38.911828   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:16:38.912017   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:16:38.912011   61858 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:16:38.912166   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:16:38.912298   61858 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:16:38.991920   61858 ssh_runner.go:195] Run: systemctl --version
	I0829 20:16:39.022246   61858 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:16:39.183832   61858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:16:39.189926   61858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:16:39.190006   61858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:16:39.207304   61858 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
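Before settling on its own CNI, minikube renames any bridge/podman CNI configs out of the way (the `find ... -exec mv {} {}.mk_disabled` run above) so they cannot conflict. A sketch of that rename pass, assuming the same /etc/cni/net.d layout:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Rename bridge/podman CNI configs out of the way, mirroring the
	// logged `find ... -exec mv {} {}.mk_disabled` invocation.
	matches, _ := filepath.Glob("/etc/cni/net.d/*")
	for _, m := range matches {
		base := filepath.Base(m)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already disabled on a previous run
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}
```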
	I0829 20:16:39.207330   61858 start.go:495] detecting cgroup driver to use...
	I0829 20:16:39.207403   61858 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:16:39.224020   61858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:16:39.240953   61858 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:16:39.241023   61858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:16:39.257422   61858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:16:39.273879   61858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:16:39.396026   61858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:16:39.544025   61858 docker.go:233] disabling docker service ...
	I0829 20:16:39.544100   61858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:16:39.559169   61858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:16:39.572296   61858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:16:39.683498   61858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:16:39.811528   61858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:16:39.826666   61858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:16:39.847698   61858 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 20:16:39.847763   61858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:16:39.858754   61858 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:16:39.858817   61858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:16:39.869416   61858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:16:39.879801   61858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
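The three sed invocations above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and re-add conmon_cgroup = "pod" after it. A sketch doing the same edits in-process with regexps instead of shelling out to sed:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// editCrioConf mirrors the logged sed edits on
// /etc/crio/crio.conf.d/02-crio.conf: force the pause image, switch the
// cgroup manager, and pin conmon_cgroup to "pod". A sketch only -- the
// real code runs sed over SSH.
func editCrioConf(path, pauseImage, cgroupMgr string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupMgr)))
	// Drop any existing conmon_cgroup line, then re-add it right after
	// cgroup_manager, matching the sed `/a` append in the log.
	out = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(out, nil)
	out = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAll(out, []byte("$1\nconmon_cgroup = \"pod\""))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := editCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.2", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```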
	I0829 20:16:39.890613   61858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:16:39.904143   61858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:16:39.916184   61858 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:16:39.916244   61858 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:16:39.930131   61858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
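The sysctl probe fails with status 255 because a fresh VM has no bridge netfilter module loaded, so minikube falls back to `modprobe br_netfilter` and then turns on IPv4 forwarding. A sketch of that probe-then-fallback chain, assuming passwordless sudo as on the test host:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Probe the bridge-netfilter sysctl; on a fresh VM this fails
	// because br_netfilter is not loaded yet.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe failed:", err)
			os.Exit(1)
		}
	}
	// Enable IPv4 forwarding, mirroring the logged
	// `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```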
	I0829 20:16:39.939520   61858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:16:40.051845   61858 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:16:40.146912   61858 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:16:40.147014   61858 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:16:40.152473   61858 start.go:563] Will wait 60s for crictl version
	I0829 20:16:40.152548   61858 ssh_runner.go:195] Run: which crictl
	I0829 20:16:40.156532   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:16:40.201494   61858 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:16:40.201605   61858 ssh_runner.go:195] Run: crio --version
	I0829 20:16:40.232274   61858 ssh_runner.go:195] Run: crio --version
	I0829 20:16:40.262918   61858 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 20:16:40.264141   61858 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:16:40.267388   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:40.267859   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:16:25 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:16:40.267885   61858 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:16:40.268119   61858 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 20:16:40.272415   61858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
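The bash one-liner above is an idempotent hosts-file update: filter out any stale host.minikube.internal line, append the fresh mapping, and copy the result back over /etc/hosts (the same pattern reappears later for control-plane.minikube.internal). A standalone sketch of that filter-and-append, demonstrated against a temp file rather than the real /etc/hosts:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts file so exactly one line maps
// hostname to ip, mirroring the logged `{ grep -v ...; echo ...; }` trick.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+hostname) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	f, _ := os.CreateTemp("", "hosts")
	defer os.Remove(f.Name())
	if err := ensureHostsEntry(f.Name(), "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
	out, _ := os.ReadFile(f.Name())
	fmt.Print(string(out))
}
```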
	I0829 20:16:40.286468   61858 kubeadm.go:883] updating cluster {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:16:40.286608   61858 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 20:16:40.286662   61858 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:16:40.318233   61858 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:16:40.318309   61858 ssh_runner.go:195] Run: which lz4
	I0829 20:16:40.322527   61858 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:16:40.327005   61858 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:16:40.327033   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0829 20:16:42.014087   61858 crio.go:462] duration metric: took 1.691594708s to copy over tarball
	I0829 20:16:42.014163   61858 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:16:44.644889   61858 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.630678862s)
	I0829 20:16:44.644923   61858 crio.go:469] duration metric: took 2.630808469s to extract the tarball
	I0829 20:16:44.644932   61858 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 20:16:44.687721   61858 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:16:44.733440   61858 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:16:44.733471   61858 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:16:44.733524   61858 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:16:44.733560   61858 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:16:44.733576   61858 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:16:44.733611   61858 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:16:44.733662   61858 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:16:44.733677   61858 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 20:16:44.733683   61858 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 20:16:44.733742   61858 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:16:44.735378   61858 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:16:44.735453   61858 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 20:16:44.735506   61858 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:16:44.735526   61858 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 20:16:44.735642   61858 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:16:44.735689   61858 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:16:44.735379   61858 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:16:44.735745   61858 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:16:44.906384   61858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 20:16:44.908263   61858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:16:44.908941   61858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 20:16:44.909589   61858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:16:44.916259   61858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:16:44.932762   61858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 20:16:45.001990   61858 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 20:16:45.002042   61858 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:16:45.002095   61858 ssh_runner.go:195] Run: which crictl
	I0829 20:16:45.023699   61858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:16:45.054600   61858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:16:45.061553   61858 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 20:16:45.061597   61858 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:16:45.061654   61858 ssh_runner.go:195] Run: which crictl
	I0829 20:16:45.069743   61858 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 20:16:45.069790   61858 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 20:16:45.069793   61858 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 20:16:45.069825   61858 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:16:45.069875   61858 ssh_runner.go:195] Run: which crictl
	I0829 20:16:45.069886   61858 ssh_runner.go:195] Run: which crictl
	I0829 20:16:45.069932   61858 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 20:16:45.069962   61858 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:16:45.070006   61858 ssh_runner.go:195] Run: which crictl
	I0829 20:16:45.096225   61858 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 20:16:45.096268   61858 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 20:16:45.096276   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:16:45.096309   61858 ssh_runner.go:195] Run: which crictl
	I0829 20:16:45.162049   61858 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 20:16:45.162091   61858 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:16:45.162141   61858 ssh_runner.go:195] Run: which crictl
	I0829 20:16:45.265008   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:16:45.265084   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:16:45.265120   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:16:45.265092   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:16:45.265176   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:16:45.265185   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:16:45.265251   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:16:45.408158   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:16:45.425218   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:16:45.425299   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:16:45.426780   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:16:45.426925   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:16:45.426989   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:16:45.427082   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:16:45.557994   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:16:45.586753   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:16:45.586802   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:16:45.586804   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:16:45.586879   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:16:45.586890   61858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:16:45.586939   61858 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 20:16:45.640236   61858 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 20:16:45.731704   61858 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 20:16:45.731787   61858 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 20:16:45.731810   61858 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 20:16:45.733767   61858 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 20:16:45.733789   61858 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 20:16:45.733838   61858 cache_images.go:92] duration metric: took 1.000353232s to LoadCachedImages
	W0829 20:16:45.733908   61858 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
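The image-cache pass above inspects what the runtime already has (`podman image inspect`), removes tags whose hashes do not match (`crictl rmi`), and then tries to load each image from the on-disk cache; since no cached tarballs exist yet, every load falls through and the images will be pulled instead. A sketch of the cache-lookup decision, with the path layout inferred from the log lines:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachePathFor maps an image ref to the on-disk cache layout seen in
// the log, e.g. registry.k8s.io/etcd:3.4.13-0 ->
// <base>/registry.k8s.io/etcd_3.4.13-0 (tag separator ":" becomes "_").
func cachePathFor(base, image string) string {
	return filepath.Join(base, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	base := "/home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64"
	for _, img := range []string{
		"registry.k8s.io/etcd:3.4.13-0",
		"registry.k8s.io/pause:3.2",
	} {
		p := cachePathFor(base, img)
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("cache miss for %s (%v) -- would fall back to pulling\n", img, err)
			continue
		}
		fmt.Printf("would load %s from %s\n", img, p)
	}
}
```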
	I0829 20:16:45.733923   61858 kubeadm.go:934] updating node { 192.168.39.116 8443 v1.20.0 crio true true} ...
	I0829 20:16:45.734089   61858 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-032002 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:16:45.734179   61858 ssh_runner.go:195] Run: crio config
	I0829 20:16:45.785286   61858 cni.go:84] Creating CNI manager for ""
	I0829 20:16:45.785310   61858 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:16:45.785332   61858 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:16:45.785358   61858 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-032002 NodeName:old-k8s-version-032002 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 20:16:45.785527   61858 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-032002"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
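The kubeadm YAML above is rendered from the option struct printed at kubeadm.go:181. A much-reduced sketch of that rendering for just the InitConfiguration block; the struct and field names here are invented for illustration and are not minikube's real template parameters:

```go
package main

import (
	"os"
	"text/template"
)

// initCfg holds only the handful of fields needed for this sketch; the
// real parameter struct in minikube is far larger.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	// Values taken from the logged config for this profile.
	_ = t.Execute(os.Stdout, initCfg{
		AdvertiseAddress: "192.168.39.116",
		BindPort:         8443,
		CRISocket:        "/var/run/crio/crio.sock",
		NodeName:         "old-k8s-version-032002",
		NodeIP:           "192.168.39.116",
	})
}
```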
	I0829 20:16:45.785599   61858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 20:16:45.796549   61858 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:16:45.796628   61858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:16:45.806867   61858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 20:16:45.826717   61858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:16:45.845554   61858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0829 20:16:45.864838   61858 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0829 20:16:45.869121   61858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:16:45.882235   61858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:16:46.006442   61858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:16:46.025218   61858 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002 for IP: 192.168.39.116
	I0829 20:16:46.025245   61858 certs.go:194] generating shared ca certs ...
	I0829 20:16:46.025265   61858 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:16:46.025447   61858 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:16:46.025506   61858 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:16:46.025522   61858 certs.go:256] generating profile certs ...
	I0829 20:16:46.025594   61858 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.key
	I0829 20:16:46.025615   61858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt with IP's: []
	I0829 20:16:46.158845   61858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt ...
	I0829 20:16:46.158871   61858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: {Name:mk8636951b4f1b4afe15e8646be77fcee688ce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:16:46.159030   61858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.key ...
	I0829 20:16:46.159045   61858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.key: {Name:mkc4f8e085f91f786914b5bff0dd2fcf761de666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:16:46.159119   61858 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key.a1a2aebb
	I0829 20:16:46.159134   61858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.crt.a1a2aebb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.116]
	I0829 20:16:46.215427   61858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.crt.a1a2aebb ...
	I0829 20:16:46.215458   61858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.crt.a1a2aebb: {Name:mk851abde9a4af84a2faaa41ce88a658fb8acf0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:16:46.215645   61858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key.a1a2aebb ...
	I0829 20:16:46.215666   61858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key.a1a2aebb: {Name:mk816638087e85751d5fde787565a10184e5aacf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:16:46.215763   61858 certs.go:381] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.crt.a1a2aebb -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.crt
	I0829 20:16:46.215856   61858 certs.go:385] copying /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key.a1a2aebb -> /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key
	I0829 20:16:46.215936   61858 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key
	I0829 20:16:46.215958   61858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.crt with IP's: []
	I0829 20:16:46.314206   61858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.crt ...
	I0829 20:16:46.314241   61858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.crt: {Name:mk6b9870e8610969c6c8d46a4f5367e2215d7a8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:16:46.314422   61858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key ...
	I0829 20:16:46.314440   61858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key: {Name:mkc0bea992e1ffddfce86115d24dca460d79bfdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:16:46.314664   61858 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:16:46.314706   61858 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:16:46.314716   61858 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:16:46.314736   61858 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:16:46.314779   61858 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:16:46.314812   61858 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:16:46.314848   61858 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:16:46.315530   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:16:46.346930   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:16:46.374205   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:16:46.402326   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:16:46.429740   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 20:16:46.454242   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:16:46.478427   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:16:46.503275   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:16:46.527415   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:16:46.551175   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:16:46.577218   61858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:16:46.603468   61858 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:16:46.622237   61858 ssh_runner.go:195] Run: openssl version
	I0829 20:16:46.628485   61858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:16:46.642039   61858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:16:46.647053   61858 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:16:46.647122   61858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:16:46.653333   61858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:16:46.665990   61858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:16:46.678390   61858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:16:46.684197   61858 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:16:46.684266   61858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:16:46.690942   61858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:16:46.702330   61858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:16:46.713626   61858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:16:46.718322   61858 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:16:46.718399   61858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:16:46.724379   61858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:16:46.735494   61858 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:16:46.740002   61858 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 20:16:46.740061   61858 kubeadm.go:392] StartCluster: {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:16:46.740138   61858 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:16:46.740185   61858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:16:46.783291   61858 cri.go:89] found id: ""
	I0829 20:16:46.783402   61858 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:16:46.794156   61858 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:16:46.804706   61858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:16:46.815006   61858 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:16:46.815029   61858 kubeadm.go:157] found existing configuration files:
	
	I0829 20:16:46.815096   61858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:16:46.824694   61858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:16:46.824785   61858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:16:46.835906   61858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:16:46.845671   61858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:16:46.845748   61858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:16:46.855979   61858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:16:46.867837   61858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:16:46.867923   61858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:16:46.881261   61858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:16:46.891124   61858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:16:46.891205   61858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
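The cleanup loop above greps each kubeconfig for the expected control-plane endpoint and removes any file that does not reference it; on this fresh node none of the files exist, so every grep exits 2 and the `rm -f` is a no-op. A sketch of one iteration of that loop:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfig removes a kubeconfig that does not point at the
// expected control-plane endpoint, mirroring the grep/rm loop above.
func cleanStaleKubeconfig(path, endpoint string) {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return // config already targets the right endpoint
	}
	_ = os.Remove(path) // ignore "not found", like `rm -f`
	fmt.Println("removed (or absent):", path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, p := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		cleanStaleKubeconfig(p, endpoint)
	}
}
```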
	I0829 20:16:46.902044   61858 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:16:47.035811   61858 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:16:47.035911   61858 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:16:47.185175   61858 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:16:47.185328   61858 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:16:47.185484   61858 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:16:47.387946   61858 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:16:47.432241   61858 out.go:235]   - Generating certificates and keys ...
	I0829 20:16:47.432366   61858 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:16:47.432480   61858 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:16:47.610782   61858 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 20:16:47.930124   61858 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 20:16:48.188217   61858 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 20:16:48.305126   61858 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 20:16:48.409595   61858 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 20:16:48.409870   61858 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-032002] and IPs [192.168.39.116 127.0.0.1 ::1]
	I0829 20:16:48.480999   61858 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 20:16:48.481210   61858 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-032002] and IPs [192.168.39.116 127.0.0.1 ::1]
	I0829 20:16:48.586294   61858 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 20:16:48.683901   61858 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 20:16:49.034324   61858 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 20:16:49.034717   61858 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:16:49.395020   61858 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:16:49.630662   61858 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:16:49.968435   61858 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:16:50.209620   61858 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:16:50.233391   61858 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:16:50.235129   61858 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:16:50.235218   61858 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:16:50.376623   61858 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:16:50.378573   61858 out.go:235]   - Booting up control plane ...
	I0829 20:16:50.378709   61858 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:16:50.392262   61858 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:16:50.393710   61858 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:16:50.394997   61858 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:16:50.400017   61858 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:17:30.395221   61858 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:17:30.395336   61858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:17:30.395683   61858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:17:35.395756   61858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:17:35.396021   61858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:17:45.395789   61858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:17:45.396074   61858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:18:05.395793   61858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:18:05.396081   61858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:18:45.397798   61858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:18:45.398073   61858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:18:45.398105   61858 kubeadm.go:310] 
	I0829 20:18:45.398163   61858 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:18:45.398214   61858 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:18:45.398223   61858 kubeadm.go:310] 
	I0829 20:18:45.398276   61858 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:18:45.398330   61858 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:18:45.398464   61858 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:18:45.398475   61858 kubeadm.go:310] 
	I0829 20:18:45.398639   61858 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:18:45.398699   61858 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:18:45.398748   61858 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:18:45.398755   61858 kubeadm.go:310] 
	I0829 20:18:45.398901   61858 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:18:45.399032   61858 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 20:18:45.399043   61858 kubeadm.go:310] 
	I0829 20:18:45.399193   61858 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:18:45.399321   61858 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:18:45.399431   61858 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:18:45.399531   61858 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:18:45.399543   61858 kubeadm.go:310] 
	I0829 20:18:45.399738   61858 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:18:45.399852   61858 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:18:45.399954   61858 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
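Note: the [kubelet-check] loop above is kubeadm polling the kubelet's local healthz endpoint; every probe is connection-refused, so the kubelet never came up at all. The same probe, plus the triage the error text itself recommends, can be run by hand on the node (port and unit name as shown in the log):

    curl -sSL http://localhost:10248/healthz      # a healthy kubelet answers "ok"
    systemctl status kubelet                      # is the service running at all?
    sudo journalctl -xeu kubelet | tail -n 50     # most recent kubelet errors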
	W0829 20:18:45.400132   61858 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-032002] and IPs [192.168.39.116 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-032002] and IPs [192.168.39.116 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
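Note: the hint in the error text about "required cgroups disabled" most often comes down to a cgroup-driver mismatch between the kubelet and the container runtime. A hedged check for this CRI-O setup: the kubelet config path is the one written in the log above, while the crio.conf locations are common defaults, not taken from this log:

    # Both drivers must agree, typically "systemd".
    grep -r cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
    grep cgroupDriver /var/lib/kubelet/config.yaml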
	
	I0829 20:18:45.400188   61858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:18:45.881994   61858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:18:45.896695   61858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:18:45.906551   61858 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:18:45.906578   61858 kubeadm.go:157] found existing configuration files:
	
	I0829 20:18:45.906634   61858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:18:45.916222   61858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:18:45.916280   61858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:18:45.925644   61858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:18:45.934454   61858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:18:45.934509   61858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:18:45.943755   61858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:18:45.952674   61858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:18:45.952739   61858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:18:45.961932   61858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:18:45.971069   61858 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:18:45.971126   61858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:18:45.980943   61858 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
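Note: on this retry kubeadm finds and reuses the certificates generated during the first attempt (the "Using existing ..." lines below), so the cert-generation phases are skipped. What is on disk can be confirmed directly; the directory is the certificateDir from the log, and apiserver.crt is kubeadm's standard file name there (an assumption, not shown in this log):

    sudo ls /var/lib/minikube/certs
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -subject -enddate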
	I0829 20:18:46.194600   61858 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:20:42.221138   61858 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:20:42.221231   61858 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 20:20:42.221328   61858 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:20:42.221404   61858 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:20:42.221496   61858 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:20:42.221617   61858 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:20:42.221734   61858 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:20:42.221808   61858 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:20:42.223727   61858 out.go:235]   - Generating certificates and keys ...
	I0829 20:20:42.223829   61858 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:20:42.223926   61858 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:20:42.224065   61858 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:20:42.224167   61858 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:20:42.224272   61858 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:20:42.224357   61858 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:20:42.224443   61858 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:20:42.224528   61858 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:20:42.224615   61858 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:20:42.224718   61858 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:20:42.224783   61858 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:20:42.224867   61858 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:20:42.224925   61858 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:20:42.224989   61858 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:20:42.225077   61858 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:20:42.225152   61858 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:20:42.225292   61858 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:20:42.225407   61858 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:20:42.225465   61858 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:20:42.225550   61858 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:20:42.227188   61858 out.go:235]   - Booting up control plane ...
	I0829 20:20:42.227293   61858 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:20:42.227384   61858 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:20:42.227486   61858 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:20:42.227628   61858 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:20:42.227857   61858 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:20:42.227935   61858 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:20:42.228022   61858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:20:42.228264   61858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:20:42.228359   61858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:20:42.228599   61858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:20:42.228692   61858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:20:42.228940   61858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:20:42.229027   61858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:20:42.229275   61858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:20:42.229369   61858 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:20:42.229609   61858 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:20:42.229620   61858 kubeadm.go:310] 
	I0829 20:20:42.229678   61858 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:20:42.229733   61858 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:20:42.229744   61858 kubeadm.go:310] 
	I0829 20:20:42.229789   61858 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:20:42.229837   61858 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:20:42.229981   61858 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:20:42.229991   61858 kubeadm.go:310] 
	I0829 20:20:42.230116   61858 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:20:42.230165   61858 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:20:42.230211   61858 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:20:42.230220   61858 kubeadm.go:310] 
	I0829 20:20:42.230356   61858 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:20:42.230470   61858 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 20:20:42.230480   61858 kubeadm.go:310] 
	I0829 20:20:42.230634   61858 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:20:42.230757   61858 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:20:42.230865   61858 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:20:42.230965   61858 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:20:42.231035   61858 kubeadm.go:394] duration metric: took 3m55.490976607s to StartCluster
	I0829 20:20:42.231088   61858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:20:42.231155   61858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:20:42.231222   61858 kubeadm.go:310] 
	I0829 20:20:42.282749   61858 cri.go:89] found id: ""
	I0829 20:20:42.282776   61858 logs.go:276] 0 containers: []
	W0829 20:20:42.282784   61858 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:20:42.282792   61858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:20:42.282857   61858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:20:42.318421   61858 cri.go:89] found id: ""
	I0829 20:20:42.318449   61858 logs.go:276] 0 containers: []
	W0829 20:20:42.318459   61858 logs.go:278] No container was found matching "etcd"
	I0829 20:20:42.318466   61858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:20:42.318526   61858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:20:42.360336   61858 cri.go:89] found id: ""
	I0829 20:20:42.360360   61858 logs.go:276] 0 containers: []
	W0829 20:20:42.360370   61858 logs.go:278] No container was found matching "coredns"
	I0829 20:20:42.360377   61858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:20:42.360436   61858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:20:42.397515   61858 cri.go:89] found id: ""
	I0829 20:20:42.397539   61858 logs.go:276] 0 containers: []
	W0829 20:20:42.397549   61858 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:20:42.397556   61858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:20:42.397625   61858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:20:42.436073   61858 cri.go:89] found id: ""
	I0829 20:20:42.436102   61858 logs.go:276] 0 containers: []
	W0829 20:20:42.436109   61858 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:20:42.436117   61858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:20:42.436177   61858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:20:42.470730   61858 cri.go:89] found id: ""
	I0829 20:20:42.470758   61858 logs.go:276] 0 containers: []
	W0829 20:20:42.470767   61858 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:20:42.470774   61858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:20:42.470824   61858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:20:42.504850   61858 cri.go:89] found id: ""
	I0829 20:20:42.504872   61858 logs.go:276] 0 containers: []
	W0829 20:20:42.504880   61858 logs.go:278] No container was found matching "kindnet"
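Note: every scan above comes back empty; the runtime never started a single control-plane or CNI container, which is consistent with a kubelet that never became healthy. The log's own triage advice, spelled out as commands (CONTAINERID is a placeholder for an ID from the first command):

    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID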
	I0829 20:20:42.504889   61858 logs.go:123] Gathering logs for kubelet ...
	I0829 20:20:42.504900   61858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:20:42.559407   61858 logs.go:123] Gathering logs for dmesg ...
	I0829 20:20:42.559442   61858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:20:42.573555   61858 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:20:42.573581   61858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:20:42.699037   61858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:20:42.699061   61858 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:20:42.699072   61858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:20:42.817420   61858 logs.go:123] Gathering logs for container status ...
	I0829 20:20:42.817452   61858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
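Note: before giving up, minikube sweeps the node for evidence: kubelet and CRI-O journals, kernel warnings, a node description (which fails below, since the apiserver on localhost:8443 is down), and container status. The same sweep, condensed from the Run: lines above:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a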
	W0829 20:20:42.865006   61858 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 20:20:42.865078   61858 out.go:270] * 
	W0829 20:20:42.865135   61858 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 20:20:42.865160   61858 out.go:270] * 
	W0829 20:20:42.866199   61858 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 20:20:42.869847   61858 out.go:201] 
	W0829 20:20:42.871109   61858 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 20:20:42.871143   61858 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 20:20:42.871160   61858 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 20:20:42.872689   61858 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube (first start). args "out/minikube-linux-amd64 start -p old-k8s-version-032002 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002: exit status 6 (313.047352ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 20:20:43.233626   66263 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-032002" does not appear in /home/jenkins/minikube-integration/19530-11185/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-032002" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (274.72s)
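
The repeated [kubelet-check] lines above show kubeadm probing the kubelet's health endpoint on localhost:10248 and getting "connection refused", meaning nothing is listening there. Below is a minimal standalone Go sketch (not part of the minikube test suite) of that same probe, useful for confirming kubelet liveness on the node before re-running the test; the 5-second timeout is an arbitrary choice, not taken from kubeadm.

	// healthz_probe.go - illustrative sketch of the GET that kubeadm's
	// [kubelet-check] phase performs against the kubelet health endpoint.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second} // timeout is an assumption
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// On the failing node this prints: connect: connection refused
			fmt.Println("kubelet not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("kubelet healthz:", resp.Status)
	}

If the probe fails, 'journalctl -xeu kubelet' (as the log itself suggests) is the next place to look.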

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (138.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-397724 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-397724 --alsologtostderr -v=3: exit status 82 (2m0.511945505s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-397724"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 20:18:32.769037   64895 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:18:32.769175   64895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:18:32.769186   64895 out.go:358] Setting ErrFile to fd 2...
	I0829 20:18:32.769192   64895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:18:32.769390   64895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:18:32.769622   64895 out.go:352] Setting JSON to false
	I0829 20:18:32.769711   64895 mustload.go:65] Loading cluster: no-preload-397724
	I0829 20:18:32.770030   64895 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:18:32.770113   64895 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/config.json ...
	I0829 20:18:32.770290   64895 mustload.go:65] Loading cluster: no-preload-397724
	I0829 20:18:32.770415   64895 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:18:32.770447   64895 stop.go:39] StopHost: no-preload-397724
	I0829 20:18:32.770878   64895 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:18:32.770936   64895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:18:32.786452   64895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I0829 20:18:32.786914   64895 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:18:32.787551   64895 main.go:141] libmachine: Using API Version  1
	I0829 20:18:32.787596   64895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:18:32.788016   64895 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:18:32.791440   64895 out.go:177] * Stopping node "no-preload-397724"  ...
	I0829 20:18:32.792897   64895 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0829 20:18:32.792943   64895 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:18:32.793169   64895 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0829 20:18:32.793199   64895 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:18:32.796334   64895 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:18:32.796767   64895 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:16:54 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:18:32.796794   64895 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:18:32.796939   64895 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:18:32.797112   64895 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:18:32.797248   64895 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:18:32.797426   64895 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:18:32.898580   64895 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0829 20:18:32.964680   64895 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0829 20:18:33.025425   64895 main.go:141] libmachine: Stopping "no-preload-397724"...
	I0829 20:18:33.025457   64895 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:18:33.027449   64895 main.go:141] libmachine: (no-preload-397724) Calling .Stop
	I0829 20:18:33.031398   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 0/120
	I0829 20:18:34.033280   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 1/120
	I0829 20:18:35.035375   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 2/120
	I0829 20:18:36.037001   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 3/120
	I0829 20:18:37.038309   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 4/120
	I0829 20:18:38.039970   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 5/120
	I0829 20:18:39.042039   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 6/120
	I0829 20:18:40.043588   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 7/120
	I0829 20:18:41.045092   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 8/120
	I0829 20:18:42.046673   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 9/120
	I0829 20:18:43.049034   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 10/120
	I0829 20:18:44.050348   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 11/120
	I0829 20:18:45.051856   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 12/120
	I0829 20:18:46.053454   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 13/120
	I0829 20:18:47.055111   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 14/120
	I0829 20:18:48.056994   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 15/120
	I0829 20:18:49.058315   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 16/120
	I0829 20:18:50.059602   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 17/120
	I0829 20:18:51.060916   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 18/120
	I0829 20:18:52.062122   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 19/120
	I0829 20:18:53.063920   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 20/120
	I0829 20:18:54.066106   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 21/120
	I0829 20:18:55.067881   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 22/120
	I0829 20:18:56.069594   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 23/120
	I0829 20:18:57.071332   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 24/120
	I0829 20:18:58.073482   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 25/120
	I0829 20:18:59.074956   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 26/120
	I0829 20:19:00.077128   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 27/120
	I0829 20:19:01.078558   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 28/120
	I0829 20:19:02.079961   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 29/120
	I0829 20:19:03.081914   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 30/120
	I0829 20:19:04.083301   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 31/120
	I0829 20:19:05.085218   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 32/120
	I0829 20:19:06.086723   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 33/120
	I0829 20:19:07.088400   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 34/120
	I0829 20:19:08.090249   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 35/120
	I0829 20:19:09.091755   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 36/120
	I0829 20:19:10.093799   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 37/120
	I0829 20:19:11.095674   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 38/120
	I0829 20:19:12.097114   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 39/120
	I0829 20:19:13.099447   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 40/120
	I0829 20:19:14.101148   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 41/120
	I0829 20:19:15.102724   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 42/120
	I0829 20:19:16.105190   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 43/120
	I0829 20:19:17.106735   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 44/120
	I0829 20:19:18.108840   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 45/120
	I0829 20:19:19.110210   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 46/120
	I0829 20:19:20.111460   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 47/120
	I0829 20:19:21.112935   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 48/120
	I0829 20:19:22.114205   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 49/120
	I0829 20:19:23.116396   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 50/120
	I0829 20:19:24.117737   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 51/120
	I0829 20:19:25.119015   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 52/120
	I0829 20:19:26.120312   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 53/120
	I0829 20:19:27.121931   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 54/120
	I0829 20:19:28.124108   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 55/120
	I0829 20:19:29.125339   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 56/120
	I0829 20:19:30.126855   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 57/120
	I0829 20:19:31.129190   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 58/120
	I0829 20:19:32.130703   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 59/120
	I0829 20:19:33.132990   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 60/120
	I0829 20:19:34.134745   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 61/120
	I0829 20:19:35.136496   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 62/120
	I0829 20:19:36.138755   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 63/120
	I0829 20:19:37.140125   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 64/120
	I0829 20:19:38.142320   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 65/120
	I0829 20:19:39.143689   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 66/120
	I0829 20:19:40.146084   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 67/120
	I0829 20:19:41.147570   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 68/120
	I0829 20:19:42.148805   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 69/120
	I0829 20:19:43.151096   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 70/120
	I0829 20:19:44.152478   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 71/120
	I0829 20:19:45.153679   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 72/120
	I0829 20:19:46.155226   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 73/120
	I0829 20:19:47.157037   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 74/120
	I0829 20:19:48.158592   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 75/120
	I0829 20:19:49.160170   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 76/120
	I0829 20:19:50.161688   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 77/120
	I0829 20:19:51.163230   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 78/120
	I0829 20:19:52.164697   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 79/120
	I0829 20:19:53.166869   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 80/120
	I0829 20:19:54.169195   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 81/120
	I0829 20:19:55.170762   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 82/120
	I0829 20:19:56.172290   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 83/120
	I0829 20:19:57.173690   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 84/120
	I0829 20:19:58.175767   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 85/120
	I0829 20:19:59.177089   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 86/120
	I0829 20:20:00.178757   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 87/120
	I0829 20:20:01.180228   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 88/120
	I0829 20:20:02.181692   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 89/120
	I0829 20:20:03.184036   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 90/120
	I0829 20:20:04.185495   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 91/120
	I0829 20:20:05.186943   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 92/120
	I0829 20:20:06.188549   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 93/120
	I0829 20:20:07.190099   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 94/120
	I0829 20:20:08.192274   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 95/120
	I0829 20:20:09.194023   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 96/120
	I0829 20:20:10.195453   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 97/120
	I0829 20:20:11.196984   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 98/120
	I0829 20:20:12.198401   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 99/120
	I0829 20:20:13.200871   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 100/120
	I0829 20:20:14.202484   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 101/120
	I0829 20:20:15.203816   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 102/120
	I0829 20:20:16.205152   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 103/120
	I0829 20:20:17.206407   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 104/120
	I0829 20:20:18.207798   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 105/120
	I0829 20:20:19.209432   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 106/120
	I0829 20:20:20.210830   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 107/120
	I0829 20:20:21.212207   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 108/120
	I0829 20:20:22.213669   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 109/120
	I0829 20:20:23.215969   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 110/120
	I0829 20:20:24.217437   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 111/120
	I0829 20:20:25.219110   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 112/120
	I0829 20:20:26.220425   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 113/120
	I0829 20:20:27.221958   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 114/120
	I0829 20:20:28.224046   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 115/120
	I0829 20:20:29.225594   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 116/120
	I0829 20:20:30.227189   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 117/120
	I0829 20:20:31.228856   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 118/120
	I0829 20:20:32.230388   64895 main.go:141] libmachine: (no-preload-397724) Waiting for machine to stop 119/120
	I0829 20:20:33.231619   64895 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0829 20:20:33.231688   64895 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0829 20:20:33.233290   64895 out.go:201] 
	W0829 20:20:33.234514   64895 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0829 20:20:33.234575   64895 out.go:270] *
	W0829 20:20:33.237677   64895 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 20:20:33.239063   64895 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube (first stop). args "out/minikube-linux-amd64 stop -p no-preload-397724 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397724 -n no-preload-397724
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397724 -n no-preload-397724: exit status 3 (18.442640824s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 20:20:51.682890   65838 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.214:22: connect: no route to host
	E0829 20:20:51.682915   65838 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.214:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-397724" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.96s)
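
The stop path here is a fixed 120-iteration, one-second wait loop, visible above as "Waiting for machine to stop 0/120" through "119/120"; when the VM still reports "Running" after the last poll, the command gives up with exit status 82, which together with the 18.4s post-mortem status probe accounts for the 138.96s total. A minimal Go sketch of such a bounded wait loop, where stopped() is a hypothetical stand-in for libmachine's state query, not its real API:

	// waitstop.go - illustrative sketch of a bounded stop-wait loop like the
	// one in the log above; stopped() is hypothetical.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func waitForStop(stopped func() bool) error {
		for i := 0; i < 120; i++ {
			if stopped() {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/120\n", i)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// A guest that never stops exercises the same path that ends in
		// exit status 82 above.
		if err := waitForStop(func() bool { return false }); err != nil {
			fmt.Println("stop err:", err)
		}
	}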

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-388383 --alsologtostderr -v=3
E0829 20:18:45.975450   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-388383 --alsologtostderr -v=3: exit status 82 (2m0.494356474s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-388383"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 20:18:42.918977   65072 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:18:42.919219   65072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:18:42.919229   65072 out.go:358] Setting ErrFile to fd 2...
	I0829 20:18:42.919234   65072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:18:42.919431   65072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:18:42.919728   65072 out.go:352] Setting JSON to false
	I0829 20:18:42.919808   65072 mustload.go:65] Loading cluster: embed-certs-388383
	I0829 20:18:42.920117   65072 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:18:42.920195   65072 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/config.json ...
	I0829 20:18:42.920369   65072 mustload.go:65] Loading cluster: embed-certs-388383
	I0829 20:18:42.920496   65072 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:18:42.920528   65072 stop.go:39] StopHost: embed-certs-388383
	I0829 20:18:42.920906   65072 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:18:42.920953   65072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:18:42.935597   65072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0829 20:18:42.936035   65072 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:18:42.936611   65072 main.go:141] libmachine: Using API Version  1
	I0829 20:18:42.936635   65072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:18:42.936950   65072 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:18:42.939218   65072 out.go:177] * Stopping node "embed-certs-388383"  ...
	I0829 20:18:42.940562   65072 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0829 20:18:42.940598   65072 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:18:42.940822   65072 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0829 20:18:42.940853   65072 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:18:42.943578   65072 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:18:42.943935   65072 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:17:23 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:18:42.943966   65072 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:18:42.944097   65072 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:18:42.944244   65072 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:18:42.944394   65072 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:18:42.944485   65072 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:18:43.052190   65072 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0829 20:18:43.112905   65072 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0829 20:18:43.172215   65072 main.go:141] libmachine: Stopping "embed-certs-388383"...
	I0829 20:18:43.172241   65072 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:18:43.173889   65072 main.go:141] libmachine: (embed-certs-388383) Calling .Stop
	I0829 20:18:43.177346   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 0/120
	I0829 20:18:44.178837   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 1/120
	I0829 20:18:45.180167   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 2/120
	I0829 20:18:46.181666   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 3/120
	I0829 20:18:47.183213   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 4/120
	I0829 20:18:48.184991   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 5/120
	I0829 20:18:49.186349   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 6/120
	I0829 20:18:50.187824   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 7/120
	I0829 20:18:51.188990   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 8/120
	I0829 20:18:52.190452   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 9/120
	I0829 20:18:53.192647   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 10/120
	I0829 20:18:54.193860   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 11/120
	I0829 20:18:55.195517   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 12/120
	I0829 20:18:56.197179   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 13/120
	I0829 20:18:57.198756   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 14/120
	I0829 20:18:58.200762   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 15/120
	I0829 20:18:59.202262   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 16/120
	I0829 20:19:00.203659   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 17/120
	I0829 20:19:01.205106   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 18/120
	I0829 20:19:02.206562   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 19/120
	I0829 20:19:03.208546   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 20/120
	I0829 20:19:04.209808   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 21/120
	I0829 20:19:05.211309   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 22/120
	I0829 20:19:06.213073   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 23/120
	I0829 20:19:07.214445   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 24/120
	I0829 20:19:08.216304   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 25/120
	I0829 20:19:09.217714   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 26/120
	I0829 20:19:10.219134   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 27/120
	I0829 20:19:11.221124   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 28/120
	I0829 20:19:12.222431   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 29/120
	I0829 20:19:13.224598   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 30/120
	I0829 20:19:14.226423   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 31/120
	I0829 20:19:15.228000   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 32/120
	I0829 20:19:16.229299   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 33/120
	I0829 20:19:17.230796   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 34/120
	I0829 20:19:18.232943   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 35/120
	I0829 20:19:19.234381   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 36/120
	I0829 20:19:20.235559   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 37/120
	I0829 20:19:21.236879   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 38/120
	I0829 20:19:22.238281   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 39/120
	I0829 20:19:23.240411   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 40/120
	I0829 20:19:24.241648   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 41/120
	I0829 20:19:25.243054   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 42/120
	I0829 20:19:26.244364   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 43/120
	I0829 20:19:27.245766   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 44/120
	I0829 20:19:28.247659   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 45/120
	I0829 20:19:29.249150   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 46/120
	I0829 20:19:30.251139   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 47/120
	I0829 20:19:31.253250   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 48/120
	I0829 20:19:32.254671   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 49/120
	I0829 20:19:33.256892   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 50/120
	I0829 20:19:34.258383   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 51/120
	I0829 20:19:35.260439   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 52/120
	I0829 20:19:36.262165   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 53/120
	I0829 20:19:37.263545   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 54/120
	I0829 20:19:38.265524   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 55/120
	I0829 20:19:39.266921   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 56/120
	I0829 20:19:40.268503   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 57/120
	I0829 20:19:41.270010   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 58/120
	I0829 20:19:42.271246   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 59/120
	I0829 20:19:43.273259   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 60/120
	I0829 20:19:44.274625   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 61/120
	I0829 20:19:45.275878   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 62/120
	I0829 20:19:46.277368   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 63/120
	I0829 20:19:47.278683   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 64/120
	I0829 20:19:48.280902   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 65/120
	I0829 20:19:49.282349   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 66/120
	I0829 20:19:50.284014   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 67/120
	I0829 20:19:51.285524   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 68/120
	I0829 20:19:52.286965   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 69/120
	I0829 20:19:53.289062   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 70/120
	I0829 20:19:54.290541   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 71/120
	I0829 20:19:55.292122   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 72/120
	I0829 20:19:56.293420   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 73/120
	I0829 20:19:57.294608   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 74/120
	I0829 20:19:58.296677   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 75/120
	I0829 20:19:59.298134   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 76/120
	I0829 20:20:00.299577   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 77/120
	I0829 20:20:01.301028   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 78/120
	I0829 20:20:02.302463   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 79/120
	I0829 20:20:03.303788   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 80/120
	I0829 20:20:04.305210   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 81/120
	I0829 20:20:05.306613   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 82/120
	I0829 20:20:06.308052   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 83/120
	I0829 20:20:07.309331   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 84/120
	I0829 20:20:08.311357   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 85/120
	I0829 20:20:09.312636   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 86/120
	I0829 20:20:10.313893   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 87/120
	I0829 20:20:11.315198   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 88/120
	I0829 20:20:12.316545   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 89/120
	I0829 20:20:13.318769   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 90/120
	I0829 20:20:14.320936   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 91/120
	I0829 20:20:15.322388   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 92/120
	I0829 20:20:16.323704   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 93/120
	I0829 20:20:17.325064   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 94/120
	I0829 20:20:18.327251   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 95/120
	I0829 20:20:19.328581   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 96/120
	I0829 20:20:20.329922   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 97/120
	I0829 20:20:21.331381   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 98/120
	I0829 20:20:22.332708   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 99/120
	I0829 20:20:23.335191   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 100/120
	I0829 20:20:24.336492   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 101/120
	I0829 20:20:25.338045   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 102/120
	I0829 20:20:26.339564   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 103/120
	I0829 20:20:27.340968   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 104/120
	I0829 20:20:28.342890   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 105/120
	I0829 20:20:29.344267   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 106/120
	I0829 20:20:30.345804   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 107/120
	I0829 20:20:31.347238   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 108/120
	I0829 20:20:32.348565   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 109/120
	I0829 20:20:33.350356   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 110/120
	I0829 20:20:34.351702   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 111/120
	I0829 20:20:35.353275   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 112/120
	I0829 20:20:36.354522   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 113/120
	I0829 20:20:37.355436   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 114/120
	I0829 20:20:38.357021   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 115/120
	I0829 20:20:39.358547   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 116/120
	I0829 20:20:40.359955   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 117/120
	I0829 20:20:41.361389   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 118/120
	I0829 20:20:42.363268   65072 main.go:141] libmachine: (embed-certs-388383) Waiting for machine to stop 119/120
	I0829 20:20:43.364092   65072 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0829 20:20:43.364153   65072 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0829 20:20:43.366200   65072 out.go:201] 
	W0829 20:20:43.367454   65072 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0829 20:20:43.367470   65072 out.go:270] *
	W0829 20:20:43.370827   65072 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 20:20:43.372360   65072 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube (first stop). args "out/minikube-linux-amd64 stop -p embed-certs-388383 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-388383 -n embed-certs-388383
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-388383 -n embed-certs-388383: exit status 3 (18.548199412s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 20:21:01.922861   66353 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host
	E0829 20:21:01.922881   66353 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-388383" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.04s)
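
Both Stop failures end the same way: the stop loop times out (exit status 82) and the post-mortem `status --format={{.Host}}` probe then fails with exit status 3 because the half-stopped guest no longer answers SSH. A small sketch, assuming the binary path and profile name used throughout this report, of reading that exit code from Go; the code meanings in the comment simply restate what this report shows and are not an authoritative table:

	// statuscheck.go - sketch of running the post-mortem status probe and
	// reading its exit code, as helpers_test.go:239 does.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "embed-certs-388383", "-n", "embed-certs-388383")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Codes seen in this report: 3 = host unreachable over SSH,
			// 6 = kubeconfig endpoint missing, 82 = guest stop timeout,
			// 109 = kubelet never became healthy.
			fmt.Println("exit code:", exitErr.ExitCode())
		}
	}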

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-032002 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-032002 create -f testdata/busybox.yaml: exit status 1 (43.494885ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-032002" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-032002 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002: exit status 6 (234.443107ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 20:20:43.508776   66312 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-032002" does not appear in /home/jenkins/minikube-integration/19530-11185/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-032002" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002: exit status 6 (242.559106ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 20:20:43.752440   66455 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-032002" does not appear in /home/jenkins/minikube-integration/19530-11185/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-032002" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)
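The DeployApp failure above is collateral damage rather than a deploy bug: the profile's context has vanished from the kubeconfig, so kubectl exits before ever reaching a cluster. As a rough illustration, a pre-flight check along these lines would separate "context missing" from a genuine apply failure; this is a hedged sketch using k8s.io/client-go, not part of the test harness, and the hard-coded context name is simply the one from this run:

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig kubectl would use (the same file named in the error above).
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}
		// The status output above already hints at the fix: `minikube update-context`.
		if _, ok := cfg.Contexts["old-k8s-version-032002"]; !ok {
			fmt.Println("context not in kubeconfig; run `minikube update-context` before deploying")
		}
	}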

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (110.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-032002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-032002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m50.282663871s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-032002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-032002 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-032002 describe deploy/metrics-server -n kube-system: exit status 1 (44.115128ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-032002" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-032002 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002: exit status 6 (219.978864ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 20:22:34.301177   67472 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-032002" does not appear in /home/jenkins/minikube-integration/19530-11185/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-032002" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (110.55s)
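The enable step here dies inside the VM: the addon callback applies the metrics-server manifests with the bundled kubectl, and the apiserver on localhost:8443 is refusing connections. A bounded retry around that same apply is one plausible mitigation; the sketch below assumes it runs inside the guest, and the kubectl binary and manifest paths are copied from the error output above rather than being general:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Same apply the addon callback ran, retried with a short backoff
		// instead of giving up on the first connection-refused.
		args := []string{
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"apply", "--force",
			"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		}
		var lastErr error
		for attempt := 1; attempt <= 5; attempt++ {
			out, err := exec.Command("/var/lib/minikube/binaries/v1.20.0/kubectl", args...).CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			lastErr = fmt.Errorf("attempt %d: %v: %s", attempt, err, out)
			time.Sleep(time.Duration(2*attempt) * time.Second)
		}
		fmt.Println("apply never succeeded:", lastErr)
	}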

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397724 -n no-preload-397724
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397724 -n no-preload-397724: exit status 3 (3.168184898s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 20:20:54.850820   66654 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.214:22: connect: no route to host
	E0829 20:20:54.850843   66654 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.214:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-397724 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-397724 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.344135477s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.214:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-397724 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397724 -n no-preload-397724
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397724 -n no-preload-397724: exit status 3 (2.87103092s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 20:21:04.066824   66781 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.214:22: connect: no route to host
	E0829 20:21:04.066841   66781 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.214:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-397724" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
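Both status probes above degrade to "Error" because SSH to the node is unreachable, and "no route to host" on 22/tcp is the entire signal, which is why the harness notes that exit status 3 "may be ok". When triaging this class of failure it can help to probe the port directly; a minimal sketch, with the IP taken from the log purely as an example:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// IP:port copied from the errors above; substitute your node's address.
		conn, err := net.DialTimeout("tcp", "192.168.50.214:22", 3*time.Second)
		if err != nil {
			// A cleanly stopped VM tends to refuse or time out; "no route to host"
			// often points at the libvirt network or an expired DHCP lease instead.
			fmt.Println("ssh port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port open; host is at least network-reachable")
	}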

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-388383 -n embed-certs-388383
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-388383 -n embed-certs-388383: exit status 3 (3.167823357s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 20:21:05.090909   66811 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host
	E0829 20:21:05.090929   66811 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-388383 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-388383 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153133746s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-388383 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-388383 -n embed-certs-388383
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-388383 -n embed-certs-388383: exit status 3 (3.062445753s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 20:21:14.306878   66960 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host
	E0829 20:21:14.306900   66960 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-388383" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-145096 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-145096 --alsologtostderr -v=3: exit status 82 (2m0.509246449s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-145096"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 20:21:45.087434   67258 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:21:45.087695   67258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:21:45.087704   67258 out.go:358] Setting ErrFile to fd 2...
	I0829 20:21:45.087707   67258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:21:45.087864   67258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:21:45.088073   67258 out.go:352] Setting JSON to false
	I0829 20:21:45.088143   67258 mustload.go:65] Loading cluster: default-k8s-diff-port-145096
	I0829 20:21:45.088460   67258 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:21:45.088525   67258 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/config.json ...
	I0829 20:21:45.088689   67258 mustload.go:65] Loading cluster: default-k8s-diff-port-145096
	I0829 20:21:45.088813   67258 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:21:45.088840   67258 stop.go:39] StopHost: default-k8s-diff-port-145096
	I0829 20:21:45.089203   67258 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:21:45.089242   67258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:21:45.104211   67258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37669
	I0829 20:21:45.104660   67258 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:21:45.105247   67258 main.go:141] libmachine: Using API Version  1
	I0829 20:21:45.105268   67258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:21:45.105641   67258 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:21:45.108702   67258 out.go:177] * Stopping node "default-k8s-diff-port-145096"  ...
	I0829 20:21:45.110033   67258 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0829 20:21:45.110056   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:21:45.110299   67258 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0829 20:21:45.110337   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:21:45.113460   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:21:45.113885   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:21:45.113907   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:21:45.114060   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:21:45.114226   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:21:45.114400   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:21:45.114582   67258 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:21:45.220347   67258 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0829 20:21:45.282831   67258 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0829 20:21:45.346646   67258 main.go:141] libmachine: Stopping "default-k8s-diff-port-145096"...
	I0829 20:21:45.346708   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:21:45.348243   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Stop
	I0829 20:21:45.351596   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 0/120
	I0829 20:21:46.353217   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 1/120
	I0829 20:21:47.354634   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 2/120
	I0829 20:21:48.356089   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 3/120
	I0829 20:21:49.357677   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 4/120
	I0829 20:21:50.359881   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 5/120
	I0829 20:21:51.361329   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 6/120
	I0829 20:21:52.362906   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 7/120
	I0829 20:21:53.364311   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 8/120
	I0829 20:21:54.365725   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 9/120
	I0829 20:21:55.367096   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 10/120
	I0829 20:21:56.368662   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 11/120
	I0829 20:21:57.370142   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 12/120
	I0829 20:21:58.371740   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 13/120
	I0829 20:21:59.373216   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 14/120
	I0829 20:22:00.375334   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 15/120
	I0829 20:22:01.376913   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 16/120
	I0829 20:22:02.378300   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 17/120
	I0829 20:22:03.379856   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 18/120
	I0829 20:22:04.381254   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 19/120
	I0829 20:22:05.383662   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 20/120
	I0829 20:22:06.385060   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 21/120
	I0829 20:22:07.386454   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 22/120
	I0829 20:22:08.387883   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 23/120
	I0829 20:22:09.389295   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 24/120
	I0829 20:22:10.391450   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 25/120
	I0829 20:22:11.392731   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 26/120
	I0829 20:22:12.394196   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 27/120
	I0829 20:22:13.395510   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 28/120
	I0829 20:22:14.396986   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 29/120
	I0829 20:22:15.399291   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 30/120
	I0829 20:22:16.400751   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 31/120
	I0829 20:22:17.402449   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 32/120
	I0829 20:22:18.403816   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 33/120
	I0829 20:22:19.405300   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 34/120
	I0829 20:22:20.407438   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 35/120
	I0829 20:22:21.408795   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 36/120
	I0829 20:22:22.410335   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 37/120
	I0829 20:22:23.411648   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 38/120
	I0829 20:22:24.413124   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 39/120
	I0829 20:22:25.415554   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 40/120
	I0829 20:22:26.417132   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 41/120
	I0829 20:22:27.418863   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 42/120
	I0829 20:22:28.420213   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 43/120
	I0829 20:22:29.421666   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 44/120
	I0829 20:22:30.424106   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 45/120
	I0829 20:22:31.425555   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 46/120
	I0829 20:22:32.427085   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 47/120
	I0829 20:22:33.428623   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 48/120
	I0829 20:22:34.429949   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 49/120
	I0829 20:22:35.431505   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 50/120
	I0829 20:22:36.432960   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 51/120
	I0829 20:22:37.434306   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 52/120
	I0829 20:22:38.435851   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 53/120
	I0829 20:22:39.437382   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 54/120
	I0829 20:22:40.439726   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 55/120
	I0829 20:22:41.441168   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 56/120
	I0829 20:22:42.442849   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 57/120
	I0829 20:22:43.444634   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 58/120
	I0829 20:22:44.446086   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 59/120
	I0829 20:22:45.448409   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 60/120
	I0829 20:22:46.449874   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 61/120
	I0829 20:22:47.451417   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 62/120
	I0829 20:22:48.453028   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 63/120
	I0829 20:22:49.454358   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 64/120
	I0829 20:22:50.456407   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 65/120
	I0829 20:22:51.457845   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 66/120
	I0829 20:22:52.459654   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 67/120
	I0829 20:22:53.461112   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 68/120
	I0829 20:22:54.462516   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 69/120
	I0829 20:22:55.464924   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 70/120
	I0829 20:22:56.466320   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 71/120
	I0829 20:22:57.467775   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 72/120
	I0829 20:22:58.469148   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 73/120
	I0829 20:22:59.470524   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 74/120
	I0829 20:23:00.472852   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 75/120
	I0829 20:23:01.474192   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 76/120
	I0829 20:23:02.475749   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 77/120
	I0829 20:23:03.477198   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 78/120
	I0829 20:23:04.478792   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 79/120
	I0829 20:23:05.481145   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 80/120
	I0829 20:23:06.482599   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 81/120
	I0829 20:23:07.483961   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 82/120
	I0829 20:23:08.485468   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 83/120
	I0829 20:23:09.487156   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 84/120
	I0829 20:23:10.489264   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 85/120
	I0829 20:23:11.490690   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 86/120
	I0829 20:23:12.492123   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 87/120
	I0829 20:23:13.493595   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 88/120
	I0829 20:23:14.494975   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 89/120
	I0829 20:23:15.497299   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 90/120
	I0829 20:23:16.498811   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 91/120
	I0829 20:23:17.500266   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 92/120
	I0829 20:23:18.501724   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 93/120
	I0829 20:23:19.503217   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 94/120
	I0829 20:23:20.505434   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 95/120
	I0829 20:23:21.506849   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 96/120
	I0829 20:23:22.508279   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 97/120
	I0829 20:23:23.509755   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 98/120
	I0829 20:23:24.511330   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 99/120
	I0829 20:23:25.513553   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 100/120
	I0829 20:23:26.515046   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 101/120
	I0829 20:23:27.516448   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 102/120
	I0829 20:23:28.517966   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 103/120
	I0829 20:23:29.519404   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 104/120
	I0829 20:23:30.521527   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 105/120
	I0829 20:23:31.523272   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 106/120
	I0829 20:23:32.524808   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 107/120
	I0829 20:23:33.526285   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 108/120
	I0829 20:23:34.527759   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 109/120
	I0829 20:23:35.529242   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 110/120
	I0829 20:23:36.530624   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 111/120
	I0829 20:23:37.531955   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 112/120
	I0829 20:23:38.533362   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 113/120
	I0829 20:23:39.534741   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 114/120
	I0829 20:23:40.536782   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 115/120
	I0829 20:23:41.538255   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 116/120
	I0829 20:23:42.539652   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 117/120
	I0829 20:23:43.541035   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 118/120
	I0829 20:23:44.542440   67258 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for machine to stop 119/120
	I0829 20:23:45.543479   67258 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0829 20:23:45.543538   67258 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0829 20:23:45.545351   67258 out.go:201] 
	W0829 20:23:45.546460   67258 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0829 20:23:45.546478   67258 out.go:270] * 
	* 
	W0829 20:23:45.549505   67258 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 20:23:45.550801   67258 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-145096 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096
E0829 20:23:45.974820   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096: exit status 3 (18.641944252s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 20:24:04.194770   67878 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.140:22: connect: no route to host
	E0829 20:24:04.194789   67878 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.140:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-145096" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.15s)
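The 120 "Waiting for machine to stop n/120" lines above are a one-second poll against libmachine's state with a two-minute budget; when the domain still reports Running at the end, the stop surfaces GUEST_STOP_TIMEOUT and exit status 82. In compressed form the loop has roughly this shape; getState below is a hypothetical stub standing in for the kvm2 driver's .GetState call:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// getState is a hypothetical stub for the driver's .GetState call.
	func getState() (string, error) { return "Running", nil }

	func waitStopped(budget time.Duration) error {
		deadline := time.Now().Add(budget)
		for time.Now().Before(deadline) {
			state, err := getState()
			if err != nil {
				return err
			}
			if state == "Stopped" {
				return nil
			}
			time.Sleep(time.Second) // one "Waiting for machine to stop n/120" tick
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitStopped(120 * time.Second); err != nil {
			fmt.Println("stop err:", err) // mirrors the GUEST_STOP_TIMEOUT above
		}
	}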

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (712.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-032002 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-032002 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m48.630952808s)

                                                
                                                
-- stdout --
	* [old-k8s-version-032002] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-032002" primary control-plane node in "old-k8s-version-032002" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-032002" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 20:22:38.816897   67607 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:22:38.817136   67607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:22:38.817144   67607 out.go:358] Setting ErrFile to fd 2...
	I0829 20:22:38.817149   67607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:22:38.817306   67607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:22:38.817815   67607 out.go:352] Setting JSON to false
	I0829 20:22:38.818799   67607 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7506,"bootTime":1724955453,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 20:22:38.818851   67607 start.go:139] virtualization: kvm guest
	I0829 20:22:38.820913   67607 out.go:177] * [old-k8s-version-032002] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 20:22:38.822193   67607 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 20:22:38.822202   67607 notify.go:220] Checking for updates...
	I0829 20:22:38.824889   67607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 20:22:38.826035   67607 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:22:38.827228   67607 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:22:38.828371   67607 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 20:22:38.829465   67607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 20:22:38.830916   67607 config.go:182] Loaded profile config "old-k8s-version-032002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 20:22:38.831305   67607 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:22:38.831368   67607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:22:38.846064   67607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33547
	I0829 20:22:38.846424   67607 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:22:38.846932   67607 main.go:141] libmachine: Using API Version  1
	I0829 20:22:38.846951   67607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:22:38.847291   67607 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:22:38.847480   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:22:38.849362   67607 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0829 20:22:38.850492   67607 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 20:22:38.851065   67607 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:22:38.851110   67607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:22:38.865782   67607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43185
	I0829 20:22:38.866183   67607 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:22:38.866674   67607 main.go:141] libmachine: Using API Version  1
	I0829 20:22:38.866694   67607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:22:38.867031   67607 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:22:38.867230   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:22:38.901814   67607 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 20:22:38.903057   67607 start.go:297] selected driver: kvm2
	I0829 20:22:38.903068   67607 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:22:38.903174   67607 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 20:22:38.903814   67607 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:22:38.903879   67607 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 20:22:38.918843   67607 install.go:137] /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0829 20:22:38.919208   67607 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:22:38.919300   67607 cni.go:84] Creating CNI manager for ""
	I0829 20:22:38.919313   67607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:22:38.919351   67607 start.go:340] cluster config:
	{Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:22:38.919470   67607 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:22:38.921350   67607 out.go:177] * Starting "old-k8s-version-032002" primary control-plane node in "old-k8s-version-032002" cluster
	I0829 20:22:38.922561   67607 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 20:22:38.922605   67607 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0829 20:22:38.922616   67607 cache.go:56] Caching tarball of preloaded images
	I0829 20:22:38.922694   67607 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 20:22:38.922705   67607 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0829 20:22:38.922795   67607 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/config.json ...
	I0829 20:22:38.922963   67607 start.go:360] acquireMachinesLock for old-k8s-version-032002: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:26:02.603329   67607 start.go:364] duration metric: took 3m23.680319578s to acquireMachinesLock for "old-k8s-version-032002"
	I0829 20:26:02.603393   67607 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:02.603404   67607 fix.go:54] fixHost starting: 
	I0829 20:26:02.603837   67607 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:02.603884   67607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:02.621398   67607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35977
	I0829 20:26:02.621840   67607 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:02.622425   67607 main.go:141] libmachine: Using API Version  1
	I0829 20:26:02.622460   67607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:02.622810   67607 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:02.623040   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:02.623201   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetState
	I0829 20:26:02.624854   67607 fix.go:112] recreateIfNeeded on old-k8s-version-032002: state=Stopped err=<nil>
	I0829 20:26:02.624880   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	W0829 20:26:02.625020   67607 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:02.627161   67607 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-032002" ...
	I0829 20:26:02.628419   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .Start
	I0829 20:26:02.628578   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring networks are active...
	I0829 20:26:02.629339   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network default is active
	I0829 20:26:02.629732   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network mk-old-k8s-version-032002 is active
	I0829 20:26:02.630188   67607 main.go:141] libmachine: (old-k8s-version-032002) Getting domain xml...
	I0829 20:26:02.630924   67607 main.go:141] libmachine: (old-k8s-version-032002) Creating domain...
	I0829 20:26:03.867691   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting to get IP...
	I0829 20:26:03.868798   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:03.869246   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:03.869318   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:03.869235   68552 retry.go:31] will retry after 220.928648ms: waiting for machine to come up
	I0829 20:26:04.091675   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.092057   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.092084   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.092020   68552 retry.go:31] will retry after 352.781755ms: waiting for machine to come up
	I0829 20:26:04.446766   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.447277   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.447301   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.447224   68552 retry.go:31] will retry after 480.96031ms: waiting for machine to come up
	I0829 20:26:04.929561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.930149   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.930181   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.930051   68552 retry.go:31] will retry after 415.057247ms: waiting for machine to come up
	I0829 20:26:05.346757   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.347224   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.347258   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.347196   68552 retry.go:31] will retry after 609.958508ms: waiting for machine to come up
	I0829 20:26:05.959227   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.959774   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.959825   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.959702   68552 retry.go:31] will retry after 680.801337ms: waiting for machine to come up
	I0829 20:26:06.642811   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:06.643312   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:06.643343   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:06.643269   68552 retry.go:31] will retry after 995.561322ms: waiting for machine to come up
	I0829 20:26:07.640147   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:07.640617   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:07.640652   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:07.640588   68552 retry.go:31] will retry after 1.22436043s: waiting for machine to come up
	I0829 20:26:08.866518   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:08.866954   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:08.866985   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:08.866896   68552 retry.go:31] will retry after 1.707701085s: waiting for machine to come up
	I0829 20:26:10.576676   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:10.577094   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:10.577124   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:10.577047   68552 retry.go:31] will retry after 1.496799212s: waiting for machine to come up
	I0829 20:26:12.075964   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:12.076412   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:12.076451   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:12.076377   68552 retry.go:31] will retry after 2.246779697s: waiting for machine to come up
	I0829 20:26:14.324452   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:14.324770   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:14.324808   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:14.324748   68552 retry.go:31] will retry after 3.172592587s: waiting for machine to come up
	I0829 20:26:17.500203   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:17.500540   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:17.500573   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:17.500485   68552 retry.go:31] will retry after 2.81386002s: waiting for machine to come up
	I0829 20:26:20.317138   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317672   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has current primary IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317700   67607 main.go:141] libmachine: (old-k8s-version-032002) Found IP for machine: 192.168.39.116
	I0829 20:26:20.317716   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserving static IP address...
	I0829 20:26:20.318143   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.318169   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserved static IP address: 192.168.39.116
	I0829 20:26:20.318189   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | skip adding static IP to network mk-old-k8s-version-032002 - found existing host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"}
	I0829 20:26:20.318208   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Getting to WaitForSSH function...
	I0829 20:26:20.318217   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting for SSH to be available...
	I0829 20:26:20.320598   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.320961   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.320989   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.321082   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH client type: external
	I0829 20:26:20.321121   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa (-rw-------)
	I0829 20:26:20.321156   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:20.321171   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | About to run SSH command:
	I0829 20:26:20.321185   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | exit 0
	I0829 20:26:20.446805   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | SSH cmd err, output: <nil>: 
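
The retry.go lines above show libmachine polling for the VM's DHCP lease with a growing, jittered delay until an IP appears, then falling through to WaitForSSH. Below is a minimal Go sketch of that wait-with-backoff shape; the retry function and its timing constants are illustrative assumptions, not minikube's actual API.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry calls fn until it succeeds or the deadline passes, sleeping a
    // jittered, growing interval between attempts, the same shape as the
    // "will retry after ..." lines in the log above.
    func retry(timeout time.Duration, fn func() error) error {
        deadline := time.Now().Add(timeout)
        backoff := 200 * time.Millisecond
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out; last error: %w", err)
            }
            // Jitter the delay so concurrent waiters don't poll in lockstep.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            backoff = backoff * 3 / 2 // grow roughly 1.5x per attempt
        }
    }

    func main() {
        attempts := 0
        err := retry(10*time.Second, func() error {
            attempts++
            if attempts < 4 {
                return errors.New("unable to find current IP address")
            }
            return nil
        })
        fmt.Println("result:", err)
    }
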
	I0829 20:26:20.447204   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetConfigRaw
	I0829 20:26:20.447944   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.450726   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.451160   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451464   67607 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/config.json ...
	I0829 20:26:20.451670   67607 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:20.451690   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:20.451886   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.454120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454496   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.454566   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454648   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.454808   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.454975   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.455123   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.455282   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.455520   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.455533   67607 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:20.555074   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:20.555100   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555331   67607 buildroot.go:166] provisioning hostname "old-k8s-version-032002"
	I0829 20:26:20.555353   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555540   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.558576   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559058   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.559086   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559273   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.559490   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559661   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559834   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.560026   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.560189   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.560201   67607 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-032002 && echo "old-k8s-version-032002" | sudo tee /etc/hostname
	I0829 20:26:20.675352   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-032002
	
	I0829 20:26:20.675400   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.678472   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.678908   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.678944   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.679139   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.679341   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679533   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679710   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.679884   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.680090   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.680108   67607 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-032002' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-032002/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-032002' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:20.789673   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
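
The shell snippet above keeps /etc/hosts consistent with the new hostname: if no line already ends in old-k8s-version-032002, it either rewrites the existing 127.0.1.1 entry or appends one. The same edit expressed as a Go string transformation, as a rough sketch: ensureHostsEntry is a made-up helper, and it prints the result rather than writing the file.

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the grep/sed logic from the log: if no line
    // already maps the hostname, rewrite the 127.0.1.1 line or append one.
    func ensureHostsEntry(hosts, hostname string) string {
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
            return hosts // hostname already present
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Print(ensureHostsEntry(string(data), "old-k8s-version-032002"))
    }
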
	I0829 20:26:20.789713   67607 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:20.789744   67607 buildroot.go:174] setting up certificates
	I0829 20:26:20.789753   67607 provision.go:84] configureAuth start
	I0829 20:26:20.789761   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.790067   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.792822   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793152   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.793173   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793338   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.795624   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.795948   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.795974   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.796080   67607 provision.go:143] copyHostCerts
	I0829 20:26:20.796148   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:20.796168   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:20.796236   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:20.796344   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:20.796355   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:20.796387   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:20.796467   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:20.796476   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:20.796503   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:20.796573   67607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-032002 san=[127.0.0.1 192.168.39.116 localhost minikube old-k8s-version-032002]
	I0829 20:26:20.906382   67607 provision.go:177] copyRemoteCerts
	I0829 20:26:20.906436   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:20.906466   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.909180   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909488   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.909519   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909666   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.909831   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.909963   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.910062   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:20.989017   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:21.018571   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 20:26:21.043015   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:21.067288   67607 provision.go:87] duration metric: took 277.522292ms to configureAuth
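
configureAuth above regenerates the machine's server certificate so its SANs cover every name the daemon might be dialed by: 127.0.0.1, the VM IP, localhost, minikube, and the profile name. A self-contained sketch of issuing such a certificate with Go's crypto/x509 follows; the throwaway in-process CA and the 26280h validity stand in for ca.pem/ca-key.pem and the CertExpiration value from the config dump, and error handling is elided for brevity.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway in-process CA, standing in for ca.pem/ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the SAN list from the provision.go line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-032002"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-032002"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.116")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
        fmt.Println("issued server cert for SANs:", srvTmpl.DNSNames, srvTmpl.IPAddresses)
    }
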
	I0829 20:26:21.067322   67607 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:21.067527   67607 config.go:182] Loaded profile config "old-k8s-version-032002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 20:26:21.067607   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.070264   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070642   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.070679   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070881   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.071088   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071288   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071465   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.071661   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.071886   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.071923   67607 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:21.290979   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:21.291003   67607 machine.go:96] duration metric: took 839.319831ms to provisionDockerMachine
	I0829 20:26:21.291014   67607 start.go:293] postStartSetup for "old-k8s-version-032002" (driver="kvm2")
	I0829 20:26:21.291026   67607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:21.291046   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.291342   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:21.291366   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.293946   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294245   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.294273   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294464   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.294686   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.294840   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.294964   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.373592   67607 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:21.377797   67607 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:21.377826   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:21.377892   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:21.377966   67607 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:21.378054   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:21.387886   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:21.413456   67607 start.go:296] duration metric: took 122.429334ms for postStartSetup
	I0829 20:26:21.413497   67607 fix.go:56] duration metric: took 18.810093949s for fixHost
	I0829 20:26:21.413522   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.416095   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416391   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.416418   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416594   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.416803   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.416970   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.417115   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.417272   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.417474   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.417489   67607 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:21.515167   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963181.486447470
	
	I0829 20:26:21.515190   67607 fix.go:216] guest clock: 1724963181.486447470
	I0829 20:26:21.515200   67607 fix.go:229] Guest: 2024-08-29 20:26:21.48644747 +0000 UTC Remote: 2024-08-29 20:26:21.413502498 +0000 UTC m=+222.629982255 (delta=72.944972ms)
	I0829 20:26:21.515225   67607 fix.go:200] guest clock delta is within tolerance: 72.944972ms
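
The fix.go lines above run date +%s.%N in the guest and compare the result against the host clock, resyncing only when the delta exceeds a tolerance. A small sketch of that comparison, reusing the two timestamps from the log; parseGuestClock is a made-up helper and the 2s tolerance is an assumption, not minikube's actual threshold.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns `date +%s.%N` output (nine-digit fraction)
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1724963181.486447470") // guest value from the log
        if err != nil {
            panic(err)
        }
        host := time.Unix(1724963181, 413502498) // host-side timestamp from the log
        delta := guest.Sub(host)
        const tolerance = 2 * time.Second // assumed tolerance for illustration
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }

Running this prints a 72.944972ms delta, matching the fix.go:229 line above.
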
	I0829 20:26:21.515232   67607 start.go:83] releasing machines lock for "old-k8s-version-032002", held for 18.911866017s
	I0829 20:26:21.515278   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.515596   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:21.518247   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518682   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.518710   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518835   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519589   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519680   67607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:21.519736   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.519843   67607 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:21.519869   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.522261   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522614   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.522643   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522763   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.522919   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523044   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.523071   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523073   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.523241   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.523240   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.523413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523560   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523712   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.599524   67607 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:21.629122   67607 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:21.778437   67607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:21.784642   67607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:21.784714   67607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:21.802019   67607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:21.802043   67607 start.go:495] detecting cgroup driver to use...
	I0829 20:26:21.802100   67607 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:21.817407   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:21.831514   67607 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:21.831578   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:21.845224   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:21.858522   67607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:21.972769   67607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:22.115154   67607 docker.go:233] disabling docker service ...
	I0829 20:26:22.115240   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:22.130015   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:22.143186   67607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:22.294113   67607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:22.432373   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:22.446427   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:22.465151   67607 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 20:26:22.465218   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.476104   67607 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:22.476177   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.486627   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.497782   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.509869   67607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:22.521347   67607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:22.531406   67607 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:22.531455   67607 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:22.544949   67607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 20:26:22.554918   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:22.687909   67607 ssh_runner.go:195] Run: sudo systemctl restart crio
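
The crio.go steps above are three plain sed rewrites of /etc/crio/crio.conf.d/02-crio.conf: point pause_image at registry.k8s.io/pause:3.2, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod" after it, followed by a crio restart. The same rewrite as a pure Go string transformation; rewriteCrioConf is a made-up helper and the sample config in main is fabricated for illustration.

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf applies the three sed edits from the log to the given
    // config text: pause_image, cgroup_manager, and a fresh conmon_cgroup line.
    func rewriteCrioConf(conf string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
        conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).
            ReplaceAllString(conf, "") // drop any existing conmon_cgroup line first
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
        return conf
    }

    func main() {
        fmt.Print(rewriteCrioConf(`[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `))
    }
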
	I0829 20:26:22.808522   67607 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:22.808595   67607 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:22.814348   67607 start.go:563] Will wait 60s for crictl version
	I0829 20:26:22.814411   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:22.818348   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:22.863797   67607 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
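
start.go above gives the runtime 60s for the socket path /var/run/crio/crio.sock to exist and another 60s for crictl version to answer. A compact sketch of that two-stage wait; waitFor and the 500ms poll interval are assumptions, while the socket path and crictl invocation are copied from the log.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitFor polls check every 500ms until it succeeds or timeout elapses.
    func waitFor(timeout time.Duration, what string, check func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := check(); err == nil {
                return nil
            } else if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s: %w", what, err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        sock := "/var/run/crio/crio.sock"
        err := waitFor(60*time.Second, "socket path "+sock, func() error {
            _, err := os.Stat(sock)
            return err
        })
        if err == nil {
            err = waitFor(60*time.Second, "crictl version", func() error {
                return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
            })
        }
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio socket and crictl are ready")
    }
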
	I0829 20:26:22.863883   67607 ssh_runner.go:195] Run: crio --version
	I0829 20:26:22.893173   67607 ssh_runner.go:195] Run: crio --version
	I0829 20:26:22.923146   67607 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 20:26:22.924299   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:22.927222   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927564   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:22.927589   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927772   67607 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:22.932100   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:22.945139   67607 kubeadm.go:883] updating cluster {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:22.945274   67607 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 20:26:22.945334   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:22.990592   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:22.990668   67607 ssh_runner.go:195] Run: which lz4
	I0829 20:26:22.995104   67607 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:22.999667   67607 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:22.999703   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0829 20:26:24.727216   67607 crio.go:462] duration metric: took 1.732148589s to copy over tarball
	I0829 20:26:24.727294   67607 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:27.715640   67607 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.988318238s)
	I0829 20:26:27.715664   67607 crio.go:469] duration metric: took 2.988419957s to extract the tarball
	I0829 20:26:27.715672   67607 ssh_runner.go:146] rm: /preloaded.tar.lz4
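
Because crictl reported no preloaded kube-apiserver image, the preload path scp'd the lz4 tarball to /preloaded.tar.lz4, unpacked it into /var with security xattrs preserved, and deleted it. The extraction step, reproduced as a short Go wrapper around the same tar invocation; extractPreload is a made-up name, and this assumes tar and lz4 on PATH plus privileges to write /var.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload unpacks a preloaded-images tarball into dir, preserving
    // security xattrs, exactly as the ssh_runner line in the log does.
    func extractPreload(tarball, dir string) error {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", dir, "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            fmt.Fprintln(os.Stderr, "extract failed:", err)
            os.Exit(1)
        }
        // The log then removes the tarball to reclaim disk space.
        _ = exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run()
    }
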
	I0829 20:26:27.764192   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:27.797388   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:27.797422   67607 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:26:27.797501   67607 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.797536   67607 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.797549   67607 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.797557   67607 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 20:26:27.797511   67607 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:27.797629   67607 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.797637   67607 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.797519   67607 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799128   67607 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799208   67607 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.799251   67607 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 20:26:27.799361   67607 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.799386   67607 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.799463   67607 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.799697   67607 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.799830   67607 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:27.978022   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.978296   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.981616   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.998987   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.001078   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.004185   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.004672   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 20:26:28.103885   67607 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 20:26:28.103953   67607 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.104013   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.122203   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:28.129983   67607 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 20:26:28.130028   67607 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.130076   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.165427   67607 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 20:26:28.165470   67607 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.165521   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.199971   67607 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 20:26:28.199990   67607 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 20:26:28.200015   67607 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.200021   67607 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200105   67607 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 20:26:28.200155   67607 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.200199   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200204   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200113   67607 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 20:26:28.200325   67607 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 20:26:28.200356   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.329091   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.329139   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.329187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.329260   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.329362   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.484805   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.484857   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.484888   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.484943   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.484963   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.485009   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.487351   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.615121   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.615187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.645371   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.645433   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.645524   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.645573   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.645638   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 20:26:28.729141   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 20:26:28.762530   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 20:26:28.762592   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 20:26:28.782117   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 20:26:28.782155   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 20:26:28.782195   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 20:26:28.782229   67607 cache_images.go:92] duration metric: took 984.791099ms to LoadCachedImages
	W0829 20:26:28.782293   67607 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
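The pattern in the log just above — probe each expected image, and for every one missing from the container runtime resolve crictl with `which`, then run `sudo crictl rmi <image>` — can be sketched in a few lines of Go. This is an illustrative approximation under stated assumptions (hard-coded image list, crictl on PATH, passwordless sudo), not minikube's actual cache_images implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirror the repeated `which crictl` runs above: resolve the binary once.
	out, err := exec.Command("which", "crictl").Output()
	if err != nil {
		fmt.Println("crictl not found:", err)
		return
	}
	crictl := strings.TrimSpace(string(out))

	// Hypothetical list; the log walks the whole v1.20.0 control-plane image set.
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/pause:3.2",
	}
	for _, img := range images {
		// Equivalent of: sudo /usr/bin/crictl rmi <image>
		if msg, err := exec.Command("sudo", crictl, "rmi", img).CombinedOutput(); err != nil {
			fmt.Printf("rmi %s: %v: %s", img, err, msg)
		}
	}
}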
	I0829 20:26:28.782310   67607 kubeadm.go:934] updating node { 192.168.39.116 8443 v1.20.0 crio true true} ...
	I0829 20:26:28.782452   67607 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-032002 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:28.782518   67607 ssh_runner.go:195] Run: crio config
	I0829 20:26:28.832785   67607 cni.go:84] Creating CNI manager for ""
	I0829 20:26:28.832807   67607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:28.832824   67607 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:28.832843   67607 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-032002 NodeName:old-k8s-version-032002 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 20:26:28.832982   67607 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-032002"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:28.833059   67607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 20:26:28.843483   67607 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:28.843566   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:28.853276   67607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 20:26:28.870579   67607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:28.888053   67607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
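The kubeadm config dumped above is rendered per-profile (note the node name, IP, and version substituted throughout) and staged as /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of that kind of substitution using Go's text/template — the fragment, struct, and field names here are assumptions for illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Hypothetical subset of the values substituted into the config above.
type profile struct {
	Name, IP, Version string
}

// A fragment of the ClusterConfiguration; illustrative only.
const frag = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.IP}}"]
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
kubernetesVersion: {{.Version}}
# rendered for profile {{.Name}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	_ = t.Execute(os.Stdout, profile{
		Name:    "old-k8s-version-032002",
		IP:      "192.168.39.116",
		Version: "v1.20.0",
	})
}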
	I0829 20:26:28.905988   67607 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:28.910048   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
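The bash one-liner above makes the /etc/hosts update idempotent: strip any existing control-plane.minikube.internal entry, append the current one, stage the result in a temp file, and sudo-cp it back. A rough Go equivalent of the same filter-and-append step (entry and staging path hard-coded for illustration):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.116\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale control-plane entry, as the grep -v above does.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// Stage the result; the log then installs it with `sudo cp`, since
	// writing /etc/hosts directly requires root.
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}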
	I0829 20:26:28.924996   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:29.075015   67607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:29.095381   67607 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002 for IP: 192.168.39.116
	I0829 20:26:29.095411   67607 certs.go:194] generating shared ca certs ...
	I0829 20:26:29.095430   67607 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.095605   67607 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:29.095686   67607 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:29.095706   67607 certs.go:256] generating profile certs ...
	I0829 20:26:29.095847   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.key
	I0829 20:26:29.095928   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key.a1a2aebb
	I0829 20:26:29.095984   67607 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key
	I0829 20:26:29.096135   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:29.096184   67607 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:29.096198   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:29.096227   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:29.096259   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:29.096299   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:29.096378   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:29.097276   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:29.144259   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:29.171420   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:29.198554   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:29.230750   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 20:26:29.269978   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:26:29.299839   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:29.333742   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:26:29.358352   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:29.382648   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:29.406773   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:29.434106   67607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:29.451913   67607 ssh_runner.go:195] Run: openssl version
	I0829 20:26:29.457722   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:29.469147   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474048   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474094   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.480082   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:29.491083   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:29.501994   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508594   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508643   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.516331   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:29.531067   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:29.543998   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548781   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548845   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.555052   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:29.567902   67607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:29.572879   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:29.579506   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:29.585887   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:29.592262   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:29.598566   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:29.604672   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
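Each `-checkend 86400` invocation above asks openssl whether the certificate expires within the next 24 hours (86400 seconds). The same check can be done natively in Go with crypto/x509; a small sketch against one of the cert paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Same idea as: openssl x509 -noout -in <crt> -checkend 86400
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate is valid for at least 24h")
	}
}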
	I0829 20:26:29.610830   67607 kubeadm.go:392] StartCluster: {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:29.612915   67607 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:29.613015   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.655224   67607 cri.go:89] found id: ""
	I0829 20:26:29.655314   67607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:29.666216   67607 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:29.666241   67607 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:29.666292   67607 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:29.676908   67607 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:29.678276   67607 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-032002" does not appear in /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:26:29.679313   67607 kubeconfig.go:62] /home/jenkins/minikube-integration/19530-11185/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-032002" cluster setting kubeconfig missing "old-k8s-version-032002" context setting]
	I0829 20:26:29.680756   67607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.764872   67607 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:29.776873   67607 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.116
	I0829 20:26:29.776914   67607 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:29.776926   67607 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:29.776987   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.819268   67607 cri.go:89] found id: ""
	I0829 20:26:29.819347   67607 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:29.840386   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:29.851624   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:29.851650   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:29.851710   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:26:29.861439   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:29.861504   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:29.871594   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:26:29.881126   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:29.881199   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:29.890984   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.900838   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:29.900913   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.910677   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:26:29.920008   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:29.920073   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
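The grep/rm pairs above implement a simple rule: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is treated as stale and removed before kubeadm regenerates it. A compact Go rendering of that rule (the file list mirrors the log; this is a sketch, not minikube's code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: remove it, as `sudo rm -f` does above.
			os.Remove(f) // error ignored, matching rm -f semantics
			fmt.Println("cleared stale config:", f)
		}
	}
}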
	I0829 20:26:29.929631   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:29.939864   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.096029   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.816696   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.043310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.139291   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.248095   67607 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:31.248190   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:31.749101   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.248718   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.748783   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.248254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.748557   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:34.249231   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:34.748279   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.249171   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.748943   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.249181   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.748307   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.248484   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.748261   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.248332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.748423   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:39.248306   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:39.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.248975   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.748948   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.249144   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.749013   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.248363   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.748624   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.248833   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.748535   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:44.248615   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:44.748528   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.748453   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.248927   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.748628   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.248556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.748332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.248373   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.749111   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.248291   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.748360   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.248427   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.749087   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.248381   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.748488   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.249250   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.748715   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.748915   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:54.248998   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:54.748438   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.249066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.749293   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.248457   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.748509   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.248949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.748228   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.248717   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.748412   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:59.248692   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:59.748815   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.748264   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.249241   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.748894   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.249045   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.748765   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.248902   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.748333   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:04.249082   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:04.748738   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.248398   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.749056   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.248693   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.748904   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.249145   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.749131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.248774   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.748444   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:09.248746   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:09.748722   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.249074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.748647   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.248236   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.749057   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.249227   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.748688   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.749298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:14.249254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:14.748957   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.249229   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.749137   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.248967   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.748254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.248929   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.748339   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.248666   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.748712   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.248924   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.248851   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.748547   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.248298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.748802   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.248680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.748271   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.248491   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.748803   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:24.248456   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:24.748347   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.248337   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.748905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.248912   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.749302   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.249058   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.749105   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.248548   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.748298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:29.248994   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:29.749020   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.248983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.748247   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
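The run above polls `pgrep` for a kube-apiserver process roughly every 500ms; after about a minute with no match it gives up and falls back to gathering diagnostics (below). A generic deadline-bounded poll loop in Go, with the probe command and timings taken from the log and the helper name being an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollUntil runs check every interval until it succeeds or timeout elapses.
func pollUntil(interval, timeout time.Duration, check func() bool) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return true
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	ok := pollUntil(500*time.Millisecond, time.Minute, func() bool {
		// Same probe as the log: does a kube-apiserver process exist yet?
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	})
	fmt.Println("apiserver process found:", ok)
}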
	I0829 20:27:31.249052   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:31.249133   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:31.293442   67607 cri.go:89] found id: ""
	I0829 20:27:31.293466   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.293473   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:31.293479   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:31.293527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:31.333976   67607 cri.go:89] found id: ""
	I0829 20:27:31.333999   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.334006   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:31.334011   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:31.334055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:31.373680   67607 cri.go:89] found id: ""
	I0829 20:27:31.373707   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.373715   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:31.373720   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:31.373766   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:31.407798   67607 cri.go:89] found id: ""
	I0829 20:27:31.407824   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.407832   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:31.407837   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:31.407893   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:31.444409   67607 cri.go:89] found id: ""
	I0829 20:27:31.444437   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.444445   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:31.444451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:31.444512   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:31.479313   67607 cri.go:89] found id: ""
	I0829 20:27:31.479333   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.479341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:31.479347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:31.479403   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:31.516056   67607 cri.go:89] found id: ""
	I0829 20:27:31.516089   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.516100   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:31.516108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:31.516168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:31.555324   67607 cri.go:89] found id: ""
	I0829 20:27:31.555349   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.555357   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:31.555365   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:31.555375   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:31.626397   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:31.626434   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:31.672006   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:31.672038   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:31.724691   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:31.724727   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:31.740283   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:31.740324   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:31.874007   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:34.374203   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:34.387817   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:34.387888   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:34.423254   67607 cri.go:89] found id: ""
	I0829 20:27:34.423279   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.423286   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:34.423296   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:34.423343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:34.457741   67607 cri.go:89] found id: ""
	I0829 20:27:34.457768   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.457775   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:34.457781   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:34.457827   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:34.498432   67607 cri.go:89] found id: ""
	I0829 20:27:34.498457   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.498464   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:34.498469   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:34.498523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:34.534290   67607 cri.go:89] found id: ""
	I0829 20:27:34.534317   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.534324   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:34.534330   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:34.534380   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:34.570878   67607 cri.go:89] found id: ""
	I0829 20:27:34.570909   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.570919   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:34.570928   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:34.570986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:34.615735   67607 cri.go:89] found id: ""
	I0829 20:27:34.615762   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.615769   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:34.615775   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:34.615824   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:34.656667   67607 cri.go:89] found id: ""
	I0829 20:27:34.656706   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.656721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:34.656730   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:34.656779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:34.708906   67607 cri.go:89] found id: ""
	I0829 20:27:34.708928   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.708937   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:34.708947   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:34.708962   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:34.767382   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:34.767417   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:34.786523   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:34.786574   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:34.872832   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:34.872857   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:34.872871   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:34.954581   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:34.954620   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:37.497810   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:37.511479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:37.511539   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:37.547930   67607 cri.go:89] found id: ""
	I0829 20:27:37.547962   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.547972   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:37.547980   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:37.548035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:37.585281   67607 cri.go:89] found id: ""
	I0829 20:27:37.585304   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.585312   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:37.585318   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:37.585365   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:37.622201   67607 cri.go:89] found id: ""
	I0829 20:27:37.622229   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.622241   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:37.622246   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:37.622295   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:37.657248   67607 cri.go:89] found id: ""
	I0829 20:27:37.657274   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.657281   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:37.657289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:37.657335   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:37.691674   67607 cri.go:89] found id: ""
	I0829 20:27:37.691703   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.691711   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:37.691716   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:37.691764   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:37.729523   67607 cri.go:89] found id: ""
	I0829 20:27:37.729548   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.729557   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:37.729562   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:37.729609   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:37.764601   67607 cri.go:89] found id: ""
	I0829 20:27:37.764629   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.764637   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:37.764643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:37.764705   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:37.799228   67607 cri.go:89] found id: ""
	I0829 20:27:37.799259   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.799270   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:37.799281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:37.799301   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:37.848128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:37.848158   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:37.862610   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:37.862640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:37.936859   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:37.936888   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:37.936903   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:38.013647   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:38.013681   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:40.551395   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:40.568100   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:40.568181   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:40.616582   67607 cri.go:89] found id: ""
	I0829 20:27:40.616611   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.616623   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:40.616631   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:40.616695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:40.690580   67607 cri.go:89] found id: ""
	I0829 20:27:40.690620   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.690631   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:40.690638   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:40.690695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:40.733624   67607 cri.go:89] found id: ""
	I0829 20:27:40.733653   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.733662   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:40.733670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:40.733733   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:40.767499   67607 cri.go:89] found id: ""
	I0829 20:27:40.767528   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.767538   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:40.767546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:40.767619   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:40.806973   67607 cri.go:89] found id: ""
	I0829 20:27:40.807002   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.807009   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:40.807015   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:40.807079   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:40.842311   67607 cri.go:89] found id: ""
	I0829 20:27:40.842334   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.842341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:40.842347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:40.842401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:40.880208   67607 cri.go:89] found id: ""
	I0829 20:27:40.880238   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.880248   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:40.880255   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:40.880309   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:40.918395   67607 cri.go:89] found id: ""
	I0829 20:27:40.918424   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.918435   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:40.918445   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:40.918459   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:40.972396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:40.972437   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:40.986136   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:40.986169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:41.064600   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:41.064623   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:41.064634   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:41.146653   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:41.146687   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
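Every retry in this window follows the same probe pattern: minikube first greps for a kube-apiserver process, then asks crictl for each expected control-plane container by name; an empty ID list (found id: "") means the container was never created, which is why each probe ends in "No container was found matching ...". A minimal Go sketch of that probe (hypothetical file and helper names; the real logic lives in minikube's cri.go and ssh_runner.go, and assumes crictl is on PATH):

	// probe.go: a sketch of the container probe seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The component names minikube checks, copied from the log.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Mirrors: sudo crictl ps -a --quiet --name=<name>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				// Corresponds to the logs.go:278 warning in the report.
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %v\n", name, ids)
		}
	}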
	I0829 20:27:43.687773   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:43.701576   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:43.701645   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:43.737259   67607 cri.go:89] found id: ""
	I0829 20:27:43.737282   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.737289   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:43.737299   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:43.737346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:43.772678   67607 cri.go:89] found id: ""
	I0829 20:27:43.772702   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.772709   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:43.772714   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:43.772776   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:43.806788   67607 cri.go:89] found id: ""
	I0829 20:27:43.806821   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.806831   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:43.806839   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:43.806900   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:43.841738   67607 cri.go:89] found id: ""
	I0829 20:27:43.841759   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.841767   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:43.841772   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:43.841829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:43.878420   67607 cri.go:89] found id: ""
	I0829 20:27:43.878449   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.878459   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:43.878466   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:43.878527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:43.914307   67607 cri.go:89] found id: ""
	I0829 20:27:43.914335   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.914345   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:43.914352   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:43.914413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:43.958827   67607 cri.go:89] found id: ""
	I0829 20:27:43.958853   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.958865   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:43.958871   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:43.958935   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:43.997397   67607 cri.go:89] found id: ""
	I0829 20:27:43.997423   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.997432   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:43.997442   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:43.997455   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:44.049245   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:44.049280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:44.063473   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:44.063511   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:44.131628   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:44.131651   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:44.131666   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:44.210826   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:44.210854   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
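The recurring "connection refused" on localhost:8443 is consistent with those empty listings: the kubeconfig at /var/lib/minikube/kubeconfig points the bundled kubectl at the apiserver's secure port, and nothing is listening there. A quick in-node check (a sketch only, assuming the default port used by the kubeconfig above):

	// checkport.go: verify whether anything listens on the apiserver port.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Matches the kubectl refusal in the log above.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}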
	I0829 20:27:46.754905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:46.769531   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:46.769588   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:46.805245   67607 cri.go:89] found id: ""
	I0829 20:27:46.805272   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.805280   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:46.805285   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:46.805338   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:46.843606   67607 cri.go:89] found id: ""
	I0829 20:27:46.843637   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.843646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:46.843654   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:46.843710   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:46.880300   67607 cri.go:89] found id: ""
	I0829 20:27:46.880326   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.880333   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:46.880338   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:46.880387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:46.923537   67607 cri.go:89] found id: ""
	I0829 20:27:46.923562   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.923569   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:46.923574   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:46.923620   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:46.957774   67607 cri.go:89] found id: ""
	I0829 20:27:46.957806   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.957817   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:46.957826   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:46.957887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:46.996972   67607 cri.go:89] found id: ""
	I0829 20:27:46.996995   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.997005   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:46.997013   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:46.997056   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:47.030560   67607 cri.go:89] found id: ""
	I0829 20:27:47.030588   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.030606   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:47.030612   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:47.030665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:47.068654   67607 cri.go:89] found id: ""
	I0829 20:27:47.068678   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.068686   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:47.068694   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:47.068706   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:47.082335   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:47.082367   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:47.162792   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:47.162817   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:47.162829   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:47.241456   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:47.241491   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:47.282249   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:47.282274   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:49.836268   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:49.850415   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:49.850491   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:49.887816   67607 cri.go:89] found id: ""
	I0829 20:27:49.887843   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.887851   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:49.887856   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:49.887916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:49.923701   67607 cri.go:89] found id: ""
	I0829 20:27:49.923735   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.923745   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:49.923755   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:49.923818   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:49.958197   67607 cri.go:89] found id: ""
	I0829 20:27:49.958225   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.958236   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:49.958244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:49.958313   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:49.995333   67607 cri.go:89] found id: ""
	I0829 20:27:49.995361   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.995373   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:49.995380   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:49.995439   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:50.034345   67607 cri.go:89] found id: ""
	I0829 20:27:50.034375   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.034382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:50.034387   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:50.034438   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:50.070324   67607 cri.go:89] found id: ""
	I0829 20:27:50.070355   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.070365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:50.070374   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:50.070434   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:50.107301   67607 cri.go:89] found id: ""
	I0829 20:27:50.107326   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.107334   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:50.107340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:50.107400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:50.144748   67607 cri.go:89] found id: ""
	I0829 20:27:50.144778   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.144788   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:50.144800   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:50.144816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:50.183576   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:50.183606   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:50.236716   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:50.236750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:50.251589   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:50.251612   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:50.317816   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:50.317840   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:50.317855   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
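Note that the "Gathering logs for ..." order shifts from pass to pass (kubelet first at 20:27:40, dmesg first at 20:27:47, container status first at 20:27:50). That is consistent with the log sources being held in a Go map, whose iteration order is deliberately randomized; a tiny illustration (source names copied from the log, command strings abridged):

	// maporder.go: why the gather order differs between passes.
	package main

	import "fmt"

	func main() {
		sources := map[string]string{
			"kubelet":          "journalctl -u kubelet -n 400",
			"dmesg":            "dmesg ... | tail -n 400",
			"describe nodes":   "kubectl describe nodes",
			"CRI-O":            "journalctl -u crio -n 400",
			"container status": "crictl ps -a",
		}
		for pass := 0; pass < 2; pass++ {
			// Go randomizes map iteration order, so each pass may differ.
			for name := range sources {
				fmt.Println("Gathering logs for", name, "...")
			}
			fmt.Println("--")
		}
	}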
	I0829 20:27:52.894572   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:52.908081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:52.908149   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:52.945272   67607 cri.go:89] found id: ""
	I0829 20:27:52.945299   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.945309   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:52.945317   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:52.945377   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:52.980237   67607 cri.go:89] found id: ""
	I0829 20:27:52.980262   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.980270   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:52.980275   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:52.980325   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:53.017894   67607 cri.go:89] found id: ""
	I0829 20:27:53.017922   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.017929   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:53.017935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:53.017991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:53.052577   67607 cri.go:89] found id: ""
	I0829 20:27:53.052603   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.052611   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:53.052616   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:53.052667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:53.093414   67607 cri.go:89] found id: ""
	I0829 20:27:53.093444   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.093455   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:53.093462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:53.093523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:53.130794   67607 cri.go:89] found id: ""
	I0829 20:27:53.130825   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.130837   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:53.130845   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:53.130902   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:53.163793   67607 cri.go:89] found id: ""
	I0829 20:27:53.163819   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.163827   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:53.163832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:53.163882   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:53.204824   67607 cri.go:89] found id: ""
	I0829 20:27:53.204852   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.204862   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:53.204872   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:53.204885   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:53.243411   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:53.243440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:53.296611   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:53.296642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:53.310909   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:53.310943   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:53.385768   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:53.385790   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:53.385801   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:55.966801   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:55.980852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:55.980933   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:56.017682   67607 cri.go:89] found id: ""
	I0829 20:27:56.017707   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.017716   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:56.017722   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:56.017767   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:56.051556   67607 cri.go:89] found id: ""
	I0829 20:27:56.051584   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.051594   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:56.051600   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:56.051665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:56.095301   67607 cri.go:89] found id: ""
	I0829 20:27:56.095330   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.095340   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:56.095348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:56.095408   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:56.131161   67607 cri.go:89] found id: ""
	I0829 20:27:56.131195   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.131205   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:56.131213   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:56.131269   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:56.166611   67607 cri.go:89] found id: ""
	I0829 20:27:56.166637   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.166645   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:56.166651   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:56.166713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:56.202818   67607 cri.go:89] found id: ""
	I0829 20:27:56.202846   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.202856   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:56.202864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:56.202923   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:56.237855   67607 cri.go:89] found id: ""
	I0829 20:27:56.237883   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.237891   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:56.237897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:56.237955   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:56.272402   67607 cri.go:89] found id: ""
	I0829 20:27:56.272426   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.272433   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:56.272441   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:56.272452   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:56.351628   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:56.351653   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:56.389525   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:56.389559   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:56.444952   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:56.444989   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:56.459731   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:56.459759   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:56.536888   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:59.037744   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:59.051868   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:59.051938   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:59.087436   67607 cri.go:89] found id: ""
	I0829 20:27:59.087461   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.087467   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:59.087474   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:59.087531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:59.123729   67607 cri.go:89] found id: ""
	I0829 20:27:59.123757   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.123765   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:59.123771   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:59.123825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:59.168649   67607 cri.go:89] found id: ""
	I0829 20:27:59.168682   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.168690   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:59.168696   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:59.168753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:59.209770   67607 cri.go:89] found id: ""
	I0829 20:27:59.209791   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.209803   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:59.209808   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:59.209854   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:59.248358   67607 cri.go:89] found id: ""
	I0829 20:27:59.248384   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.248392   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:59.248398   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:59.248445   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:59.281770   67607 cri.go:89] found id: ""
	I0829 20:27:59.281797   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.281805   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:59.281811   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:59.281870   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:59.317255   67607 cri.go:89] found id: ""
	I0829 20:27:59.317285   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.317295   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:59.317302   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:59.317363   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:59.354301   67607 cri.go:89] found id: ""
	I0829 20:27:59.354324   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.354332   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:59.354339   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:59.354352   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:59.438346   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:59.438382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:59.482482   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:59.482513   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:59.540926   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:59.540961   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:59.555221   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:59.555258   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:59.622114   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
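The "describe nodes" step also shows how minikube drives kubectl inside the VM: it invokes a kubectl binary matched to the cluster's Kubernetes version (v1.20.0 here) against the node-local kubeconfig, so the failure above reflects the node's own view of the apiserver. A sketch of how that command line is assembled (paths copied from the log; the helper name is hypothetical):

	// kubectlpath.go: build the in-VM describe-nodes command.
	package main

	import "fmt"

	func describeNodesCmd(k8sVersion string) string {
		bin := fmt.Sprintf("/var/lib/minikube/binaries/%s/kubectl", k8sVersion)
		return fmt.Sprintf("sudo %s describe nodes --kubeconfig=/var/lib/minikube/kubeconfig", bin)
	}

	func main() {
		// Prints the same command seen in the ssh_runner.go lines above.
		fmt.Println(describeNodesCmd("v1.20.0"))
	}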
	I0829 20:28:02.123276   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:02.137435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:02.137502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:02.176310   67607 cri.go:89] found id: ""
	I0829 20:28:02.176340   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.176347   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:02.176355   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:02.176414   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:02.216511   67607 cri.go:89] found id: ""
	I0829 20:28:02.216555   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.216562   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:02.216574   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:02.216625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:02.260116   67607 cri.go:89] found id: ""
	I0829 20:28:02.260149   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.260158   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:02.260164   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:02.260225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:02.301550   67607 cri.go:89] found id: ""
	I0829 20:28:02.301584   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.301600   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:02.301608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:02.301692   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:02.335916   67607 cri.go:89] found id: ""
	I0829 20:28:02.335948   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.335959   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:02.335967   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:02.336033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:02.372479   67607 cri.go:89] found id: ""
	I0829 20:28:02.372507   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.372515   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:02.372522   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:02.372584   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:02.406683   67607 cri.go:89] found id: ""
	I0829 20:28:02.406713   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.406721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:02.406727   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:02.406774   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:02.443130   67607 cri.go:89] found id: ""
	I0829 20:28:02.443156   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.443164   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:02.443173   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:02.443185   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:02.485747   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:02.485777   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:02.540106   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:02.540143   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:02.556158   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:02.556188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:02.637870   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:02.637900   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:02.637915   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:05.220330   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:05.233932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:05.233994   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:05.269046   67607 cri.go:89] found id: ""
	I0829 20:28:05.269072   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.269081   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:05.269087   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:05.269134   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:05.303963   67607 cri.go:89] found id: ""
	I0829 20:28:05.303989   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.303999   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:05.304006   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:05.304065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:05.340943   67607 cri.go:89] found id: ""
	I0829 20:28:05.340975   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.340985   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:05.340992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:05.341061   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:05.379551   67607 cri.go:89] found id: ""
	I0829 20:28:05.379582   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.379593   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:05.379601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:05.379659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:05.414229   67607 cri.go:89] found id: ""
	I0829 20:28:05.414256   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.414267   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:05.414274   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:05.414339   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:05.450212   67607 cri.go:89] found id: ""
	I0829 20:28:05.450241   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.450251   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:05.450258   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:05.450318   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:05.487415   67607 cri.go:89] found id: ""
	I0829 20:28:05.487451   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.487463   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:05.487470   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:05.487529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:05.521347   67607 cri.go:89] found id: ""
	I0829 20:28:05.521370   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.521383   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:05.521390   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:05.521402   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:05.572317   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:05.572350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:05.585651   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:05.585680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:05.653929   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:05.653950   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:05.653969   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:05.732843   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:05.732873   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:08.281983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:08.295104   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:08.295166   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:08.328570   67607 cri.go:89] found id: ""
	I0829 20:28:08.328596   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.328605   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:08.328613   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:08.328684   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:08.363567   67607 cri.go:89] found id: ""
	I0829 20:28:08.363595   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.363605   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:08.363613   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:08.363672   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:08.399619   67607 cri.go:89] found id: ""
	I0829 20:28:08.399645   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.399653   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:08.399659   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:08.399707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:08.439252   67607 cri.go:89] found id: ""
	I0829 20:28:08.439283   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.439294   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:08.439301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:08.439357   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:08.477730   67607 cri.go:89] found id: ""
	I0829 20:28:08.477754   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.477762   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:08.477768   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:08.477834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:08.522045   67607 cri.go:89] found id: ""
	I0829 20:28:08.522066   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.522073   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:08.522079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:08.522137   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:08.560400   67607 cri.go:89] found id: ""
	I0829 20:28:08.560427   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.560434   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:08.560441   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:08.560504   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:08.599111   67607 cri.go:89] found id: ""
	I0829 20:28:08.599140   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.599150   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:08.599161   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:08.599175   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:08.681451   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:08.681487   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:08.722800   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:08.722835   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:08.779058   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:08.779089   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:08.796940   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:08.796963   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:08.868296   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:11.369316   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:11.384150   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:11.384225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:11.418452   67607 cri.go:89] found id: ""
	I0829 20:28:11.418480   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.418488   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:11.418494   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:11.418555   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:11.451359   67607 cri.go:89] found id: ""
	I0829 20:28:11.451389   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.451400   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:11.451408   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:11.451481   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:11.488408   67607 cri.go:89] found id: ""
	I0829 20:28:11.488436   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.488446   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:11.488453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:11.488510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:11.528311   67607 cri.go:89] found id: ""
	I0829 20:28:11.528340   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.528351   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:11.528359   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:11.528412   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:11.571345   67607 cri.go:89] found id: ""
	I0829 20:28:11.571372   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.571382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:11.571389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:11.571454   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:11.606812   67607 cri.go:89] found id: ""
	I0829 20:28:11.606839   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.606850   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:11.606857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:11.606918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:11.652687   67607 cri.go:89] found id: ""
	I0829 20:28:11.652710   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.652717   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:11.652722   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:11.652781   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:11.687583   67607 cri.go:89] found id: ""
	I0829 20:28:11.687628   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.687645   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:11.687655   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:11.687673   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:11.727052   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:11.727086   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:11.779116   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:11.779155   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:11.792911   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:11.792949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:11.868415   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:11.868443   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:11.868461   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:14.447886   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:14.462144   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:14.462221   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:14.499160   67607 cri.go:89] found id: ""
	I0829 20:28:14.499185   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.499193   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:14.499200   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:14.499258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:14.545736   67607 cri.go:89] found id: ""
	I0829 20:28:14.545764   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.545774   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:14.545780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:14.545844   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:14.583626   67607 cri.go:89] found id: ""
	I0829 20:28:14.583664   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.583674   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:14.583682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:14.583744   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:14.619876   67607 cri.go:89] found id: ""
	I0829 20:28:14.619909   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.619917   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:14.619923   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:14.619975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:14.655750   67607 cri.go:89] found id: ""
	I0829 20:28:14.655778   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.655786   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:14.655791   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:14.655848   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:14.690759   67607 cri.go:89] found id: ""
	I0829 20:28:14.690785   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.690795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:14.690800   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:14.690850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:14.727238   67607 cri.go:89] found id: ""
	I0829 20:28:14.727269   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.727282   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:14.727289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:14.727344   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:14.765962   67607 cri.go:89] found id: ""
	I0829 20:28:14.765996   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.766006   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:14.766017   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:14.766033   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:14.835749   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:14.835779   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:14.835797   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:14.914075   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:14.914112   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:14.952684   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:14.952712   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:15.004598   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:15.004635   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
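Each ~3 s block above is one iteration of minikube's apiserver wait loop: probe for a running kube-apiserver process, enumerate CRI containers for every control-plane component, and, finding none, gather the standard log bundle. A minimal manual reproduction of the container probe, assuming a shell inside the guest (for example via "minikube ssh") and using only the crictl flags already visible in the log:

    # Enumerate the same components the loop above checks; an empty result
    # from "crictl ps -a --quiet --name=..." is what produces the
    # 'No container was found matching ...' warnings.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      if [ -z "$ids" ]; then
        echo "no container matching $c"
      else
        echo "$c: $ids"
      fi
    done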
	I0829 20:28:17.518949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:17.532175   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:17.532250   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:17.569943   67607 cri.go:89] found id: ""
	I0829 20:28:17.569971   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.569979   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:17.569985   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:17.570044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:17.605472   67607 cri.go:89] found id: ""
	I0829 20:28:17.605502   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.605510   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:17.605515   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:17.605566   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:17.641568   67607 cri.go:89] found id: ""
	I0829 20:28:17.641593   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.641603   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:17.641610   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:17.641669   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:17.680870   67607 cri.go:89] found id: ""
	I0829 20:28:17.680895   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.680905   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:17.680916   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:17.680981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:17.723546   67607 cri.go:89] found id: ""
	I0829 20:28:17.723576   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.723587   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:17.723594   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:17.723659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:17.757934   67607 cri.go:89] found id: ""
	I0829 20:28:17.757962   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.757973   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:17.757980   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:17.758028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:17.792641   67607 cri.go:89] found id: ""
	I0829 20:28:17.792670   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.792679   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:17.792685   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:17.792738   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:17.830776   67607 cri.go:89] found id: ""
	I0829 20:28:17.830800   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.830807   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:17.830815   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:17.830825   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:17.886331   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:17.886377   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:17.900111   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:17.900135   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:17.969538   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:17.969563   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:17.969577   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:18.050609   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:18.050649   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
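The repeated "connection to the server localhost:8443 was refused" error is a symptom, not a cause: /var/lib/minikube/kubeconfig points kubectl at the apiserver on localhost:8443, and since no kube-apiserver container exists, nothing is listening there. A quick check, assuming the guest image ships iproute2 (substitute "netstat -tlnp" otherwise):

    # Confirm the port is simply unbound before suspecting the kubeconfig.
    sudo ss -tlnp | grep ':8443' \
      || echo 'nothing listening on 8443: apiserver down, kubeconfig fine'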
	I0829 20:28:20.590686   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:20.605066   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:20.605121   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:20.646028   67607 cri.go:89] found id: ""
	I0829 20:28:20.646058   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.646074   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:20.646082   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:20.646143   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:20.683433   67607 cri.go:89] found id: ""
	I0829 20:28:20.683469   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.683479   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:20.683487   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:20.683567   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:20.722737   67607 cri.go:89] found id: ""
	I0829 20:28:20.722765   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.722775   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:20.722782   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:20.722841   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:20.759777   67607 cri.go:89] found id: ""
	I0829 20:28:20.759800   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.759807   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:20.759812   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:20.759864   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:20.799142   67607 cri.go:89] found id: ""
	I0829 20:28:20.799164   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.799170   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:20.799176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:20.799223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:20.838331   67607 cri.go:89] found id: ""
	I0829 20:28:20.838357   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.838365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:20.838371   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:20.838427   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:20.878066   67607 cri.go:89] found id: ""
	I0829 20:28:20.878099   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.878110   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:20.878117   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:20.878175   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:20.928940   67607 cri.go:89] found id: ""
	I0829 20:28:20.928966   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.928975   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:20.928982   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:20.928993   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:20.984435   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:20.984471   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:21.005860   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:21.005900   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:21.084092   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:21.084123   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:21.084138   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:21.165971   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:21.166009   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:23.705033   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:23.718332   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:23.718390   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:23.753594   67607 cri.go:89] found id: ""
	I0829 20:28:23.753625   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.753635   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:23.753650   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:23.753715   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:23.791840   67607 cri.go:89] found id: ""
	I0829 20:28:23.791864   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.791872   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:23.791878   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:23.791930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:23.837815   67607 cri.go:89] found id: ""
	I0829 20:28:23.837839   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.837846   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:23.837851   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:23.837908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:23.873155   67607 cri.go:89] found id: ""
	I0829 20:28:23.873184   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.873194   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:23.873201   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:23.873265   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:23.908728   67607 cri.go:89] found id: ""
	I0829 20:28:23.908757   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.908768   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:23.908774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:23.908834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:23.946286   67607 cri.go:89] found id: ""
	I0829 20:28:23.946310   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.946320   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:23.946328   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:23.946392   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:23.983078   67607 cri.go:89] found id: ""
	I0829 20:28:23.983105   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.983115   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:23.983129   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:23.983190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:24.020601   67607 cri.go:89] found id: ""
	I0829 20:28:24.020634   67607 logs.go:276] 0 containers: []
	W0829 20:28:24.020644   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:24.020654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:24.020669   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:24.034438   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:24.034463   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:24.103209   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:24.103230   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:24.103243   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:24.182977   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:24.183016   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:24.224743   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:24.224834   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:26.781507   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:26.794301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:26.794387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:26.827218   67607 cri.go:89] found id: ""
	I0829 20:28:26.827243   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.827250   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:26.827257   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:26.827303   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:26.862643   67607 cri.go:89] found id: ""
	I0829 20:28:26.862673   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.862685   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:26.862693   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:26.862743   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:26.898127   67607 cri.go:89] found id: ""
	I0829 20:28:26.898159   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.898169   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:26.898177   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:26.898237   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:26.932119   67607 cri.go:89] found id: ""
	I0829 20:28:26.932146   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.932167   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:26.932174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:26.932241   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:26.966380   67607 cri.go:89] found id: ""
	I0829 20:28:26.966413   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.966421   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:26.966427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:26.966478   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:27.004350   67607 cri.go:89] found id: ""
	I0829 20:28:27.004372   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.004379   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:27.004386   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:27.004436   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:27.041171   67607 cri.go:89] found id: ""
	I0829 20:28:27.041199   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.041206   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:27.041212   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:27.041257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:27.073993   67607 cri.go:89] found id: ""
	I0829 20:28:27.074031   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.074041   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:27.074053   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:27.074066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:27.148169   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:27.148199   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:27.148214   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:27.227174   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:27.227212   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:27.267180   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:27.267230   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:27.319034   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:27.319066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
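For reference, the evidence bundle each iteration collects can be gathered by hand with the same commands the log shows, assuming a systemd guest with journalctl (the dmesg flags are util-linux: -P no pager, -H human-readable output, -L=never no color, --level restricts severities):

    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u crio -n 400 > crio.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a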
	I0829 20:28:29.833497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:29.846883   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:29.846951   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:29.884133   67607 cri.go:89] found id: ""
	I0829 20:28:29.884163   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.884175   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:29.884182   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:29.884247   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:29.917594   67607 cri.go:89] found id: ""
	I0829 20:28:29.917618   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.917628   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:29.917636   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:29.917696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:29.952537   67607 cri.go:89] found id: ""
	I0829 20:28:29.952568   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.952576   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:29.952582   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:29.952630   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:29.988410   67607 cri.go:89] found id: ""
	I0829 20:28:29.988441   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.988448   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:29.988454   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:29.988511   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:30.026761   67607 cri.go:89] found id: ""
	I0829 20:28:30.026788   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.026796   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:30.026802   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:30.026861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:30.063010   67607 cri.go:89] found id: ""
	I0829 20:28:30.063037   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.063046   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:30.063054   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:30.063109   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:30.098067   67607 cri.go:89] found id: ""
	I0829 20:28:30.098093   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.098101   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:30.098107   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:30.098161   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:30.132887   67607 cri.go:89] found id: ""
	I0829 20:28:30.132914   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.132921   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:30.132928   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:30.132940   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:30.184955   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:30.184990   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:30.198966   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:30.199004   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:30.268950   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:30.268977   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:30.268991   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:30.354222   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:30.354260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
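The probe that opens each iteration, sudo pgrep -xnf 'kube-apiserver.*minikube.*', is worth decomposing (flags per procps pgrep): -f matches the pattern against the full command line rather than the process name, -x requires the pattern to match that whole line exactly, and -n keeps only the newest matching PID. Its non-zero exit status is what sends the loop back into log gathering:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo 'apiserver process found' \
      || echo 'no apiserver process: retry in ~3 s, as above'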
	I0829 20:28:32.896554   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:32.911188   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:32.911271   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:32.945726   67607 cri.go:89] found id: ""
	I0829 20:28:32.945750   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.945758   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:32.945773   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:32.945829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:32.980234   67607 cri.go:89] found id: ""
	I0829 20:28:32.980267   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.980275   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:32.980281   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:32.980329   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:33.019031   67607 cri.go:89] found id: ""
	I0829 20:28:33.019063   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.019071   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:33.019076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:33.019126   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:33.056290   67607 cri.go:89] found id: ""
	I0829 20:28:33.056314   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.056322   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:33.056327   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:33.056391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:33.090038   67607 cri.go:89] found id: ""
	I0829 20:28:33.090068   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.090078   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:33.090086   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:33.090152   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:33.125742   67607 cri.go:89] found id: ""
	I0829 20:28:33.125774   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.125782   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:33.125787   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:33.125849   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:33.159019   67607 cri.go:89] found id: ""
	I0829 20:28:33.159047   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.159058   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:33.159065   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:33.159125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:33.197900   67607 cri.go:89] found id: ""
	I0829 20:28:33.197925   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.197933   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:33.197941   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:33.197955   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:33.250010   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:33.250040   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:33.263348   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:33.263374   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:33.342037   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:33.342065   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:33.342082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:33.423324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:33.423361   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:35.963734   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:35.978648   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:35.978713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:36.015326   67607 cri.go:89] found id: ""
	I0829 20:28:36.015350   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.015358   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:36.015364   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:36.015411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:36.050840   67607 cri.go:89] found id: ""
	I0829 20:28:36.050869   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.050879   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:36.050886   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:36.050947   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:36.084048   67607 cri.go:89] found id: ""
	I0829 20:28:36.084076   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.084084   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:36.084090   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:36.084138   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:36.118655   67607 cri.go:89] found id: ""
	I0829 20:28:36.118682   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.118693   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:36.118702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:36.118762   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:36.153879   67607 cri.go:89] found id: ""
	I0829 20:28:36.153908   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.153918   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:36.153926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:36.153988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:36.199834   67607 cri.go:89] found id: ""
	I0829 20:28:36.199858   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.199866   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:36.199872   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:36.199927   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:36.238098   67607 cri.go:89] found id: ""
	I0829 20:28:36.238129   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.238139   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:36.238146   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:36.238208   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:36.272091   67607 cri.go:89] found id: ""
	I0829 20:28:36.272124   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.272135   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:36.272146   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:36.272162   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:36.338478   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:36.338498   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:36.338510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:36.418637   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:36.418671   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:36.458167   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:36.458194   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:36.508592   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:36.508630   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:39.022668   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:39.035897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:39.035971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:39.071155   67607 cri.go:89] found id: ""
	I0829 20:28:39.071185   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.071196   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:39.071203   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:39.071258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:39.104135   67607 cri.go:89] found id: ""
	I0829 20:28:39.104177   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.104188   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:39.104206   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:39.104266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:39.138301   67607 cri.go:89] found id: ""
	I0829 20:28:39.138329   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.138339   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:39.138346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:39.138404   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:39.172674   67607 cri.go:89] found id: ""
	I0829 20:28:39.172700   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.172708   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:39.172719   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:39.172779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:39.209810   67607 cri.go:89] found id: ""
	I0829 20:28:39.209836   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.209845   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:39.209852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:39.209915   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:39.248692   67607 cri.go:89] found id: ""
	I0829 20:28:39.248715   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.248722   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:39.248728   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:39.248798   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:39.284303   67607 cri.go:89] found id: ""
	I0829 20:28:39.284333   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.284343   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:39.284351   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:39.284401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:39.321346   67607 cri.go:89] found id: ""
	I0829 20:28:39.321375   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.321386   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:39.321396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:39.321410   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:39.334678   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:39.334710   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:39.421992   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:39.422014   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:39.422027   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:39.503250   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:39.503280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:39.540623   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:39.540654   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.092131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:42.105440   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:42.105498   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:42.140994   67607 cri.go:89] found id: ""
	I0829 20:28:42.141024   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.141034   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:42.141042   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:42.141102   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:42.175182   67607 cri.go:89] found id: ""
	I0829 20:28:42.175217   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.175228   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:42.175248   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:42.175319   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:42.209251   67607 cri.go:89] found id: ""
	I0829 20:28:42.209281   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.209291   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:42.209299   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:42.209362   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:42.247944   67607 cri.go:89] found id: ""
	I0829 20:28:42.247970   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.247977   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:42.247983   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:42.248028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:42.285613   67607 cri.go:89] found id: ""
	I0829 20:28:42.285644   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.285651   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:42.285657   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:42.285722   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:42.319826   67607 cri.go:89] found id: ""
	I0829 20:28:42.319851   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.319858   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:42.319864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:42.319928   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:42.357150   67607 cri.go:89] found id: ""
	I0829 20:28:42.357173   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.357182   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:42.357189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:42.357243   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:42.392150   67607 cri.go:89] found id: ""
	I0829 20:28:42.392170   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.392178   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:42.392185   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:42.392197   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:42.469240   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:42.469271   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:42.469286   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:42.549165   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:42.549198   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:42.591900   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:42.591930   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.642593   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:42.642625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:45.157092   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:45.170832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:45.170916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:45.207210   67607 cri.go:89] found id: ""
	I0829 20:28:45.207235   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.207244   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:45.207251   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:45.207308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:45.245321   67607 cri.go:89] found id: ""
	I0829 20:28:45.245352   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.245362   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:45.245379   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:45.245448   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:45.280326   67607 cri.go:89] found id: ""
	I0829 20:28:45.280369   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.280381   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:45.280389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:45.280451   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:45.318294   67607 cri.go:89] found id: ""
	I0829 20:28:45.318322   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.318333   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:45.318340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:45.318411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:45.352903   67607 cri.go:89] found id: ""
	I0829 20:28:45.352925   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.352932   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:45.352938   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:45.352990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:45.389251   67607 cri.go:89] found id: ""
	I0829 20:28:45.389273   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.389280   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:45.389286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:45.389340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:45.424348   67607 cri.go:89] found id: ""
	I0829 20:28:45.424385   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.424397   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:45.424404   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:45.424453   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:45.459058   67607 cri.go:89] found id: ""
	I0829 20:28:45.459087   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.459098   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:45.459109   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:45.459124   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:45.510386   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:45.510423   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:45.524896   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:45.524923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:45.593987   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:45.594064   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:45.594082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:45.668738   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:45.668771   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:48.206497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:48.219625   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:48.219696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:48.254936   67607 cri.go:89] found id: ""
	I0829 20:28:48.254959   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.254966   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:48.254971   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:48.255018   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:48.290826   67607 cri.go:89] found id: ""
	I0829 20:28:48.290851   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.290859   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:48.290864   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:48.290910   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:48.327508   67607 cri.go:89] found id: ""
	I0829 20:28:48.327533   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.327540   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:48.327546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:48.327593   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:48.364492   67607 cri.go:89] found id: ""
	I0829 20:28:48.364517   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.364525   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:48.364530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:48.364580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:48.400035   67607 cri.go:89] found id: ""
	I0829 20:28:48.400062   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.400072   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:48.400079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:48.400144   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:48.433999   67607 cri.go:89] found id: ""
	I0829 20:28:48.434026   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.434035   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:48.434043   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:48.434104   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:48.468841   67607 cri.go:89] found id: ""
	I0829 20:28:48.468873   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.468889   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:48.468903   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:48.468971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:48.506557   67607 cri.go:89] found id: ""
	I0829 20:28:48.506589   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.506598   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:48.506609   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:48.506624   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:48.577023   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:48.577044   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:48.577056   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:48.654372   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:48.654407   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:48.691125   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:48.691152   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:48.746383   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:48.746414   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
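
[Editor's note] The block above is one full iteration of minikube's apiserver wait loop: it first probes for a kube-apiserver process with pgrep, then asks the container runtime about each control-plane component in turn, and, finding nothing, gathers diagnostics (describe nodes, CRI-O, container status, kubelet, dmesg) before retrying. A minimal sketch of that polling shape in Go — the helper name, the 3-second interval, and the 5-minute deadline are illustrative assumptions, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runningAPIServer mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
    // probe in the log; pgrep exits 0 only when a matching process exists.
    func runningAPIServer() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(5 * time.Minute) // assumed timeout
        for time.Now().Before(deadline) {
            if runningAPIServer() {
                fmt.Println("apiserver is up")
                return
            }
            // The real loop now lists CRI containers per component and
            // gathers logs (kubelet, dmesg, describe nodes, CRI-O, status).
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for apiserver")
    }
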
	I0829 20:28:51.260591   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:51.273911   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:51.273974   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:51.311517   67607 cri.go:89] found id: ""
	I0829 20:28:51.311545   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.311553   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:51.311567   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:51.311616   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:51.348220   67607 cri.go:89] found id: ""
	I0829 20:28:51.348247   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.348256   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:51.348264   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:51.348321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:51.383560   67607 cri.go:89] found id: ""
	I0829 20:28:51.383599   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.383611   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:51.383619   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:51.383680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:51.419241   67607 cri.go:89] found id: ""
	I0829 20:28:51.419268   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.419278   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:51.419286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:51.419343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:51.453954   67607 cri.go:89] found id: ""
	I0829 20:28:51.453979   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.453986   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:51.453992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:51.454047   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:51.489457   67607 cri.go:89] found id: ""
	I0829 20:28:51.489480   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.489488   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:51.489493   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:51.489544   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:51.524072   67607 cri.go:89] found id: ""
	I0829 20:28:51.524100   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.524107   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:51.524113   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:51.524160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:51.561238   67607 cri.go:89] found id: ""
	I0829 20:28:51.561263   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.561271   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:51.561279   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:51.561290   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:51.615422   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:51.615462   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:51.632180   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:51.632216   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:51.704335   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:51.704363   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:51.704378   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:51.794219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:51.794260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
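
[Editor's note] Each "listing CRI containers" / `found id: ""` / `0 containers: []` triplet above comes from `sudo crictl ps -a --quiet --name=<component>`: with --quiet, crictl prints one container ID per line, and -a includes exited containers, so empty output means the component has no container at all, not even a crashed one. A hedged sketch of that probe — only the command line is taken from the log; the listContainers helper is hypothetical:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers runs: sudo crictl ps -a --quiet --name=<component>
    // and returns the container IDs, one per non-empty output line.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        ids := []string{}
        for _, line := range strings.Split(string(out), "\n") {
            if s := strings.TrimSpace(line); s != "" {
                ids = append(ids, s)
            }
        }
        return ids, nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
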
	I0829 20:28:54.342556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:54.356325   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:54.356400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:54.390928   67607 cri.go:89] found id: ""
	I0829 20:28:54.390952   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.390959   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:54.390965   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:54.391011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:54.426970   67607 cri.go:89] found id: ""
	I0829 20:28:54.427002   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.427013   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:54.427020   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:54.427074   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:54.464121   67607 cri.go:89] found id: ""
	I0829 20:28:54.464155   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.464166   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:54.464174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:54.464236   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:54.499790   67607 cri.go:89] found id: ""
	I0829 20:28:54.499816   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.499827   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:54.499840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:54.499889   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:54.537212   67607 cri.go:89] found id: ""
	I0829 20:28:54.537239   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.537249   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:54.537256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:54.537314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:54.575370   67607 cri.go:89] found id: ""
	I0829 20:28:54.575399   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.575410   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:54.575417   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:54.575469   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:54.608403   67607 cri.go:89] found id: ""
	I0829 20:28:54.608432   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.608443   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:54.608453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:54.608514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:54.645259   67607 cri.go:89] found id: ""
	I0829 20:28:54.645285   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.645292   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:54.645300   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:54.645311   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:54.697022   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:54.697063   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:54.712873   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:54.712914   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:54.814253   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:54.814278   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:54.814295   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:54.896473   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:54.896507   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
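
[Editor's note] The recurring "failed describe nodes" block is a direct consequence of the empty container listings: the guest's bundled kubectl (/var/lib/minikube/binaries/v1.20.0/kubectl) reads /var/lib/minikube/kubeconfig, which points at localhost:8443, and with no kube-apiserver running the TCP connection is refused and kubectl exits 1. A sketch of driving that same command and capturing its exit status — the command string is verbatim from the log, the surrounding Go is invented for illustration:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes "+
                "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            // With nothing listening on localhost:8443 the connect is refused,
            // kubectl exits 1, and the caller logs "failed describe nodes".
            fmt.Printf("failed describe nodes: %v\nstderr: %s", err, stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }
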
	I0829 20:28:57.441648   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:57.455245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:57.455321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:57.495365   67607 cri.go:89] found id: ""
	I0829 20:28:57.495397   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.495405   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:57.495411   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:57.495472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:57.529555   67607 cri.go:89] found id: ""
	I0829 20:28:57.529582   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.529590   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:57.529597   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:57.529667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:57.564168   67607 cri.go:89] found id: ""
	I0829 20:28:57.564196   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.564208   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:57.564215   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:57.564277   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:57.602057   67607 cri.go:89] found id: ""
	I0829 20:28:57.602089   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.602100   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:57.602108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:57.602194   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:57.638195   67607 cri.go:89] found id: ""
	I0829 20:28:57.638226   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.638235   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:57.638244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:57.638307   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:57.674556   67607 cri.go:89] found id: ""
	I0829 20:28:57.674605   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.674615   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:57.674623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:57.674680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:57.709256   67607 cri.go:89] found id: ""
	I0829 20:28:57.709282   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.709291   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:57.709298   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:57.709358   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:57.743629   67607 cri.go:89] found id: ""
	I0829 20:28:57.743652   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.743659   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:57.743668   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:57.743679   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:57.789067   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:57.789098   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:57.843372   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:57.843403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:57.858630   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:57.858661   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:57.927776   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:57.927798   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:57.927814   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
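
[Editor's note] The "Gathering logs for ..." lines run a fixed set of shell commands over SSH: journalctl for the kubelet and crio units, a filtered dmesg, and a crictl-with-docker-fallback for container status (the order varies between iterations, but the set does not). The command strings below are copied verbatim from the log; the gather() wrapper is a hypothetical stand-in for ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one diagnostic command and prints whatever comes back,
    // stdout and stderr combined.
    func gather(label, command string) {
        out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
        if err != nil {
            fmt.Printf("%s failed: %v\n", label, err)
        }
        fmt.Printf("== %s ==\n%s", label, out)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("CRI-O", "sudo journalctl -u crio -n 400")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
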
	I0829 20:29:00.508180   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:00.521451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:00.521529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:00.557912   67607 cri.go:89] found id: ""
	I0829 20:29:00.557938   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.557945   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:00.557951   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:00.557997   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:00.595186   67607 cri.go:89] found id: ""
	I0829 20:29:00.595215   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.595226   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:00.595237   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:00.595299   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:00.631553   67607 cri.go:89] found id: ""
	I0829 20:29:00.631581   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.631592   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:00.631600   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:00.631660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:00.666502   67607 cri.go:89] found id: ""
	I0829 20:29:00.666525   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.666551   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:00.666560   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:00.666621   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:00.700797   67607 cri.go:89] found id: ""
	I0829 20:29:00.700824   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.700835   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:00.700842   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:00.700908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:00.739957   67607 cri.go:89] found id: ""
	I0829 20:29:00.739976   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.739989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:00.739994   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:00.740035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:00.800704   67607 cri.go:89] found id: ""
	I0829 20:29:00.800740   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.800750   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:00.800757   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:00.800820   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:00.837678   67607 cri.go:89] found id: ""
	I0829 20:29:00.837704   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.837712   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:00.837720   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:00.837731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:00.888359   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:00.888391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:00.903074   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:00.903103   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:00.964865   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:00.964885   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:00.964898   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:01.049351   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:01.049387   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:03.589829   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:03.603120   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:03.603192   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:03.637647   67607 cri.go:89] found id: ""
	I0829 20:29:03.637672   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.637678   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:03.637684   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:03.637732   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:03.673807   67607 cri.go:89] found id: ""
	I0829 20:29:03.673842   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.673852   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:03.673860   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:03.673918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:03.709490   67607 cri.go:89] found id: ""
	I0829 20:29:03.709516   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.709527   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:03.709533   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:03.709595   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:03.751662   67607 cri.go:89] found id: ""
	I0829 20:29:03.751688   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.751696   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:03.751702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:03.751751   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:03.787861   67607 cri.go:89] found id: ""
	I0829 20:29:03.787896   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.787908   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:03.787917   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:03.787977   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:03.824383   67607 cri.go:89] found id: ""
	I0829 20:29:03.824413   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.824431   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:03.824438   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:03.824499   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:03.863904   67607 cri.go:89] found id: ""
	I0829 20:29:03.863929   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.863937   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:03.863943   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:03.863990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:03.902336   67607 cri.go:89] found id: ""
	I0829 20:29:03.902360   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.902368   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:03.902375   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:03.902386   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:03.951468   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:03.951499   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:03.965789   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:03.965816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:04.035096   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:04.035119   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:04.035193   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:04.115842   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:04.115876   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:06.662652   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:06.676508   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:06.676583   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:06.713058   67607 cri.go:89] found id: ""
	I0829 20:29:06.713084   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.713093   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:06.713101   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:06.713171   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:06.747513   67607 cri.go:89] found id: ""
	I0829 20:29:06.747544   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.747552   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:06.747557   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:06.747617   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:06.782662   67607 cri.go:89] found id: ""
	I0829 20:29:06.782689   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.782695   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:06.782701   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:06.782758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:06.818472   67607 cri.go:89] found id: ""
	I0829 20:29:06.818500   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.818510   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:06.818516   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:06.818586   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:06.852928   67607 cri.go:89] found id: ""
	I0829 20:29:06.852954   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.852964   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:06.852974   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:06.853032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:06.893859   67607 cri.go:89] found id: ""
	I0829 20:29:06.893889   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.893899   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:06.893907   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:06.893969   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:06.931552   67607 cri.go:89] found id: ""
	I0829 20:29:06.931584   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.931594   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:06.931601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:06.931662   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:06.967210   67607 cri.go:89] found id: ""
	I0829 20:29:06.967243   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.967254   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:06.967266   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:06.967279   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:07.020595   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:07.020631   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:07.034738   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:07.034764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:07.103726   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:07.103747   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:07.103760   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:07.184727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:07.184764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:09.746639   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:09.761228   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:09.761308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:09.802071   67607 cri.go:89] found id: ""
	I0829 20:29:09.802102   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.802113   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:09.802122   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:09.802180   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:09.837352   67607 cri.go:89] found id: ""
	I0829 20:29:09.837385   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.837395   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:09.837402   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:09.837464   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:09.874951   67607 cri.go:89] found id: ""
	I0829 20:29:09.874980   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.874992   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:09.874999   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:09.875055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:09.909660   67607 cri.go:89] found id: ""
	I0829 20:29:09.909696   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.909706   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:09.909713   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:09.909777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:09.949727   67607 cri.go:89] found id: ""
	I0829 20:29:09.949751   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.949759   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:09.949765   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:09.949825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:09.984576   67607 cri.go:89] found id: ""
	I0829 20:29:09.984609   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.984617   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:09.984623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:09.984675   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:10.022499   67607 cri.go:89] found id: ""
	I0829 20:29:10.022523   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.022530   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:10.022553   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:10.022624   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:10.064308   67607 cri.go:89] found id: ""
	I0829 20:29:10.064346   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.064356   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:10.064367   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:10.064382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:10.113505   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:10.113537   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:10.127614   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:10.127640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:10.200558   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:10.200579   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:10.200592   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:10.292984   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:10.293020   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:12.833100   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:12.846645   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:12.846712   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:12.885396   67607 cri.go:89] found id: ""
	I0829 20:29:12.885423   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.885430   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:12.885436   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:12.885486   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:12.922556   67607 cri.go:89] found id: ""
	I0829 20:29:12.922584   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.922595   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:12.922602   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:12.922688   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:12.965294   67607 cri.go:89] found id: ""
	I0829 20:29:12.965324   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.965335   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:12.965342   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:12.965401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:13.022911   67607 cri.go:89] found id: ""
	I0829 20:29:13.022934   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.022942   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:13.022948   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:13.023009   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:13.077009   67607 cri.go:89] found id: ""
	I0829 20:29:13.077035   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.077043   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:13.077048   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:13.077095   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:13.114202   67607 cri.go:89] found id: ""
	I0829 20:29:13.114233   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.114243   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:13.114251   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:13.114315   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:13.147025   67607 cri.go:89] found id: ""
	I0829 20:29:13.147049   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.147057   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:13.147063   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:13.147110   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:13.183112   67607 cri.go:89] found id: ""
	I0829 20:29:13.183138   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.183148   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:13.183159   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:13.183173   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:13.240558   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:13.240595   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:13.255563   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:13.255589   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:13.322826   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:13.322846   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:13.322857   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:13.399330   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:13.399365   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:15.938467   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:15.951742   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:15.951812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:15.987492   67607 cri.go:89] found id: ""
	I0829 20:29:15.987517   67607 logs.go:276] 0 containers: []
	W0829 20:29:15.987524   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:15.987530   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:15.987575   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:16.024187   67607 cri.go:89] found id: ""
	I0829 20:29:16.024214   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.024223   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:16.024231   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:16.024291   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:16.058141   67607 cri.go:89] found id: ""
	I0829 20:29:16.058164   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.058171   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:16.058176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:16.058225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:16.092390   67607 cri.go:89] found id: ""
	I0829 20:29:16.092414   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.092421   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:16.092427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:16.092472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:16.130178   67607 cri.go:89] found id: ""
	I0829 20:29:16.130209   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.130219   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:16.130227   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:16.130289   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:16.163867   67607 cri.go:89] found id: ""
	I0829 20:29:16.163900   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.163907   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:16.163913   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:16.163964   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:16.197764   67607 cri.go:89] found id: ""
	I0829 20:29:16.197792   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.197798   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:16.197804   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:16.197850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:16.233357   67607 cri.go:89] found id: ""
	I0829 20:29:16.233383   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.233393   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:16.233403   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:16.233418   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:16.285154   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:16.285188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:16.299057   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:16.299085   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:16.377021   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:16.377041   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:16.377062   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:16.457750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:16.457796   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:18.999133   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:19.016143   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:19.016223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:19.049225   67607 cri.go:89] found id: ""
	I0829 20:29:19.049252   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.049259   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:19.049265   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:19.049317   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:19.085237   67607 cri.go:89] found id: ""
	I0829 20:29:19.085297   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.085314   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:19.085325   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:19.085389   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:19.123476   67607 cri.go:89] found id: ""
	I0829 20:29:19.123501   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.123509   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:19.123514   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:19.123571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:19.159958   67607 cri.go:89] found id: ""
	I0829 20:29:19.159984   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.159993   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:19.160001   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:19.160055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:19.192385   67607 cri.go:89] found id: ""
	I0829 20:29:19.192410   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.192418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:19.192423   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:19.192483   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:19.230781   67607 cri.go:89] found id: ""
	I0829 20:29:19.230804   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.230811   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:19.230816   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:19.230868   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:19.264925   67607 cri.go:89] found id: ""
	I0829 20:29:19.264954   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.264964   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:19.264972   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:19.265032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:19.302461   67607 cri.go:89] found id: ""
	I0829 20:29:19.302484   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.302491   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:19.302499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:19.302510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:19.384799   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:19.384833   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:19.425281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:19.425313   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:19.477380   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:19.477412   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:19.492315   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:19.492350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:19.563428   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:22.064407   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:22.078609   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:22.078670   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:22.112630   67607 cri.go:89] found id: ""
	I0829 20:29:22.112662   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.112672   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:22.112680   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:22.112741   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:22.149078   67607 cri.go:89] found id: ""
	I0829 20:29:22.149108   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.149117   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:22.149124   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:22.149186   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:22.184568   67607 cri.go:89] found id: ""
	I0829 20:29:22.184596   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.184605   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:22.184613   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:22.184682   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:22.220881   67607 cri.go:89] found id: ""
	I0829 20:29:22.220908   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.220919   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:22.220926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:22.220987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:22.256280   67607 cri.go:89] found id: ""
	I0829 20:29:22.256305   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.256314   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:22.256321   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:22.256386   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:22.294546   67607 cri.go:89] found id: ""
	I0829 20:29:22.294580   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.294590   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:22.294597   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:22.294660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:22.332178   67607 cri.go:89] found id: ""
	I0829 20:29:22.332207   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.332215   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:22.332220   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:22.332266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:22.368283   67607 cri.go:89] found id: ""
	I0829 20:29:22.368309   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.368317   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:22.368325   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:22.368336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:22.421800   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:22.421836   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:22.435539   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:22.435565   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:22.504402   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:22.504427   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:22.504441   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:22.588293   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:22.588326   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:25.130766   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:25.144479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:25.144554   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:25.181606   67607 cri.go:89] found id: ""
	I0829 20:29:25.181636   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.181643   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:25.181649   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:25.181697   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:25.220291   67607 cri.go:89] found id: ""
	I0829 20:29:25.220320   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.220328   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:25.220335   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:25.220447   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:25.260947   67607 cri.go:89] found id: ""
	I0829 20:29:25.260975   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.260983   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:25.260988   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:25.261035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:25.298200   67607 cri.go:89] found id: ""
	I0829 20:29:25.298232   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.298243   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:25.298256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:25.298314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:25.333128   67607 cri.go:89] found id: ""
	I0829 20:29:25.333162   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.333174   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:25.333181   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:25.333232   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:25.368951   67607 cri.go:89] found id: ""
	I0829 20:29:25.368979   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.368989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:25.368997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:25.369052   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:25.403687   67607 cri.go:89] found id: ""
	I0829 20:29:25.403715   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.403726   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:25.403734   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:25.403799   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:25.442338   67607 cri.go:89] found id: ""
	I0829 20:29:25.442365   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.442372   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:25.442381   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:25.442395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:25.456313   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:25.456335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:25.528709   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:25.528730   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:25.528744   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:25.609976   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:25.610011   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:25.650044   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:25.650071   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:28.202683   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:28.216971   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:28.217046   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:28.256297   67607 cri.go:89] found id: ""
	I0829 20:29:28.256321   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.256329   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:28.256335   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:28.256379   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:28.289396   67607 cri.go:89] found id: ""
	I0829 20:29:28.289420   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.289427   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:28.289433   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:28.289484   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:28.323589   67607 cri.go:89] found id: ""
	I0829 20:29:28.323616   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.323623   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:28.323630   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:28.323676   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:28.362423   67607 cri.go:89] found id: ""
	I0829 20:29:28.362453   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.362463   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:28.362471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:28.362531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:28.396967   67607 cri.go:89] found id: ""
	I0829 20:29:28.396990   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.396998   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:28.397003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:28.397053   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:28.430714   67607 cri.go:89] found id: ""
	I0829 20:29:28.430744   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.430755   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:28.430762   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:28.430831   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:28.468668   67607 cri.go:89] found id: ""
	I0829 20:29:28.468696   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.468707   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:28.468714   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:28.468777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:28.506678   67607 cri.go:89] found id: ""
	I0829 20:29:28.506705   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.506716   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:28.506727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:28.506741   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:28.545259   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:28.545287   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:28.598249   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:28.598285   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:28.612385   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:28.612429   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:28.685765   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:28.685792   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:28.685806   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:31.270074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:31.284357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:31.284417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:31.319530   67607 cri.go:89] found id: ""
	I0829 20:29:31.319558   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.319566   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:31.319571   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:31.319640   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:31.356826   67607 cri.go:89] found id: ""
	I0829 20:29:31.356856   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.356867   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:31.356880   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:31.356934   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:31.390137   67607 cri.go:89] found id: ""
	I0829 20:29:31.390160   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.390167   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:31.390173   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:31.390219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:31.424939   67607 cri.go:89] found id: ""
	I0829 20:29:31.424972   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.424989   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:31.424997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:31.425054   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:31.460896   67607 cri.go:89] found id: ""
	I0829 20:29:31.460921   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.460928   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:31.460935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:31.460985   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:31.498933   67607 cri.go:89] found id: ""
	I0829 20:29:31.498957   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.498967   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:31.498975   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:31.499044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:31.534953   67607 cri.go:89] found id: ""
	I0829 20:29:31.534985   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.534996   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:31.535003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:31.535065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:31.576248   67607 cri.go:89] found id: ""
	I0829 20:29:31.576273   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.576281   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:31.576291   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:31.576307   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:31.628157   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:31.628196   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:31.641564   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:31.641591   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:31.719949   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:31.719973   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:31.719996   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:31.795682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:31.795716   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:34.333468   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:34.347294   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:34.347370   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:34.384885   67607 cri.go:89] found id: ""
	I0829 20:29:34.384910   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.384921   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:34.384928   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:34.384991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:34.422309   67607 cri.go:89] found id: ""
	I0829 20:29:34.422341   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.422351   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:34.422358   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:34.422417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:34.459800   67607 cri.go:89] found id: ""
	I0829 20:29:34.459826   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.459834   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:34.459840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:34.459905   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:34.495600   67607 cri.go:89] found id: ""
	I0829 20:29:34.495624   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.495633   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:34.495647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:34.495708   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:34.531749   67607 cri.go:89] found id: ""
	I0829 20:29:34.531777   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.531788   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:34.531795   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:34.531856   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:34.571057   67607 cri.go:89] found id: ""
	I0829 20:29:34.571088   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.571098   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:34.571105   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:34.571168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:34.609645   67607 cri.go:89] found id: ""
	I0829 20:29:34.609676   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.609687   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:34.609695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:34.609753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:34.647199   67607 cri.go:89] found id: ""
	I0829 20:29:34.647233   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.647244   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:34.647255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:34.647269   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:34.661390   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:34.661420   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:34.737590   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:34.737613   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:34.737625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:34.820682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:34.820721   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:34.861697   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:34.861723   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:37.412384   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:37.426081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:37.426162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:37.461302   67607 cri.go:89] found id: ""
	I0829 20:29:37.461332   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.461342   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:37.461349   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:37.461416   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:37.500869   67607 cri.go:89] found id: ""
	I0829 20:29:37.500898   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.500908   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:37.500915   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:37.500970   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:37.536908   67607 cri.go:89] found id: ""
	I0829 20:29:37.536932   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.536942   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:37.536949   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:37.537010   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:37.571939   67607 cri.go:89] found id: ""
	I0829 20:29:37.571969   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.571979   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:37.571987   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:37.572048   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:37.607834   67607 cri.go:89] found id: ""
	I0829 20:29:37.607864   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.607883   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:37.607891   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:37.607952   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:37.643932   67607 cri.go:89] found id: ""
	I0829 20:29:37.643963   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.643971   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:37.643978   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:37.644037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:37.678148   67607 cri.go:89] found id: ""
	I0829 20:29:37.678177   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.678188   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:37.678195   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:37.678257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:37.713170   67607 cri.go:89] found id: ""
	I0829 20:29:37.713195   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.713209   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:37.713219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:37.713233   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:37.752538   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:37.752567   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:37.802888   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:37.802923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:37.816546   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:37.816585   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:37.891647   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:37.891667   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:37.891680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:40.472354   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:40.486186   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:40.486252   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:40.520935   67607 cri.go:89] found id: ""
	I0829 20:29:40.520963   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.520971   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:40.520977   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:40.521037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:40.561399   67607 cri.go:89] found id: ""
	I0829 20:29:40.561428   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.561440   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:40.561447   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:40.561514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:40.601821   67607 cri.go:89] found id: ""
	I0829 20:29:40.601846   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.601855   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:40.601862   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:40.601918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:40.636429   67607 cri.go:89] found id: ""
	I0829 20:29:40.636454   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.636462   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:40.636468   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:40.636525   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:40.670781   67607 cri.go:89] found id: ""
	I0829 20:29:40.670816   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.670828   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:40.670836   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:40.670912   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:40.706635   67607 cri.go:89] found id: ""
	I0829 20:29:40.706663   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.706674   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:40.706682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:40.706739   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:40.741657   67607 cri.go:89] found id: ""
	I0829 20:29:40.741687   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.741695   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:40.741707   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:40.741770   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:40.777028   67607 cri.go:89] found id: ""
	I0829 20:29:40.777057   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.777066   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:40.777077   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:40.777093   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:40.829387   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:40.829424   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:40.843928   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:40.843956   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:40.917965   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:40.917992   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:40.918008   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:41.001880   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:41.001925   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:43.549007   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:43.563446   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:43.563502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:43.598503   67607 cri.go:89] found id: ""
	I0829 20:29:43.598548   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.598557   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:43.598564   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:43.598614   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:43.634169   67607 cri.go:89] found id: ""
	I0829 20:29:43.634200   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.634210   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:43.634218   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:43.634280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:43.670467   67607 cri.go:89] found id: ""
	I0829 20:29:43.670492   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.670500   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:43.670506   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:43.670580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:43.706812   67607 cri.go:89] found id: ""
	I0829 20:29:43.706839   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.706849   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:43.706857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:43.706922   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:43.741577   67607 cri.go:89] found id: ""
	I0829 20:29:43.741606   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.741612   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:43.741620   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:43.741700   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:43.776552   67607 cri.go:89] found id: ""
	I0829 20:29:43.776595   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.776625   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:43.776635   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:43.776701   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:43.816229   67607 cri.go:89] found id: ""
	I0829 20:29:43.816264   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.816274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:43.816281   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:43.816346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:43.860726   67607 cri.go:89] found id: ""
	I0829 20:29:43.860753   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.860761   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:43.860768   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:43.860783   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:43.874311   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:43.874340   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:43.952243   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:43.952272   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:43.952288   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:44.032276   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:44.032312   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:44.075537   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:44.075571   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:46.632798   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:46.645878   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:46.645948   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:46.683682   67607 cri.go:89] found id: ""
	I0829 20:29:46.683711   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.683720   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:46.683726   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:46.683775   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:46.727985   67607 cri.go:89] found id: ""
	I0829 20:29:46.728012   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.728024   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:46.728031   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:46.728090   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:46.762142   67607 cri.go:89] found id: ""
	I0829 20:29:46.762166   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.762174   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:46.762180   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:46.762226   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:46.802423   67607 cri.go:89] found id: ""
	I0829 20:29:46.802453   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.802464   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:46.802471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:46.802515   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:46.840382   67607 cri.go:89] found id: ""
	I0829 20:29:46.840411   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.840418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:46.840425   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:46.840473   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:46.878438   67607 cri.go:89] found id: ""
	I0829 20:29:46.878466   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.878476   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:46.878483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:46.878562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:46.913589   67607 cri.go:89] found id: ""
	I0829 20:29:46.913618   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.913625   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:46.913631   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:46.913678   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:46.948894   67607 cri.go:89] found id: ""
	I0829 20:29:46.948922   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.948929   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:46.948938   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:46.948949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:47.005709   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:47.005745   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:47.030316   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:47.030343   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:47.105899   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:47.105920   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:47.105932   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:47.189405   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:47.189442   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:49.727745   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:49.742061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:49.742131   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:49.777428   67607 cri.go:89] found id: ""
	I0829 20:29:49.777456   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.777464   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:49.777471   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:49.777531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:49.811611   67607 cri.go:89] found id: ""
	I0829 20:29:49.811639   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.811646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:49.811653   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:49.811709   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:49.844962   67607 cri.go:89] found id: ""
	I0829 20:29:49.844987   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.844995   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:49.845006   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:49.845062   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:49.880259   67607 cri.go:89] found id: ""
	I0829 20:29:49.880286   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.880297   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:49.880305   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:49.880366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:49.915889   67607 cri.go:89] found id: ""
	I0829 20:29:49.915918   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.915926   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:49.915932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:49.915988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:49.953146   67607 cri.go:89] found id: ""
	I0829 20:29:49.953174   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.953182   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:49.953189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:49.953240   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:49.990689   67607 cri.go:89] found id: ""
	I0829 20:29:49.990721   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.990730   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:49.990738   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:49.990792   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:50.024775   67607 cri.go:89] found id: ""
	I0829 20:29:50.024806   67607 logs.go:276] 0 containers: []
	W0829 20:29:50.024817   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:50.024827   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:50.024842   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:50.079030   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:50.079064   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:50.093178   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:50.093205   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:50.171476   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:50.171499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:50.171512   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:50.252913   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:50.252946   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:52.799818   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:52.812857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:52.812930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:52.850736   67607 cri.go:89] found id: ""
	I0829 20:29:52.850761   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.850770   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:52.850777   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:52.850834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:52.888892   67607 cri.go:89] found id: ""
	I0829 20:29:52.888916   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.888923   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:52.888929   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:52.888975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:52.925390   67607 cri.go:89] found id: ""
	I0829 20:29:52.925418   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.925428   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:52.925435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:52.925501   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:52.960329   67607 cri.go:89] found id: ""
	I0829 20:29:52.960352   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.960360   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:52.960366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:52.960413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:52.994899   67607 cri.go:89] found id: ""
	I0829 20:29:52.994927   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.994935   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:52.994941   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:52.994995   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:53.033028   67607 cri.go:89] found id: ""
	I0829 20:29:53.033057   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.033068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:53.033076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:53.033136   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:53.068353   67607 cri.go:89] found id: ""
	I0829 20:29:53.068381   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.068389   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:53.068394   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:53.068441   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:53.104496   67607 cri.go:89] found id: ""
	I0829 20:29:53.104524   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.104534   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:53.104545   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:53.104560   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:53.175777   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:53.175810   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:53.175827   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:53.257362   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:53.257396   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:53.295822   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:53.295850   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:53.351237   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:53.351263   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:55.864680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:55.879324   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:55.879391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:55.914454   67607 cri.go:89] found id: ""
	I0829 20:29:55.914479   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.914490   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:55.914498   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:55.914592   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:55.953778   67607 cri.go:89] found id: ""
	I0829 20:29:55.953804   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.953814   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:55.953821   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:55.953883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:55.994659   67607 cri.go:89] found id: ""
	I0829 20:29:55.994681   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.994689   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:55.994697   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:55.994768   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:56.031262   67607 cri.go:89] found id: ""
	I0829 20:29:56.031288   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.031299   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:56.031306   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:56.031366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:56.063748   67607 cri.go:89] found id: ""
	I0829 20:29:56.063776   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.063785   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:56.063793   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:56.063883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:56.098024   67607 cri.go:89] found id: ""
	I0829 20:29:56.098060   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.098068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:56.098074   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:56.098127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:56.141340   67607 cri.go:89] found id: ""
	I0829 20:29:56.141364   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.141374   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:56.141381   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:56.141440   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:56.176668   67607 cri.go:89] found id: ""
	I0829 20:29:56.176696   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.176707   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:56.176717   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:56.176731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:56.216294   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:56.216322   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:56.269404   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:56.269440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:56.283134   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:56.283160   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:56.355005   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:56.355023   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:56.355035   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
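The cycle above repeats for the rest of this section: minikube polls the node for a running kube-apiserver process, finds none, enumerates each expected CRI container by name, and gathers kubelet/dmesg/describe-nodes/CRI-O diagnostics before retrying. A minimal, self-contained Go sketch of that poll shape, run locally rather than over SSH; the 3-second interval is inferred from the timestamps, and the helper structure is an illustrative assumption, not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollAPIServer mimics the retry loop seen in the log: look for a running
// kube-apiserver process and, if it is absent, collect basic diagnostics.
// Illustrative local sketch only; minikube runs these commands over SSH.
func pollAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Equivalent of: sudo pgrep -xnf kube-apiserver.*minikube.*
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // apiserver process found
		}
		// No apiserver yet: gather the same diagnostics the log shows.
		for _, args := range [][]string{
			{"crictl", "ps", "-a", "--quiet", "--name=kube-apiserver"},
			{"journalctl", "-u", "kubelet", "-n", "400"},
			{"journalctl", "-u", "crio", "-n", "400"},
		} {
			out, _ := exec.Command("sudo", args...).CombinedOutput()
			fmt.Printf("--- %v ---\n%s\n", args, out)
		}
		time.Sleep(3 * time.Second) // log shows roughly 3s between attempts
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := pollAPIServer(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}

The "connection to the server localhost:8443 was refused" failures in each cycle follow directly from this: kubectl inside the VM targets the local apiserver port, and no apiserver container ever starts.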
	I0829 20:29:58.937406   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:58.950924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:58.950981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:58.986748   67607 cri.go:89] found id: ""
	I0829 20:29:58.986778   67607 logs.go:276] 0 containers: []
	W0829 20:29:58.986788   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:58.986795   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:58.986861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:59.023737   67607 cri.go:89] found id: ""
	I0829 20:29:59.023763   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.023773   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:59.023780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:59.023840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:59.060245   67607 cri.go:89] found id: ""
	I0829 20:29:59.060274   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.060284   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:59.060291   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:59.060352   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:59.102467   67607 cri.go:89] found id: ""
	I0829 20:29:59.102493   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.102501   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:59.102507   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:59.102581   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:59.142601   67607 cri.go:89] found id: ""
	I0829 20:29:59.142625   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.142634   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:59.142647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:59.142717   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:59.186683   67607 cri.go:89] found id: ""
	I0829 20:29:59.186707   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.186715   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:59.186723   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:59.186783   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:59.232104   67607 cri.go:89] found id: ""
	I0829 20:29:59.232136   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.232154   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:59.232162   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:59.232227   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:59.276416   67607 cri.go:89] found id: ""
	I0829 20:29:59.276442   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.276452   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:59.276462   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:59.276479   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:59.341741   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:59.341779   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:59.357312   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:59.357336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:59.425653   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:59.425674   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:59.425689   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:59.505365   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:59.505403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:02.049195   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:02.064558   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:02.064641   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:02.102141   67607 cri.go:89] found id: ""
	I0829 20:30:02.102188   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.102209   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:02.102217   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:02.102282   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:02.138610   67607 cri.go:89] found id: ""
	I0829 20:30:02.138640   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.138650   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:02.138658   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:02.138724   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:02.175391   67607 cri.go:89] found id: ""
	I0829 20:30:02.175423   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.175435   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:02.175442   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:02.175505   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:02.212956   67607 cri.go:89] found id: ""
	I0829 20:30:02.212981   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.212991   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:02.212998   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:02.213059   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:02.254444   67607 cri.go:89] found id: ""
	I0829 20:30:02.254467   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.254475   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:02.254481   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:02.254568   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:02.293232   67607 cri.go:89] found id: ""
	I0829 20:30:02.293260   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.293270   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:02.293277   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:02.293348   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:02.328300   67607 cri.go:89] found id: ""
	I0829 20:30:02.328329   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.328339   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:02.328346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:02.328407   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:02.363467   67607 cri.go:89] found id: ""
	I0829 20:30:02.363495   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.363505   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:02.363514   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:02.363528   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:02.414357   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:02.414394   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:02.428229   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:02.428259   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:02.503640   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:02.503661   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:02.503674   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:02.584052   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:02.584087   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:05.124345   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:05.143530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:05.143594   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:05.195985   67607 cri.go:89] found id: ""
	I0829 20:30:05.196014   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.196024   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:05.196032   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:05.196092   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:05.254315   67607 cri.go:89] found id: ""
	I0829 20:30:05.254343   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.254354   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:05.254362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:05.254432   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:05.306756   67607 cri.go:89] found id: ""
	I0829 20:30:05.306781   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.306788   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:05.306794   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:05.306852   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:05.345200   67607 cri.go:89] found id: ""
	I0829 20:30:05.345225   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.345235   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:05.345242   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:05.345297   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:05.384038   67607 cri.go:89] found id: ""
	I0829 20:30:05.384064   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.384074   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:05.384081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:05.384140   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:05.420177   67607 cri.go:89] found id: ""
	I0829 20:30:05.420201   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.420208   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:05.420214   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:05.420260   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:05.453492   67607 cri.go:89] found id: ""
	I0829 20:30:05.453513   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.453521   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:05.453526   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:05.453573   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:05.491591   67607 cri.go:89] found id: ""
	I0829 20:30:05.491618   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.491628   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:05.491638   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:05.491701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:05.580458   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:05.580503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:05.620137   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:05.620169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:05.672137   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:05.672177   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:05.685946   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:05.685973   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:05.755176   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:08.256255   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:08.269099   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:08.269160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:08.302552   67607 cri.go:89] found id: ""
	I0829 20:30:08.302578   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.302585   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:08.302591   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:08.302639   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:08.340683   67607 cri.go:89] found id: ""
	I0829 20:30:08.340711   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.340718   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:08.340726   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:08.340778   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:08.387389   67607 cri.go:89] found id: ""
	I0829 20:30:08.387416   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.387424   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:08.387430   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:08.387477   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:08.421303   67607 cri.go:89] found id: ""
	I0829 20:30:08.421330   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.421340   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:08.421348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:08.421409   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:08.458648   67607 cri.go:89] found id: ""
	I0829 20:30:08.458677   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.458688   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:08.458695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:08.458758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:08.498748   67607 cri.go:89] found id: ""
	I0829 20:30:08.498776   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.498784   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:08.498790   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:08.498845   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:08.536859   67607 cri.go:89] found id: ""
	I0829 20:30:08.536889   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.536896   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:08.536902   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:08.536963   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:08.570685   67607 cri.go:89] found id: ""
	I0829 20:30:08.570713   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.570723   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:08.570734   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:08.570748   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:08.621904   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:08.621938   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:08.636367   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:08.636391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:08.703796   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:08.703824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:08.703838   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:08.785084   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:08.785120   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:11.326633   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:11.339570   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:11.339637   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:11.374132   67607 cri.go:89] found id: ""
	I0829 20:30:11.374155   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.374163   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:11.374169   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:11.374234   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:11.409004   67607 cri.go:89] found id: ""
	I0829 20:30:11.409036   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.409047   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:11.409054   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:11.409119   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:11.444598   67607 cri.go:89] found id: ""
	I0829 20:30:11.444625   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.444635   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:11.444643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:11.444704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:11.481912   67607 cri.go:89] found id: ""
	I0829 20:30:11.481942   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.481953   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:11.481961   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:11.482025   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:11.516436   67607 cri.go:89] found id: ""
	I0829 20:30:11.516466   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.516477   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:11.516483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:11.516536   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:11.554762   67607 cri.go:89] found id: ""
	I0829 20:30:11.554787   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.554795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:11.554801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:11.554857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:11.588902   67607 cri.go:89] found id: ""
	I0829 20:30:11.588931   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.588942   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:11.588950   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:11.589011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:11.621346   67607 cri.go:89] found id: ""
	I0829 20:30:11.621368   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.621376   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:11.621383   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:11.621395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:11.659671   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:11.659703   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:11.711288   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:11.711315   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:11.725285   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:11.725310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:11.801713   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:11.801735   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:11.801750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:14.382313   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:14.395852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:14.395926   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:14.438735   67607 cri.go:89] found id: ""
	I0829 20:30:14.438762   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.438772   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:14.438778   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:14.438840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:14.477886   67607 cri.go:89] found id: ""
	I0829 20:30:14.477928   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.477937   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:14.477943   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:14.478000   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:14.517627   67607 cri.go:89] found id: ""
	I0829 20:30:14.517654   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.517664   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:14.517670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:14.517734   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:14.557247   67607 cri.go:89] found id: ""
	I0829 20:30:14.557272   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.557280   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:14.557286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:14.557345   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:14.591364   67607 cri.go:89] found id: ""
	I0829 20:30:14.591388   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.591398   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:14.591406   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:14.591468   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:14.627517   67607 cri.go:89] found id: ""
	I0829 20:30:14.627539   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.627546   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:14.627551   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:14.627604   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:14.662388   67607 cri.go:89] found id: ""
	I0829 20:30:14.662409   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.662419   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:14.662432   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:14.662488   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:14.695277   67607 cri.go:89] found id: ""
	I0829 20:30:14.695307   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.695316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:14.695324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:14.695335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:14.735824   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:14.735852   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:14.792607   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:14.792642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:14.808881   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:14.808910   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:14.879804   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:14.879824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:14.879837   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:17.459817   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:17.474813   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:17.474887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:17.509885   67607 cri.go:89] found id: ""
	I0829 20:30:17.509913   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.509923   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:17.509930   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:17.509987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:17.543931   67607 cri.go:89] found id: ""
	I0829 20:30:17.543959   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.543968   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:17.543973   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:17.544021   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:17.580944   67607 cri.go:89] found id: ""
	I0829 20:30:17.580972   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.580980   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:17.580986   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:17.581033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:17.620061   67607 cri.go:89] found id: ""
	I0829 20:30:17.620088   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.620097   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:17.620103   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:17.620148   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:17.658675   67607 cri.go:89] found id: ""
	I0829 20:30:17.658706   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.658717   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:17.658724   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:17.658788   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:17.694424   67607 cri.go:89] found id: ""
	I0829 20:30:17.694453   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.694462   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:17.694467   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:17.694571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:17.727425   67607 cri.go:89] found id: ""
	I0829 20:30:17.727450   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.727456   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:17.727462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:17.727510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:17.767915   67607 cri.go:89] found id: ""
	I0829 20:30:17.767946   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.767956   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:17.767965   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:17.767977   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:17.837556   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:17.837580   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:17.837593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:17.921601   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:17.921638   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:17.960999   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:17.961026   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:18.013654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:18.013691   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:20.528244   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:20.542116   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:20.542190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:20.578905   67607 cri.go:89] found id: ""
	I0829 20:30:20.578936   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.578947   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:20.578954   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:20.579003   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:20.613543   67607 cri.go:89] found id: ""
	I0829 20:30:20.613567   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.613574   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:20.613579   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:20.613627   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:20.649322   67607 cri.go:89] found id: ""
	I0829 20:30:20.649344   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.649352   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:20.649366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:20.649429   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:20.684851   67607 cri.go:89] found id: ""
	I0829 20:30:20.684878   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.684886   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:20.684892   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:20.684950   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:20.722016   67607 cri.go:89] found id: ""
	I0829 20:30:20.722045   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.722054   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:20.722062   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:20.722125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:20.757594   67607 cri.go:89] found id: ""
	I0829 20:30:20.757626   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.757637   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:20.757644   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:20.757707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:20.793694   67607 cri.go:89] found id: ""
	I0829 20:30:20.793728   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.793738   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:20.793746   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:20.793812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:20.829709   67607 cri.go:89] found id: ""
	I0829 20:30:20.829736   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.829747   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:20.829758   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:20.829782   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:20.888838   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:20.888888   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:20.903530   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:20.903556   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:20.972460   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:20.972488   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:20.972503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:21.055556   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:21.055593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:23.597355   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:23.611091   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:23.611162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:23.649469   67607 cri.go:89] found id: ""
	I0829 20:30:23.649493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.649501   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:23.649510   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:23.649562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:23.684530   67607 cri.go:89] found id: ""
	I0829 20:30:23.684554   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.684561   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:23.684571   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:23.684625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:23.720466   67607 cri.go:89] found id: ""
	I0829 20:30:23.720493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.720503   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:23.720510   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:23.720563   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:23.755013   67607 cri.go:89] found id: ""
	I0829 20:30:23.755042   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.755053   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:23.755061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:23.755127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:23.795212   67607 cri.go:89] found id: ""
	I0829 20:30:23.795243   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.795254   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:23.795263   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:23.795320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:23.832912   67607 cri.go:89] found id: ""
	I0829 20:30:23.832941   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.832951   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:23.832959   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:23.833015   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:23.869896   67607 cri.go:89] found id: ""
	I0829 20:30:23.869930   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.869939   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:23.869947   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:23.870011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:23.908111   67607 cri.go:89] found id: ""
	I0829 20:30:23.908136   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.908145   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:23.908155   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:23.908170   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:23.988489   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:23.988510   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:23.988525   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:24.063246   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:24.063280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:24.102943   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:24.102974   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:24.157255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:24.157294   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:26.671966   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:26.684755   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:26.684830   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:26.721125   67607 cri.go:89] found id: ""
	I0829 20:30:26.721150   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.721158   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:26.721164   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:26.721219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:26.756328   67607 cri.go:89] found id: ""
	I0829 20:30:26.756349   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.756356   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:26.756362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:26.756420   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:26.791711   67607 cri.go:89] found id: ""
	I0829 20:30:26.791751   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.791763   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:26.791774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:26.791857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:26.827215   67607 cri.go:89] found id: ""
	I0829 20:30:26.827244   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.827254   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:26.827261   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:26.827321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:26.863461   67607 cri.go:89] found id: ""
	I0829 20:30:26.863486   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.863497   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:26.863505   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:26.863569   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:26.900037   67607 cri.go:89] found id: ""
	I0829 20:30:26.900065   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.900075   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:26.900083   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:26.900139   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:26.937236   67607 cri.go:89] found id: ""
	I0829 20:30:26.937263   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.937274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:26.937282   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:26.937340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:26.970281   67607 cri.go:89] found id: ""
	I0829 20:30:26.970312   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.970322   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:26.970332   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:26.970345   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:27.041485   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:27.041511   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:27.041526   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:27.120774   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:27.120807   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:27.159656   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:27.159685   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:27.213322   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:27.213356   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:29.729066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:29.742044   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:29.742099   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:29.777426   67607 cri.go:89] found id: ""
	I0829 20:30:29.777454   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.777462   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:29.777468   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:29.777529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:29.814353   67607 cri.go:89] found id: ""
	I0829 20:30:29.814381   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.814392   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:29.814401   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:29.814462   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:29.853754   67607 cri.go:89] found id: ""
	I0829 20:30:29.853783   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.853793   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:29.853801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:29.853869   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:29.893966   67607 cri.go:89] found id: ""
	I0829 20:30:29.893991   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.893998   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:29.894003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:29.894057   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:29.929452   67607 cri.go:89] found id: ""
	I0829 20:30:29.929483   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.929492   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:29.929502   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:29.929561   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:29.965880   67607 cri.go:89] found id: ""
	I0829 20:30:29.965906   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.965916   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:29.965924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:29.965986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:30.002192   67607 cri.go:89] found id: ""
	I0829 20:30:30.002226   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.002237   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:30.002245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:30.002320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:30.037603   67607 cri.go:89] found id: ""
	I0829 20:30:30.037640   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.037651   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:30.037662   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:30.037677   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:30.094128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:30.094168   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:30.110667   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:30.110701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:30.188355   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:30.188375   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:30.188388   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:30.270750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:30.270785   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
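The log gathering above can be reproduced by hand on the node; a minimal sketch, assuming SSH access to the minikube VM (each command mirrors a Run: line above):

	# kubelet and CRI-O service logs (last 400 lines each)
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# kernel warnings and errors
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# all CRI containers, falling back to docker if crictl is missing
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a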
	I0829 20:30:32.809472   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:32.823099   67607 kubeadm.go:597] duration metric: took 4m3.15684598s to restartPrimaryControlPlane
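The probe minikube uses here to look for a live apiserver process is runnable as-is; a non-zero exit means no matching process was found:

	# -x exact match, -n newest, -f match the full command line
	sudo pgrep -xnf kube-apiserver.*minikube.*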
	W0829 20:30:32.823188   67607 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:30:32.823224   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:30:33.322987   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:30:33.338134   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:30:33.348586   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:30:33.358672   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:30:33.358692   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:30:33.358748   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:30:33.367955   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:30:33.368000   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:30:33.377565   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:30:33.386317   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:30:33.386377   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:30:33.396356   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.406228   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:30:33.406281   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.418323   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:30:33.427595   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:30:33.427657   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
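The stale-config cleanup above applies one pattern per kubeconfig file: grep for the expected control-plane endpoint and delete the file when it is absent or does not match. A minimal sketch of that loop, assuming the same four paths:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done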
	I0829 20:30:33.437520   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:30:33.511159   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:30:33.511279   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:30:33.669988   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:30:33.670133   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:30:33.670267   67607 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:30:33.859908   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:30:33.861742   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:30:33.861849   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:30:33.861946   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:30:33.862075   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:30:33.862174   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:30:33.862276   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:30:33.862366   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:30:33.862467   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:30:33.862573   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:30:33.862794   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:30:33.863226   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:30:33.863323   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:30:33.863417   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:30:34.065914   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:30:34.235581   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:30:34.660452   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:30:34.724718   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:30:34.743897   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:30:34.746263   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:30:34.746369   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:30:34.893824   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:30:34.895805   67607 out.go:235]   - Booting up control plane ...
	I0829 20:30:34.895941   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:30:34.904294   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:30:34.915103   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:30:34.915744   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:30:34.917923   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:31:14.919490   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:31:14.920124   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:14.920395   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:19.920740   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:19.920993   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:29.921355   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:29.921591   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:49.922318   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:49.922554   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:32:29.924469   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:32:29.924707   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:32:29.924729   67607 kubeadm.go:310] 
	I0829 20:32:29.924801   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:32:29.924855   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:32:29.924865   67607 kubeadm.go:310] 
	I0829 20:32:29.924912   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:32:29.924960   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:32:29.925080   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:32:29.925090   67607 kubeadm.go:310] 
	I0829 20:32:29.925207   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:32:29.925256   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:32:29.925316   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:32:29.925342   67607 kubeadm.go:310] 
	I0829 20:32:29.925493   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:32:29.925616   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0829 20:32:29.925627   67607 kubeadm.go:310] 
	I0829 20:32:29.925776   67607 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:32:29.925909   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:32:29.926016   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:32:29.926134   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:32:29.926154   67607 kubeadm.go:310] 
	I0829 20:32:29.926605   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:32:29.926723   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:32:29.926812   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
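The health probe that the kubelet-check above keeps retrying can be run directly on the node, along with the checks the error text suggests; a short sketch, assuming the default healthz port 10248:

	# the same probe kubeadm performs on each retry
	curl -sSL http://localhost:10248/healthz
	# if the connection is refused, inspect the service itself
	systemctl status kubelet
	journalctl -xeu kubelet
	# the preflight warning also notes the service is not enabled at boot
	sudo systemctl enable kubelet.service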
	W0829 20:32:29.926935   67607 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0829 20:32:29.926979   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
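Before retrying init, minikube tears the failed control plane down with the reset invocation from the Run: line above; it is runnable as-is on the node:

	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm reset --cri-socket /var/run/crio/crio.sock --force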
	I0829 20:32:30.389951   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:30.408455   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:32:30.418493   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:32:30.418513   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:32:30.418582   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:32:30.427909   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:32:30.427957   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:32:30.437122   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:32:30.446157   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:32:30.446203   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:32:30.455480   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.464781   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:32:30.464834   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.474607   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:32:30.484537   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:32:30.484601   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:32:30.494170   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:32:30.717349   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:34:26.784436   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:34:26.784518   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 20:34:26.786158   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:34:26.786196   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:34:26.786276   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:34:26.786353   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:34:26.786437   67607 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:34:26.786486   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:34:26.788271   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:34:26.788380   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:34:26.788453   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:34:26.788523   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:34:26.788593   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:34:26.788665   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:34:26.788714   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:34:26.788769   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:34:26.788826   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:34:26.788894   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:34:26.788961   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:34:26.788993   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:34:26.789044   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:34:26.789084   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:34:26.789143   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:34:26.789228   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:34:26.789312   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:34:26.789441   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:34:26.789577   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:34:26.789647   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:34:26.789717   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:34:26.791166   67607 out.go:235]   - Booting up control plane ...
	I0829 20:34:26.791239   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:34:26.791305   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:34:26.791382   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:34:26.791465   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:34:26.791597   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:34:26.791658   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:34:26.791736   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.791926   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792008   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792182   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792254   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792435   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792492   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792725   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792798   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.793026   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.793043   67607 kubeadm.go:310] 
	I0829 20:34:26.793091   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:34:26.793148   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:34:26.793159   67607 kubeadm.go:310] 
	I0829 20:34:26.793188   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:34:26.793219   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:34:26.793305   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:34:26.793314   67607 kubeadm.go:310] 
	I0829 20:34:26.793438   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:34:26.793483   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:34:26.793515   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:34:26.793522   67607 kubeadm.go:310] 
	I0829 20:34:26.793618   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:34:26.793735   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0829 20:34:26.793748   67607 kubeadm.go:310] 
	I0829 20:34:26.793895   67607 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:34:26.794020   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:34:26.794125   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:34:26.794227   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:34:26.794285   67607 kubeadm.go:310] 
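The container inspection the hints above describe, spelled out against the CRI-O socket from this run (CONTAINERID is the placeholder from the hint; run with sudo on the node):

	# list kube containers, excluding pause sandboxes
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then read the failing container's logs
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID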
	I0829 20:34:26.794300   67607 kubeadm.go:394] duration metric: took 7m57.183485424s to StartCluster
	I0829 20:34:26.794357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:34:26.794410   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:34:26.837033   67607 cri.go:89] found id: ""
	I0829 20:34:26.837072   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.837083   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:34:26.837091   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:34:26.837153   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:34:26.871177   67607 cri.go:89] found id: ""
	I0829 20:34:26.871203   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.871213   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:34:26.871220   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:34:26.871280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:34:26.905409   67607 cri.go:89] found id: ""
	I0829 20:34:26.905432   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.905442   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:34:26.905450   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:34:26.905509   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:34:26.940119   67607 cri.go:89] found id: ""
	I0829 20:34:26.940150   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.940161   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:34:26.940169   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:34:26.940217   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:34:26.974555   67607 cri.go:89] found id: ""
	I0829 20:34:26.974589   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.974601   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:34:26.974608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:34:26.974674   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:34:27.010586   67607 cri.go:89] found id: ""
	I0829 20:34:27.010616   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.010631   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:34:27.010639   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:34:27.010704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:34:27.044867   67607 cri.go:89] found id: ""
	I0829 20:34:27.044900   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.044913   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:34:27.044921   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:34:27.044979   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:34:27.079282   67607 cri.go:89] found id: ""
	I0829 20:34:27.079308   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.079316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:34:27.079323   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:34:27.079335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:34:27.093455   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:34:27.093485   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:34:27.179256   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:34:27.179280   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:34:27.179292   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:34:27.305873   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:34:27.305906   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:34:27.349676   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:34:27.349702   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 20:34:27.399787   67607 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 20:34:27.399851   67607 out.go:270] * 
	W0829 20:34:27.399907   67607 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 20:34:27.399919   67607 out.go:270] * 
	W0829 20:34:27.400631   67607 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 20:34:27.403773   67607 out.go:201] 
	W0829 20:34:27.404902   67607 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 20:34:27.404953   67607 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 20:34:27.404981   67607 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 20:34:27.406310   67607 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-032002 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
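The exit path above matches the log's own diagnosis: the kubelet never answered on localhost:10248, so kubeadm timed out in wait-control-plane. A minimal triage sketch, assembled from the commands the log itself recommends (the 'minikube ssh' wrapper is an assumption of mine for running them inside the VM; the profile name and start flags are copied verbatim from the failed invocation above):

	# Check whether the kubelet is running in the VM and why it may have exited:
	out/minikube-linux-amd64 -p old-k8s-version-032002 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-032002 ssh "sudo journalctl -xeu kubelet"
	# List control-plane containers via crictl, per the kubeadm advice quoted above:
	out/minikube-linux-amd64 -p old-k8s-version-032002 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry with the cgroup-driver override the suggestion points at (unverified for this failure):
	out/minikube-linux-amd64 start -p old-k8s-version-032002 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd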
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002: exit status 2 (234.521048ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-032002 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-032002 logs -n 25: (1.560354858s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-397724                                   | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-388383            | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC | 29 Aug 24 20:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-695305             | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-695305                  | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-695305 --memory=2200 --alsologtostderr   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:20 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-695305 image list                           | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:21 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-032002        | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-397724                  | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-397724                                   | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-388383                 | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-145096  | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-032002             | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-145096       | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC | 29 Aug 24 20:31 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 20:24:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 20:24:16.618808   68084 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:24:16.619043   68084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:24:16.619051   68084 out.go:358] Setting ErrFile to fd 2...
	I0829 20:24:16.619055   68084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:24:16.619206   68084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:24:16.619741   68084 out.go:352] Setting JSON to false
	I0829 20:24:16.620649   68084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7604,"bootTime":1724955453,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 20:24:16.620702   68084 start.go:139] virtualization: kvm guest
	I0829 20:24:16.622891   68084 out.go:177] * [default-k8s-diff-port-145096] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 20:24:16.624228   68084 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 20:24:16.624256   68084 notify.go:220] Checking for updates...
	I0829 20:24:16.627123   68084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 20:24:16.628611   68084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:24:16.629858   68084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:24:16.631013   68084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 20:24:16.632116   68084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 20:24:16.633630   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:24:16.634042   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:24:16.634080   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:24:16.648879   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0829 20:24:16.649315   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:24:16.649875   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:24:16.649893   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:24:16.650274   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:24:16.650504   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:24:16.650776   68084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 20:24:16.651053   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:24:16.651111   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:24:16.665964   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0829 20:24:16.666402   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:24:16.666918   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:24:16.666937   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:24:16.667250   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:24:16.667435   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:24:16.698712   68084 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 20:24:16.700010   68084 start.go:297] selected driver: kvm2
	I0829 20:24:16.700023   68084 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:24:16.700131   68084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 20:24:16.700915   68084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:24:16.700998   68084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 20:24:16.715940   68084 install.go:137] /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0829 20:24:16.716321   68084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:24:16.716388   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:24:16.716405   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:24:16.716452   68084 start.go:340] cluster config:
	{Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:24:16.716563   68084 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:24:16.718175   68084 out.go:177] * Starting "default-k8s-diff-port-145096" primary control-plane node in "default-k8s-diff-port-145096" cluster
	I0829 20:24:16.258820   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:16.719204   68084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:24:16.719231   68084 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 20:24:16.719237   68084 cache.go:56] Caching tarball of preloaded images
	I0829 20:24:16.719296   68084 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 20:24:16.719305   68084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 20:24:16.719385   68084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/config.json ...
	I0829 20:24:16.719549   68084 start.go:360] acquireMachinesLock for default-k8s-diff-port-145096: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:24:22.338805   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:25.410778   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:31.490844   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:34.562885   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:40.642793   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:43.714939   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:49.794765   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:52.866858   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:58.946771   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:02.018832   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:08.098829   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:11.170833   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:17.250794   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:20.322926   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:26.402827   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:29.474844   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:35.554771   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:38.626850   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:41.630257   66989 start.go:364] duration metric: took 4m26.950412835s to acquireMachinesLock for "embed-certs-388383"
	I0829 20:25:41.630308   66989 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:25:41.630316   66989 fix.go:54] fixHost starting: 
	I0829 20:25:41.630791   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:25:41.630828   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:25:41.646005   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32873
	I0829 20:25:41.646405   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:25:41.646932   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:25:41.646959   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:25:41.647308   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:25:41.647525   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:25:41.647686   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:25:41.649457   66989 fix.go:112] recreateIfNeeded on embed-certs-388383: state=Stopped err=<nil>
	I0829 20:25:41.649491   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	W0829 20:25:41.649639   66989 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:25:41.651109   66989 out.go:177] * Restarting existing kvm2 VM for "embed-certs-388383" ...
	I0829 20:25:41.627651   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:25:41.627705   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:25:41.628067   66841 buildroot.go:166] provisioning hostname "no-preload-397724"
	I0829 20:25:41.628089   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:25:41.628259   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:25:41.630106   66841 machine.go:96] duration metric: took 4m35.46951337s to provisionDockerMachine
	I0829 20:25:41.630148   66841 fix.go:56] duration metric: took 4m35.494271139s for fixHost
	I0829 20:25:41.630159   66841 start.go:83] releasing machines lock for "no-preload-397724", held for 4m35.494325078s
	W0829 20:25:41.630182   66841 start.go:714] error starting host: provision: host is not running
	W0829 20:25:41.630284   66841 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0829 20:25:41.630295   66841 start.go:729] Will try again in 5 seconds ...
	I0829 20:25:41.652159   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Start
	I0829 20:25:41.652318   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring networks are active...
	I0829 20:25:41.653011   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring network default is active
	I0829 20:25:41.653426   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring network mk-embed-certs-388383 is active
	I0829 20:25:41.653824   66989 main.go:141] libmachine: (embed-certs-388383) Getting domain xml...
	I0829 20:25:41.654765   66989 main.go:141] libmachine: (embed-certs-388383) Creating domain...
	I0829 20:25:42.860512   66989 main.go:141] libmachine: (embed-certs-388383) Waiting to get IP...
	I0829 20:25:42.861297   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:42.861661   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:42.861739   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:42.861649   68412 retry.go:31] will retry after 207.172422ms: waiting for machine to come up
	I0829 20:25:43.070026   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.070414   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.070445   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.070368   68412 retry.go:31] will retry after 336.815982ms: waiting for machine to come up
	I0829 20:25:43.408817   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.409144   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.409182   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.409117   68412 retry.go:31] will retry after 330.159156ms: waiting for machine to come up
	I0829 20:25:43.740518   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.741039   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.741065   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.741002   68412 retry.go:31] will retry after 528.906592ms: waiting for machine to come up
	I0829 20:25:44.271695   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:44.272286   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:44.272344   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:44.272280   68412 retry.go:31] will retry after 616.92568ms: waiting for machine to come up
	I0829 20:25:46.631383   66841 start.go:360] acquireMachinesLock for no-preload-397724: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:25:44.891133   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:44.891535   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:44.891566   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:44.891499   68412 retry.go:31] will retry after 907.330558ms: waiting for machine to come up
	I0829 20:25:45.800480   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:45.800858   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:45.800885   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:45.800840   68412 retry.go:31] will retry after 1.189775318s: waiting for machine to come up
	I0829 20:25:46.992687   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:46.993155   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:46.993189   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:46.993142   68412 retry.go:31] will retry after 1.467244635s: waiting for machine to come up
	I0829 20:25:48.462770   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:48.463201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:48.463226   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:48.463173   68412 retry.go:31] will retry after 1.602764839s: waiting for machine to come up
	I0829 20:25:50.067082   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:50.067608   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:50.067638   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:50.067543   68412 retry.go:31] will retry after 1.562244323s: waiting for machine to come up
	I0829 20:25:51.632201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:51.632705   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:51.632731   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:51.632650   68412 retry.go:31] will retry after 1.747220365s: waiting for machine to come up
	I0829 20:25:53.382010   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:53.382463   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:53.382527   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:53.382454   68412 retry.go:31] will retry after 3.446054845s: waiting for machine to come up
	I0829 20:25:56.830511   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:56.830954   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:56.830988   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:56.830908   68412 retry.go:31] will retry after 4.53995219s: waiting for machine to come up
	I0829 20:26:02.603329   67607 start.go:364] duration metric: took 3m23.680319578s to acquireMachinesLock for "old-k8s-version-032002"
	I0829 20:26:02.603393   67607 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:02.603404   67607 fix.go:54] fixHost starting: 
	I0829 20:26:02.603837   67607 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:02.603884   67607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:02.621398   67607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35977
	I0829 20:26:02.621840   67607 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:02.622425   67607 main.go:141] libmachine: Using API Version  1
	I0829 20:26:02.622460   67607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:02.622810   67607 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:02.623040   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:02.623201   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetState
	I0829 20:26:02.624854   67607 fix.go:112] recreateIfNeeded on old-k8s-version-032002: state=Stopped err=<nil>
	I0829 20:26:02.624880   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	W0829 20:26:02.625020   67607 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:02.627161   67607 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-032002" ...
	I0829 20:26:02.628419   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .Start
	I0829 20:26:02.628578   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring networks are active...
	I0829 20:26:02.629339   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network default is active
	I0829 20:26:02.629732   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network mk-old-k8s-version-032002 is active
	I0829 20:26:02.630188   67607 main.go:141] libmachine: (old-k8s-version-032002) Getting domain xml...
	I0829 20:26:02.630924   67607 main.go:141] libmachine: (old-k8s-version-032002) Creating domain...
	I0829 20:26:01.375542   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.375928   66989 main.go:141] libmachine: (embed-certs-388383) Found IP for machine: 192.168.61.202
	I0829 20:26:01.375951   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has current primary IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.375974   66989 main.go:141] libmachine: (embed-certs-388383) Reserving static IP address...
	I0829 20:26:01.376364   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "embed-certs-388383", mac: "52:54:00:6c:5a:0c", ip: "192.168.61.202"} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.376398   66989 main.go:141] libmachine: (embed-certs-388383) DBG | skip adding static IP to network mk-embed-certs-388383 - found existing host DHCP lease matching {name: "embed-certs-388383", mac: "52:54:00:6c:5a:0c", ip: "192.168.61.202"}
	I0829 20:26:01.376411   66989 main.go:141] libmachine: (embed-certs-388383) Reserved static IP address: 192.168.61.202
	I0829 20:26:01.376428   66989 main.go:141] libmachine: (embed-certs-388383) Waiting for SSH to be available...
	I0829 20:26:01.376445   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Getting to WaitForSSH function...
	I0829 20:26:01.378600   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.378899   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.378937   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.379065   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Using SSH client type: external
	I0829 20:26:01.379088   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa (-rw-------)
	I0829 20:26:01.379118   66989 main.go:141] libmachine: (embed-certs-388383) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:01.379132   66989 main.go:141] libmachine: (embed-certs-388383) DBG | About to run SSH command:
	I0829 20:26:01.379141   66989 main.go:141] libmachine: (embed-certs-388383) DBG | exit 0
	I0829 20:26:01.498736   66989 main.go:141] libmachine: (embed-certs-388383) DBG | SSH cmd err, output: <nil>: 
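SSH readiness above is probed by running the trivial command "exit 0" through the external ssh binary, with StrictHostKeyChecking=no and /dev/null as the known-hosts file, since the guest's host key is new on every boot; the empty error in the "SSH cmd err" line means sshd answered and provisioning can proceed.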
	I0829 20:26:01.499103   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetConfigRaw
	I0829 20:26:01.499700   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:01.502022   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.502332   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.502362   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.502586   66989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/config.json ...
	I0829 20:26:01.502778   66989 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:01.502795   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:01.502980   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.505156   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.505452   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.505473   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.505590   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.505739   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.505902   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.506038   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.506183   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.506366   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.506376   66989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:01.602691   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:01.602721   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.603002   66989 buildroot.go:166] provisioning hostname "embed-certs-388383"
	I0829 20:26:01.603033   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.603232   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.605841   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.606170   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.606201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.606333   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.606505   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.606672   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.606786   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.606950   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.607121   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.607144   66989 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-388383 && echo "embed-certs-388383" | sudo tee /etc/hostname
	I0829 20:26:01.717669   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-388383
	
	I0829 20:26:01.717709   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.720400   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.720705   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.720733   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.720863   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.721097   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.721280   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.721446   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.721585   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.721811   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.721842   66989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-388383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-388383/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-388383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:01.827800   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
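The hostname script above is idempotent: if no /etc/hosts line already ends with the machine name, it rewrites the existing 127.0.1.1 entry in place, otherwise it appends one. A rough in-memory equivalent in Go (illustrative only; minikube runs the shell version over SSH, and the whitespace matching here is simplified to a single space):

    package main

    import (
        "fmt"
        "strings"
    )

    // pinHostname returns hosts content with a 127.0.1.1 entry for name,
    // rewriting an existing 127.0.1.1 line or appending a new one.
    func pinHostname(hosts, name string) string {
        lines := strings.Split(hosts, "\n")
        for _, l := range lines {
            if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
                return hosts // already pinned, nothing to do
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name
                return strings.Join(lines, "\n")
            }
        }
        return hosts + "\n127.0.1.1 " + name
    }

    func main() {
        fmt.Println(pinHostname("127.0.0.1 localhost\n127.0.1.1 minikube", "embed-certs-388383"))
    }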
	I0829 20:26:01.827835   66989 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:01.827869   66989 buildroot.go:174] setting up certificates
	I0829 20:26:01.827882   66989 provision.go:84] configureAuth start
	I0829 20:26:01.827894   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.828214   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:01.830619   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.831150   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.831184   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.831339   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.833642   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.833961   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.833987   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.834161   66989 provision.go:143] copyHostCerts
	I0829 20:26:01.834217   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:01.834241   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:01.834322   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:01.834445   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:01.834457   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:01.834491   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:01.834608   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:01.834621   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:01.834660   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:01.834726   66989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.embed-certs-388383 san=[127.0.0.1 192.168.61.202 embed-certs-388383 localhost minikube]
	I0829 20:26:01.992735   66989 provision.go:177] copyRemoteCerts
	I0829 20:26:01.992794   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:01.992819   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.995463   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.995835   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.995862   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.996006   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.996179   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.996333   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.996460   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.077017   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:02.105498   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 20:26:02.133974   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 20:26:02.161330   66989 provision.go:87] duration metric: took 333.435119ms to configureAuth
	I0829 20:26:02.161362   66989 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:02.161579   66989 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:02.161707   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.164373   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.164696   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.164724   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.164909   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.165111   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.165276   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.165402   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.165535   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:02.165697   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:02.165711   66989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:02.377994   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:02.378022   66989 machine.go:96] duration metric: took 875.231112ms to provisionDockerMachine
	I0829 20:26:02.378037   66989 start.go:293] postStartSetup for "embed-certs-388383" (driver="kvm2")
	I0829 20:26:02.378053   66989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:02.378078   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.378404   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:02.378432   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.380920   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.381329   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.381358   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.381564   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.381797   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.381975   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.382124   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.461053   66989 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:02.465391   66989 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:02.465417   66989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:02.465479   66989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:02.465550   66989 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:02.465635   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:02.474909   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:02.500025   66989 start.go:296] duration metric: took 121.973853ms for postStartSetup
	I0829 20:26:02.500064   66989 fix.go:56] duration metric: took 20.86974885s for fixHost
	I0829 20:26:02.500082   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.502976   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.503380   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.503411   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.503599   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.503808   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.503976   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.504126   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.504283   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:02.504459   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:02.504469   66989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:02.603161   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963162.568310162
	
	I0829 20:26:02.603181   66989 fix.go:216] guest clock: 1724963162.568310162
	I0829 20:26:02.603187   66989 fix.go:229] Guest: 2024-08-29 20:26:02.568310162 +0000 UTC Remote: 2024-08-29 20:26:02.500067292 +0000 UTC m=+288.185978445 (delta=68.24287ms)
	I0829 20:26:02.603210   66989 fix.go:200] guest clock delta is within tolerance: 68.24287ms
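The tolerance check is plain subtraction: guest clock 1724963162.568310162 s minus the host-observed 1724963162.500067292 s gives 0.068242870 s ≈ 68.24 ms, matching the logged delta; since that is within minikube's drift tolerance, the guest clock is left untouched.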
	I0829 20:26:02.603216   66989 start.go:83] releasing machines lock for "embed-certs-388383", held for 20.972921408s
	I0829 20:26:02.603248   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.603532   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:02.606426   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.606804   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.606834   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.607021   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607527   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607694   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607770   66989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:02.607809   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.607878   66989 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:02.607896   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.610239   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610264   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610657   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.610685   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610723   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.610742   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610844   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.611014   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.611014   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.611145   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.611208   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.611268   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.611341   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.611399   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.712435   66989 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:02.718614   66989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:02.865138   66989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:02.871510   66989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:02.871593   66989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:02.887316   66989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:02.887340   66989 start.go:495] detecting cgroup driver to use...
	I0829 20:26:02.887394   66989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:02.905024   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:02.918922   66989 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:02.918986   66989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:02.932660   66989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:02.946679   66989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:03.056273   66989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:03.216885   66989 docker.go:233] disabling docker service ...
	I0829 20:26:03.216959   66989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:03.231363   66989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:03.245609   66989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:03.368087   66989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:03.493947   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:03.508803   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:03.527542   66989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:26:03.527607   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.538301   66989 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:03.538370   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.549672   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.562203   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.573572   66989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:03.585031   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.596778   66989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.619405   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
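Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with lines along these lines (reconstructed from the commands, not captured from the guest):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]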
	I0829 20:26:03.630337   66989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:03.640492   66989 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:03.640568   66989 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:03.657931   66989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
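The status-255 sysctl failure above is expected on a fresh guest: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter kernel module is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding. A hedged Go sketch of that probe-then-load fallback (illustrative; the real code shells these commands out over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Probe the bridge-netfilter sysctl; it only exists after the
        // br_netfilter kernel module has been loaded.
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            fmt.Println("sysctl probe failed, loading br_netfilter:", err)
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                fmt.Println("modprobe failed:", err)
                return
            }
        }
        // Enable IPv4 forwarding, equivalent to the
        // `echo 1 > /proc/sys/net/ipv4/ip_forward` in the log.
        out, err := exec.Command("sudo", "sysctl", "-w", "net.ipv4.ip_forward=1").CombinedOutput()
        fmt.Println(string(out), err)
    }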
	I0829 20:26:03.673756   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:03.792856   66989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:03.880493   66989 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:03.880551   66989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:03.885793   66989 start.go:563] Will wait 60s for crictl version
	I0829 20:26:03.885850   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:26:03.889835   66989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:03.928633   66989 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:03.928702   66989 ssh_runner.go:195] Run: crio --version
	I0829 20:26:03.958861   66989 ssh_runner.go:195] Run: crio --version
	I0829 20:26:03.987724   66989 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:26:03.989009   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:03.991889   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:03.992308   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:03.992334   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:03.992567   66989 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:03.996945   66989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:04.009353   66989 kubeadm.go:883] updating cluster {Name:embed-certs-388383 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:04.009462   66989 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:26:04.009501   66989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:04.051583   66989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:26:04.051643   66989 ssh_runner.go:195] Run: which lz4
	I0829 20:26:04.055929   66989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:04.060214   66989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:04.060240   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
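The preload step is an existence check followed by a bulk copy: stat on /preloaded.tar.lz4 exits 1, so the cached 389,136,428-byte (~371 MiB) tarball is scp'd into the guest and extracted a little further down. The same check-then-copy pattern sketched locally in Go, with hypothetical paths:

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // ensureFile copies src to dst only when dst does not already exist,
    // mirroring the stat-then-scp sequence in the log above.
    func ensureFile(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            return nil // already present, skip the copy
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        fmt.Println(ensureFile("preloaded-images.tar.lz4", "/tmp/preloaded.tar.lz4"))
    }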
	I0829 20:26:03.867691   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting to get IP...
	I0829 20:26:03.868798   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:03.869246   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:03.869318   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:03.869235   68552 retry.go:31] will retry after 220.928648ms: waiting for machine to come up
	I0829 20:26:04.091675   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.092057   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.092084   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.092020   68552 retry.go:31] will retry after 352.781755ms: waiting for machine to come up
	I0829 20:26:04.446766   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.447277   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.447301   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.447224   68552 retry.go:31] will retry after 480.96031ms: waiting for machine to come up
	I0829 20:26:04.929561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.930149   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.930181   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.930051   68552 retry.go:31] will retry after 415.057247ms: waiting for machine to come up
	I0829 20:26:05.346757   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.347224   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.347258   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.347196   68552 retry.go:31] will retry after 609.958508ms: waiting for machine to come up
	I0829 20:26:05.959227   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.959774   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.959825   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.959702   68552 retry.go:31] will retry after 680.801337ms: waiting for machine to come up
	I0829 20:26:06.642811   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:06.643312   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:06.643343   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:06.643269   68552 retry.go:31] will retry after 995.561322ms: waiting for machine to come up
	I0829 20:26:07.640147   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:07.640617   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:07.640652   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:07.640588   68552 retry.go:31] will retry after 1.22436043s: waiting for machine to come up
	I0829 20:26:05.472272   66989 crio.go:462] duration metric: took 1.416373513s to copy over tarball
	I0829 20:26:05.472355   66989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:07.583560   66989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.111164398s)
	I0829 20:26:07.583595   66989 crio.go:469] duration metric: took 2.111297179s to extract the tarball
	I0829 20:26:07.583605   66989 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 20:26:07.622447   66989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:07.671704   66989 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:26:07.671732   66989 cache_images.go:84] Images are preloaded, skipping loading
	I0829 20:26:07.671742   66989 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.31.0 crio true true} ...
	I0829 20:26:07.671869   66989 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-388383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
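This rendered [Service] override is the systemd drop-in that lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (318 bytes); the empty ExecStart= line is the standard systemd idiom for clearing the packaged command before the minikube-specific one is set.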
	I0829 20:26:07.671958   66989 ssh_runner.go:195] Run: crio config
	I0829 20:26:07.717217   66989 cni.go:84] Creating CNI manager for ""
	I0829 20:26:07.717242   66989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:07.717263   66989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:07.717290   66989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-388383 NodeName:embed-certs-388383 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:26:07.717465   66989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-388383"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
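The three YAML documents above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct logged at kubeadm.go:181 and written to /var/tmp/minikube/kubeadm.yaml.new (2162 bytes) just below. A minimal sketch of such a render step using Go's text/template, with a deliberately tiny hypothetical template (minikube's real one carries far more fields):

    package main

    import (
        "os"
        "text/template"
    )

    type opts struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
    }

    // A toy slice of the InitConfiguration document; only for illustration.
    const kubeadmTmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.BindPort}}\n" +
        "nodeRegistration:\n" +
        "  name: \"{{.NodeName}}\"\n"

    func main() {
        t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
        // Render the config for the node seen in this log.
        t.Execute(os.Stdout, opts{"192.168.61.202", 8443, "embed-certs-388383"})
    }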
	I0829 20:26:07.717549   66989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:26:07.727174   66989 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:07.727258   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:07.736512   66989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0829 20:26:07.752727   66989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:07.772430   66989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0829 20:26:07.793343   66989 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:07.798214   66989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:07.811285   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:07.927025   66989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:07.943741   66989 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383 for IP: 192.168.61.202
	I0829 20:26:07.943765   66989 certs.go:194] generating shared ca certs ...
	I0829 20:26:07.943784   66989 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:07.943984   66989 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:07.944047   66989 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:07.944061   66989 certs.go:256] generating profile certs ...
	I0829 20:26:07.944177   66989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/client.key
	I0829 20:26:07.944254   66989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.key.03b29390
	I0829 20:26:07.944317   66989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.key
	I0829 20:26:07.944494   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:07.944538   66989 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:07.944551   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:07.944581   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:07.944605   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:07.944628   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:07.944670   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:07.945252   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:07.971277   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:08.012892   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:08.042038   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:08.067708   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0829 20:26:08.095930   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:26:08.127171   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:08.151287   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:26:08.175525   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:08.199076   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:08.222783   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:08.245783   66989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:08.261839   66989 ssh_runner.go:195] Run: openssl version
	I0829 20:26:08.267545   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:08.278347   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.284232   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.284283   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.292024   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:08.306831   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:08.320607   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.325027   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.325070   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.330808   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:08.341457   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:08.352323   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.356822   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.356891   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.362617   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
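	Note: the ls / "openssl x509 -hash -noout" / ln sequence above is how each CA gets registered in the node's OpenSSL trust store. A minimal hand-run equivalent (paths and the b5213941 hash are taken from the log, not from minikube source):
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints e.g. b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # OpenSSL resolves CAs by <subject-hash>.0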
	I0829 20:26:08.373755   66989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:08.378153   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:08.384225   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:08.390136   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:08.396002   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:08.401713   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:08.407437   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
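	Note: each "openssl x509 -noout -checkend 86400" call above exits non-zero if the certificate expires within 86400 seconds (24 hours); that is the test the restart path uses to decide the existing certs can be reused. A condensed hand check over the same files (cert list taken from the log):
	    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	             etcd/healthcheck-client etcd/peer front-proxy-client; do
	      sudo openssl x509 -noout -checkend 86400 \
	        -in "/var/lib/minikube/certs/$c.crt" || echo "$c expires within 24h"
	    done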
	I0829 20:26:08.413033   66989 kubeadm.go:392] StartCluster: {Name:embed-certs-388383 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:08.413119   66989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:08.413173   66989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:08.450685   66989 cri.go:89] found id: ""
	I0829 20:26:08.450757   66989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:08.460787   66989 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:08.460809   66989 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:08.460853   66989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:08.470179   66989 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:08.471673   66989 kubeconfig.go:125] found "embed-certs-388383" server: "https://192.168.61.202:8443"
	I0829 20:26:08.474839   66989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:08.483951   66989 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0829 20:26:08.483992   66989 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:08.484007   66989 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:08.484085   66989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:08.525947   66989 cri.go:89] found id: ""
	I0829 20:26:08.526013   66989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:08.541862   66989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:08.551179   66989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:08.551200   66989 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:08.551249   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:26:08.559897   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:08.559970   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:08.569317   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:26:08.577858   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:08.577905   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:08.587113   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:26:08.595645   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:08.595705   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:08.604803   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:26:08.613070   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:08.613125   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
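	Note: the grep/rm pairs above all apply one rule: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is treated as stale and removed before kubeadm regenerates it. Condensed into a sketch over the same four files (here every grep failed because the files do not exist yet, so the rm calls are no-ops):
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" \
	        "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	    done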
	I0829 20:26:08.622037   66989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:08.631330   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:08.742682   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:08.866518   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:08.866954   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:08.866985   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:08.866896   68552 retry.go:31] will retry after 1.707701085s: waiting for machine to come up
	I0829 20:26:10.576676   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:10.577094   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:10.577124   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:10.577047   68552 retry.go:31] will retry after 1.496799212s: waiting for machine to come up
	I0829 20:26:12.075964   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:12.076412   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:12.076451   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:12.076377   68552 retry.go:31] will retry after 2.246779697s: waiting for machine to come up
	I0829 20:26:09.809078   66989 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.066360218s)
	I0829 20:26:09.809118   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.027517   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.095959   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
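	Note: rather than a full "kubeadm init", the restart path replays individual init phases against the generated config. The order, as run above:
	    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
	      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	    # then: kubeconfig all, kubelet-start, control-plane all, etcd local
	    # ("addon all" follows later, once the API server reports healthy)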
	I0829 20:26:10.199656   66989 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:10.199745   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:10.700569   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:11.200798   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:11.700664   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.200052   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.700839   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.715319   66989 api_server.go:72] duration metric: took 2.515661322s to wait for apiserver process to appear ...
	I0829 20:26:12.715351   66989 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:26:12.715374   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.687527   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:15.687558   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:15.687572   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.716339   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:15.716365   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:15.716378   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.750700   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:15.750732   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:16.216255   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:16.224376   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:16.224401   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:16.715457   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:16.723983   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:16.724004   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:17.215562   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:17.219605   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
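	Note on the 403 -> 500 -> 200 progression above: the early 403s occur because anonymous access to /healthz is only authorized once the rbac/bootstrap-roles post-start hook has created the default system:public-info-viewer binding; the 500s then enumerate the remaining post-start hooks ([-] entries) until every check reports ok. The same verbose probe can be run by hand:
	    curl -sk "https://192.168.61.202:8443/healthz?verbose"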
	I0829 20:26:17.225473   66989 api_server.go:141] control plane version: v1.31.0
	I0829 20:26:17.225496   66989 api_server.go:131] duration metric: took 4.510137186s to wait for apiserver health ...
	I0829 20:26:17.225504   66989 cni.go:84] Creating CNI manager for ""
	I0829 20:26:17.225509   66989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:17.227379   66989 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:26:14.324452   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:14.324770   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:14.324808   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:14.324748   68552 retry.go:31] will retry after 3.172592587s: waiting for machine to come up
	I0829 20:26:17.500203   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:17.500540   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:17.500573   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:17.500485   68552 retry.go:31] will retry after 2.81386002s: waiting for machine to come up
	I0829 20:26:17.228505   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:26:17.238762   66989 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
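	Note: the 496-byte file written to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI config; its exact contents are not shown in the log. A minimal conflist of the same general shape (field values illustrative, not verbatim from minikube):
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF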
	I0829 20:26:17.264380   66989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:26:17.274981   66989 system_pods.go:59] 8 kube-system pods found
	I0829 20:26:17.275009   66989 system_pods.go:61] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:26:17.275016   66989 system_pods.go:61] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:26:17.275023   66989 system_pods.go:61] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:26:17.275028   66989 system_pods.go:61] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:26:17.275033   66989 system_pods.go:61] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:26:17.275038   66989 system_pods.go:61] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:26:17.275043   66989 system_pods.go:61] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:26:17.275048   66989 system_pods.go:61] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:26:17.275056   66989 system_pods.go:74] duration metric: took 10.656426ms to wait for pod list to return data ...
	I0829 20:26:17.275074   66989 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:26:17.279480   66989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:26:17.279504   66989 node_conditions.go:123] node cpu capacity is 2
	I0829 20:26:17.279519   66989 node_conditions.go:105] duration metric: took 4.439469ms to run NodePressure ...
	I0829 20:26:17.279537   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:17.561282   66989 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:26:17.565287   66989 kubeadm.go:739] kubelet initialised
	I0829 20:26:17.565307   66989 kubeadm.go:740] duration metric: took 4.002605ms waiting for restarted kubelet to initialise ...
	I0829 20:26:17.565314   66989 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:17.570104   66989 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.576425   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.576454   66989 pod_ready.go:82] duration metric: took 6.324083ms for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.576464   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.576474   66989 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.582501   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "etcd-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.582523   66989 pod_ready.go:82] duration metric: took 6.040325ms for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.582547   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "etcd-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.582556   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.588534   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.588554   66989 pod_ready.go:82] duration metric: took 5.988678ms for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.588562   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.588568   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.668334   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.668365   66989 pod_ready.go:82] duration metric: took 79.787211ms for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.668378   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.668386   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.068248   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-proxy-fcxs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.068286   66989 pod_ready.go:82] duration metric: took 399.880238ms for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.068299   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-proxy-fcxs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.068308   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.468096   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.468126   66989 pod_ready.go:82] duration metric: took 399.810823ms for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.468134   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.468141   66989 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.868444   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.868478   66989 pod_ready.go:82] duration metric: took 400.329102ms for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.868490   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.868499   66989 pod_ready.go:39] duration metric: took 1.303176044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:18.868519   66989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:26:18.880892   66989 ops.go:34] apiserver oom_adj: -16
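	Note: oom_adj ranges from -17 (never OOM-killed) to +15; the -16 read back here means the API server process is almost exempt from the kernel OOM killer, which is what this check asserts. Reproduced by hand with the same pgrep pattern as the log:
	    cat /proc/$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')/oom_adj    # expect -16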
	I0829 20:26:18.880916   66989 kubeadm.go:597] duration metric: took 10.42010114s to restartPrimaryControlPlane
	I0829 20:26:18.880925   66989 kubeadm.go:394] duration metric: took 10.467899141s to StartCluster
	I0829 20:26:18.880946   66989 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:18.881032   66989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:26:18.884130   66989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:18.884619   66989 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:26:18.884674   66989 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:26:18.884749   66989 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-388383"
	I0829 20:26:18.884765   66989 addons.go:69] Setting default-storageclass=true in profile "embed-certs-388383"
	I0829 20:26:18.884783   66989 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-388383"
	W0829 20:26:18.884792   66989 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:26:18.884804   66989 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-388383"
	I0829 20:26:18.884816   66989 addons.go:69] Setting metrics-server=true in profile "embed-certs-388383"
	I0829 20:26:18.884828   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.884856   66989 addons.go:234] Setting addon metrics-server=true in "embed-certs-388383"
	W0829 20:26:18.884877   66989 addons.go:243] addon metrics-server should already be in state true
	I0829 20:26:18.884884   66989 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:18.884912   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.885134   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885176   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.885216   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885249   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.885291   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885338   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.886484   66989 out.go:177] * Verifying Kubernetes components...
	I0829 20:26:18.887938   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:18.900910   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I0829 20:26:18.901377   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.901917   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.901938   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.902300   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.903062   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.903110   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.903810   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0829 20:26:18.903824   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0829 20:26:18.904282   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.904303   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.904673   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.904691   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.904829   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.904845   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.905017   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.905428   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.905462   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.905664   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.905860   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.909388   66989 addons.go:234] Setting addon default-storageclass=true in "embed-certs-388383"
	W0829 20:26:18.909408   66989 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:26:18.909437   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.909793   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.909839   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.921180   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
	I0829 20:26:18.921597   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.922074   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.922087   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.922470   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.922697   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.922725   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I0829 20:26:18.923052   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.923592   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.923610   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.923919   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.924057   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.924063   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45681
	I0829 20:26:18.924461   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.924519   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.924984   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.925002   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.925632   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.925682   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.926152   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.926194   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.926494   66989 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:18.927266   66989 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:26:18.928130   66989 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:26:18.928141   66989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:26:18.928155   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.928843   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:26:18.928863   66989 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:26:18.928888   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.931716   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932273   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.932296   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932424   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932456   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.932644   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.932810   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.932869   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.932891   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.933050   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.933100   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:18.933271   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.933426   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.933598   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:18.942718   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38109
	I0829 20:26:18.943150   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.943532   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.943553   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.943908   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.944027   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.945304   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.945498   66989 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:26:18.945510   66989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:26:18.945522   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.948108   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.948469   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.948494   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.948730   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.948889   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.949085   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.949222   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:19.111953   66989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:19.131195   66989 node_ready.go:35] waiting up to 6m0s for node "embed-certs-388383" to be "Ready" ...
	I0829 20:26:19.246857   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:26:19.269511   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:26:19.269670   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:26:19.269691   66989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:26:19.346200   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:26:19.346234   66989 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:26:19.374530   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:26:19.374566   66989 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:26:19.418474   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:26:20.495022   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.225476769s)
	I0829 20:26:20.495077   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495090   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495185   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.248286753s)
	I0829 20:26:20.495232   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495249   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495572   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.495600   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.495611   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495619   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495634   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.495663   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.495664   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.495678   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495688   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.496014   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.496029   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.496061   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.496097   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.496111   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.504149   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.504182   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.504419   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.504436   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.519341   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.100829284s)
	I0829 20:26:20.519396   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.519422   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.519670   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.519716   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.519734   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.519746   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.519755   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.520040   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.520055   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.520072   66989 addons.go:475] Verifying addon metrics-server=true in "embed-certs-388383"
	I0829 20:26:20.523102   66989 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
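
	The enabled addons can be spot-checked by hand against the same profile; an illustrative pair of commands (the kubectl context name matches the minikube profile name):

	    kubectl --context embed-certs-388383 -n kube-system get deploy metrics-server
	    kubectl --context embed-certs-388383 get apiservice v1beta1.metrics.k8s.io
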
	I0829 20:26:21.515365   68084 start.go:364] duration metric: took 2m4.795762476s to acquireMachinesLock for "default-k8s-diff-port-145096"
	I0829 20:26:21.515428   68084 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:21.515439   68084 fix.go:54] fixHost starting: 
	I0829 20:26:21.515864   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:21.515904   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:21.535441   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0829 20:26:21.535886   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:21.536390   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:26:21.536414   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:21.536819   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:21.537035   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:21.537203   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:26:21.538735   68084 fix.go:112] recreateIfNeeded on default-k8s-diff-port-145096: state=Stopped err=<nil>
	I0829 20:26:21.538762   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	W0829 20:26:21.538901   68084 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:21.540852   68084 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-145096" ...
	I0829 20:26:21.542258   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Start
	I0829 20:26:21.542429   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring networks are active...
	I0829 20:26:21.543181   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring network default is active
	I0829 20:26:21.543522   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring network mk-default-k8s-diff-port-145096 is active
	I0829 20:26:21.543872   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Getting domain xml...
	I0829 20:26:21.544627   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Creating domain...
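
	The kvm2 driver manages each machine as a libvirt domain, so the restart sequence above (activating the networks, fetching the domain XML, then creating the domain) can be observed from the host with virsh; an illustrative check, assuming the qemu:///system connection the profile config uses:

	    virsh --connect qemu:///system net-list --all
	    virsh --connect qemu:///system dominfo default-k8s-diff-port-145096
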
	I0829 20:26:20.317138   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317672   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has current primary IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317700   67607 main.go:141] libmachine: (old-k8s-version-032002) Found IP for machine: 192.168.39.116
	I0829 20:26:20.317716   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserving static IP address...
	I0829 20:26:20.318143   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.318169   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserved static IP address: 192.168.39.116
	I0829 20:26:20.318189   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | skip adding static IP to network mk-old-k8s-version-032002 - found existing host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"}
	I0829 20:26:20.318208   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Getting to WaitForSSH function...
	I0829 20:26:20.318217   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting for SSH to be available...
	I0829 20:26:20.320598   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.320961   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.320989   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.321082   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH client type: external
	I0829 20:26:20.321121   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa (-rw-------)
	I0829 20:26:20.321156   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:20.321171   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | About to run SSH command:
	I0829 20:26:20.321185   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | exit 0
	I0829 20:26:20.446805   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | SSH cmd err, output: <nil>: 
	I0829 20:26:20.447204   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetConfigRaw
	I0829 20:26:20.447944   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.450726   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.451160   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451464   67607 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/config.json ...
	I0829 20:26:20.451670   67607 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:20.451690   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:20.451886   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.454120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454496   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.454566   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454648   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.454808   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.454975   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.455123   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.455282   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.455520   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.455533   67607 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:20.555074   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:20.555100   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555331   67607 buildroot.go:166] provisioning hostname "old-k8s-version-032002"
	I0829 20:26:20.555353   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555540   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.558576   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559058   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.559086   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559273   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.559490   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559661   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559834   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.560026   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.560189   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.560201   67607 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-032002 && echo "old-k8s-version-032002" | sudo tee /etc/hostname
	I0829 20:26:20.675352   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-032002
	
	I0829 20:26:20.675400   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.678472   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.678908   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.678944   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.679139   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.679341   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679533   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679710   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.679884   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.680090   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.680108   67607 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-032002' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-032002/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-032002' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:20.789673   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
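
	The script above pins the hostname to 127.0.1.1 only when no matching /etc/hosts entry exists, which keeps repeated provisioning runs idempotent. A reduced sketch of the same guard (placeholder hostname; the sed branch that rewrites an existing 127.0.1.1 line is omitted):

	    H=old-k8s-version-032002
	    grep -q "$H" /etc/hosts || echo "127.0.1.1 $H" | sudo tee -a /etc/hosts
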
	I0829 20:26:20.789713   67607 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:20.789744   67607 buildroot.go:174] setting up certificates
	I0829 20:26:20.789753   67607 provision.go:84] configureAuth start
	I0829 20:26:20.789761   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.790067   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.792822   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793152   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.793173   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793338   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.795624   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.795948   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.795974   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.796080   67607 provision.go:143] copyHostCerts
	I0829 20:26:20.796148   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:20.796168   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:20.796236   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:20.796344   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:20.796355   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:20.796387   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:20.796467   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:20.796476   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:20.796503   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:20.796573   67607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-032002 san=[127.0.0.1 192.168.39.116 localhost minikube old-k8s-version-032002]
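
	The server certificate is generated with the SANs listed above; the result can be inspected with openssl (path taken from the log line):

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'
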
	I0829 20:26:20.906382   67607 provision.go:177] copyRemoteCerts
	I0829 20:26:20.906436   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:20.906466   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.909180   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909488   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.909519   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909666   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.909831   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.909963   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.910062   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:20.989017   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:21.018571   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 20:26:21.043015   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:21.067288   67607 provision.go:87] duration metric: took 277.522292ms to configureAuth
	I0829 20:26:21.067322   67607 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:21.067527   67607 config.go:182] Loaded profile config "old-k8s-version-032002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 20:26:21.067607   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.070264   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070642   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.070679   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070881   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.071088   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071288   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071465   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.071661   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.071886   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.071923   67607 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:21.290979   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:21.291003   67607 machine.go:96] duration metric: took 839.319831ms to provisionDockerMachine
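
	The container-runtime step writes the insecure-registry flag into /etc/sysconfig/crio.minikube and restarts CRI-O; on the guest the result can be confirmed with:

	    cat /etc/sysconfig/crio.minikube
	    systemctl is-active crio
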
	I0829 20:26:21.291014   67607 start.go:293] postStartSetup for "old-k8s-version-032002" (driver="kvm2")
	I0829 20:26:21.291026   67607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:21.291046   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.291342   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:21.291366   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.293946   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294245   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.294273   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294464   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.294686   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.294840   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.294964   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.373592   67607 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:21.377797   67607 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:21.377826   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:21.377892   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:21.377966   67607 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:21.378054   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:21.387886   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:21.413456   67607 start.go:296] duration metric: took 122.429334ms for postStartSetup
	I0829 20:26:21.413497   67607 fix.go:56] duration metric: took 18.810093949s for fixHost
	I0829 20:26:21.413522   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.416095   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416391   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.416418   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416594   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.416803   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.416970   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.417115   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.417272   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.417474   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.417489   67607 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:21.515167   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963181.486447470
	
	I0829 20:26:21.515190   67607 fix.go:216] guest clock: 1724963181.486447470
	I0829 20:26:21.515200   67607 fix.go:229] Guest: 2024-08-29 20:26:21.48644747 +0000 UTC Remote: 2024-08-29 20:26:21.413502498 +0000 UTC m=+222.629982255 (delta=72.944972ms)
	I0829 20:26:21.515225   67607 fix.go:200] guest clock delta is within tolerance: 72.944972ms
	I0829 20:26:21.515232   67607 start.go:83] releasing machines lock for "old-k8s-version-032002", held for 18.911866017s
	I0829 20:26:21.515278   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.515596   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:21.518247   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518682   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.518710   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518835   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519589   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519680   67607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:21.519736   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.519843   67607 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:21.519869   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.522261   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522614   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.522643   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522763   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.522919   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523044   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.523071   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523073   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.523241   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.523240   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.523413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523560   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523712   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.599524   67607 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:21.629122   67607 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:21.778437   67607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:21.784642   67607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:21.784714   67607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:21.802019   67607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:21.802043   67607 start.go:495] detecting cgroup driver to use...
	I0829 20:26:21.802100   67607 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:21.817407   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:21.831514   67607 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:21.831578   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:21.845224   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:21.858522   67607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:21.972769   67607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:22.115154   67607 docker.go:233] disabling docker service ...
	I0829 20:26:22.115240   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:22.130015   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:22.143186   67607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:22.294113   67607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:22.432373   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
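
	Masking docker.service (rather than only disabling it) prevents socket- or dependency-activation from bringing Docker back while CRI-O owns the node; after the steps above the unit state reads:

	    systemctl is-enabled docker.service   # prints "masked"
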
	I0829 20:26:22.446427   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:22.465151   67607 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 20:26:22.465218   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.476104   67607 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:22.476177   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.486627   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.497782   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.509869   67607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:22.521347   67607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:22.531406   67607 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:22.531455   67607 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:22.544949   67607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
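
	The sysctl probe fails until br_netfilter is loaded, which is why the modprobe follows; note the change above does not persist across reboots. A conventional way to persist it (file names are illustrative, not taken from the log):

	    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
	    echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
	    sudo sysctl --system
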
	I0829 20:26:22.554918   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:22.687909   67607 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:22.808522   67607 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:22.808595   67607 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:22.814348   67607 start.go:563] Will wait 60s for crictl version
	I0829 20:26:22.814411   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:22.818348   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:22.863797   67607 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:22.863883   67607 ssh_runner.go:195] Run: crio --version
	I0829 20:26:22.893173   67607 ssh_runner.go:195] Run: crio --version
	I0829 20:26:22.923146   67607 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 20:26:22.924299   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:22.927222   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927564   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:22.927589   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927772   67607 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:22.932100   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
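
	The rewrite goes through /tmp/h.$$ and a final sudo cp because shell redirection is performed with the caller's privileges, not sudo's, and /etc/hosts is only root-writable. The same pattern for an arbitrary entry:

	    { grep -v 'host.minikube.internal' /etc/hosts; echo '192.168.39.1 host.minikube.internal'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
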
	I0829 20:26:22.945139   67607 kubeadm.go:883] updating cluster {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:22.945274   67607 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 20:26:22.945334   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:22.990592   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:22.990668   67607 ssh_runner.go:195] Run: which lz4
	I0829 20:26:22.995104   67607 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:22.999667   67607 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:22.999703   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0829 20:26:20.524280   66989 addons.go:510] duration metric: took 1.639608208s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0829 20:26:21.135090   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:23.136839   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:22.825998   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting to get IP...
	I0829 20:26:22.827278   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:22.827766   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:22.827883   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:22.827750   68757 retry.go:31] will retry after 212.207753ms: waiting for machine to come up
	I0829 20:26:23.041113   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.041553   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.041588   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.041508   68757 retry.go:31] will retry after 291.9464ms: waiting for machine to come up
	I0829 20:26:23.335081   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.336072   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.336121   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.336041   68757 retry.go:31] will retry after 478.578755ms: waiting for machine to come up
	I0829 20:26:23.816669   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.817178   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.817233   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.817087   68757 retry.go:31] will retry after 501.093836ms: waiting for machine to come up
	I0829 20:26:24.319836   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.320392   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.320418   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:24.320343   68757 retry.go:31] will retry after 524.430407ms: waiting for machine to come up
	I0829 20:26:24.846908   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.847388   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.847418   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:24.847361   68757 retry.go:31] will retry after 701.573237ms: waiting for machine to come up
	I0829 20:26:25.550328   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:25.550786   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:25.550811   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:25.550727   68757 retry.go:31] will retry after 916.084079ms: waiting for machine to come up
	I0829 20:26:26.468529   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:26.468981   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:26.469012   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:26.468921   68757 retry.go:31] will retry after 1.216322833s: waiting for machine to come up
	I0829 20:26:24.727216   67607 crio.go:462] duration metric: took 1.732148589s to copy over tarball
	I0829 20:26:24.727294   67607 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:27.715640   67607 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.988318238s)
	I0829 20:26:27.715664   67607 crio.go:469] duration metric: took 2.988419957s to extract the tarball
	I0829 20:26:27.715672   67607 ssh_runner.go:146] rm: /preloaded.tar.lz4
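
	The preload is an lz4-compressed tarball unpacked into /var with extended attributes preserved, so file capabilities stored in the security.capability xattr survive extraction. The tar invocation above ("-I lz4") is equivalent to piping through lz4 explicitly:

	    lz4 -dc /preloaded.tar.lz4 | sudo tar --xattrs --xattrs-include security.capability -C /var -x
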
	I0829 20:26:27.764192   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:27.797388   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:27.797422   67607 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:26:27.797501   67607 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.797536   67607 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.797549   67607 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.797557   67607 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 20:26:27.797511   67607 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:27.797629   67607 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.797637   67607 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.797519   67607 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799128   67607 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799208   67607 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.799251   67607 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 20:26:27.799361   67607 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.799386   67607 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.799463   67607 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.799697   67607 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.799830   67607 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:27.978022   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.978296   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.981616   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.998987   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.001078   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.004185   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.004672   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 20:26:28.103885   67607 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 20:26:28.103953   67607 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.104013   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.122203   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:28.129983   67607 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 20:26:28.130028   67607 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.130076   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.165427   67607 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 20:26:28.165470   67607 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.165521   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.199971   67607 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 20:26:28.199990   67607 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 20:26:28.200015   67607 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.200021   67607 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200105   67607 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 20:26:28.200155   67607 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.200199   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200204   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200113   67607 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 20:26:28.200325   67607 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 20:26:28.200356   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.329091   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.329139   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.329187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.329260   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.329362   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.484805   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.484857   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.484888   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.484943   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.484963   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.485009   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.487351   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.615121   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.615187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.645371   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.645433   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.645524   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.645573   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.645638   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 20:26:28.729141   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 20:26:28.762530   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 20:26:28.762592   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 20:26:28.782117   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 20:26:28.782155   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 20:26:28.782195   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 20:26:28.782229   67607 cache_images.go:92] duration metric: took 984.791099ms to LoadCachedImages
	W0829 20:26:28.782293   67607 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0829 20:26:28.782310   67607 kubeadm.go:934] updating node { 192.168.39.116 8443 v1.20.0 crio true true} ...
	I0829 20:26:28.782452   67607 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-032002 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
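For reference, the ExecStart line above is installed as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp at 20:26:28.853276 below). A minimal sketch for inspecting the effective unit on the node, assuming shell access to the VM (for example via `minikube ssh -p old-k8s-version-032002`):

	# show kubelet.service together with all drop-ins, including 10-kubeadm.conf
	systemctl cat kubelet
	# show only the resolved ExecStart value
	systemctl show kubelet -p ExecStart --no-pager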
	I0829 20:26:28.782518   67607 ssh_runner.go:195] Run: crio config
	I0829 20:26:25.635616   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:26.635463   66989 node_ready.go:49] node "embed-certs-388383" has status "Ready":"True"
	I0829 20:26:26.635488   66989 node_ready.go:38] duration metric: took 7.504259002s for node "embed-certs-388383" to be "Ready" ...
	I0829 20:26:26.635497   66989 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:26.641316   66989 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:26.649602   66989 pod_ready.go:93] pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:26.649634   66989 pod_ready.go:82] duration metric: took 8.284428ms for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:26.649656   66989 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:28.658281   66989 pod_ready.go:103] pod "etcd-embed-certs-388383" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:27.686642   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:27.687071   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:27.687097   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:27.687030   68757 retry.go:31] will retry after 1.410599528s: waiting for machine to come up
	I0829 20:26:29.099622   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:29.100175   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:29.100207   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:29.100083   68757 retry.go:31] will retry after 1.929618787s: waiting for machine to come up
	I0829 20:26:31.031864   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:31.032434   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:31.032467   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:31.032367   68757 retry.go:31] will retry after 1.926271655s: waiting for machine to come up
	I0829 20:26:28.832785   67607 cni.go:84] Creating CNI manager for ""
	I0829 20:26:28.832807   67607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:28.832824   67607 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:28.832843   67607 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-032002 NodeName:old-k8s-version-032002 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 20:26:28.832982   67607 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-032002"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:28.833059   67607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 20:26:28.843483   67607 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:28.843566   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:28.853276   67607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 20:26:28.870579   67607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:28.888053   67607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
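The scp above writes the rendered kubeadm config (printed at kubeadm.go:187) to /var/tmp/minikube/kubeadm.yaml.new; a later step diffs it against the active copy to decide whether the cluster needs reconfiguration (see 20:26:29.764872). The same check by hand, a sketch assuming the paths in this log:

	# exit status 0 means the rendered config matches the one in use
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new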
	I0829 20:26:28.905988   67607 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:28.910048   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
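The bash one-liner above pins control-plane.minikube.internal to the node IP idempotently: it filters any existing entry out of /etc/hosts, appends the current mapping, and copies the result back into place. An equivalent, more readable sketch (not literally what minikube runs):

	# drop any stale control-plane entry, then append the current IP
	sudo sed -i '/\tcontrol-plane\.minikube\.internal$/d' /etc/hosts
	printf '192.168.39.116\tcontrol-plane.minikube.internal\n' | sudo tee -a /etc/hosts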
	I0829 20:26:28.924996   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:29.075015   67607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:29.095381   67607 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002 for IP: 192.168.39.116
	I0829 20:26:29.095411   67607 certs.go:194] generating shared ca certs ...
	I0829 20:26:29.095430   67607 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.095605   67607 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:29.095686   67607 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:29.095706   67607 certs.go:256] generating profile certs ...
	I0829 20:26:29.095847   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.key
	I0829 20:26:29.095928   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key.a1a2aebb
	I0829 20:26:29.095984   67607 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key
	I0829 20:26:29.096135   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:29.096184   67607 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:29.096198   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:29.096227   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:29.096259   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:29.096299   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:29.096378   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:29.097276   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:29.144259   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:29.171420   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:29.198554   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:29.230750   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 20:26:29.269978   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:26:29.299839   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:29.333742   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:26:29.358352   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:29.382648   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:29.406773   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:29.434106   67607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:29.451913   67607 ssh_runner.go:195] Run: openssl version
	I0829 20:26:29.457722   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:29.469147   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474048   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474094   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.480082   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:29.491083   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:29.501994   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508594   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508643   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.516331   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:29.531067   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:29.543998   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548781   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548845   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.555052   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
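The test/ln pairs above maintain OpenSSL-style hash links: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 for minikubeCA.pem, for instance), and OpenSSL resolves CAs in /etc/ssl/certs through <hash>.0 symlinks. One such link, sketched with the same paths:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"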
	I0829 20:26:29.567902   67607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:29.572879   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:29.579506   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:29.585887   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:29.592262   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:29.598566   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:29.604672   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
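The `-checkend 86400` invocations above are expiry probes: openssl exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and non-zero otherwise, which is what lets the restart path skip regenerating healthy certs. Standalone:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expiring within 24h (or expired)"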
	I0829 20:26:29.610830   67607 kubeadm.go:392] StartCluster: {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:29.612915   67607 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:29.613015   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.655224   67607 cri.go:89] found id: ""
	I0829 20:26:29.655314   67607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:29.666216   67607 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:29.666241   67607 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:29.666292   67607 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:29.676908   67607 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:29.678276   67607 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-032002" does not appear in /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:26:29.679313   67607 kubeconfig.go:62] /home/jenkins/minikube-integration/19530-11185/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-032002" cluster setting kubeconfig missing "old-k8s-version-032002" context setting]
	I0829 20:26:29.680756   67607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.764872   67607 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:29.776873   67607 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.116
	I0829 20:26:29.776914   67607 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:29.776926   67607 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:29.776987   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.819268   67607 cri.go:89] found id: ""
	I0829 20:26:29.819347   67607 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:29.840386   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:29.851624   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:29.851650   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:29.851710   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:26:29.861439   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:29.861504   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:29.871594   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:26:29.881126   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:29.881199   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:29.890984   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.900838   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:29.900913   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.910677   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:26:29.920008   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:29.920073   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:26:29.929631   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:29.939864   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.096029   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.816696   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.043310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.139291   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
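restartPrimaryControlPlane rebuilds the control plane piecewise with individual `kubeadm init phase` commands instead of a full `kubeadm init`; condensed from the five runs above (each is executed with the v1.20.0 binaries prepended to PATH):

	kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml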
	I0829 20:26:31.248095   67607 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:31.248190   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:31.749101   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.248718   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.748783   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.248254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.748557   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:30.180025   66989 pod_ready.go:93] pod "etcd-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:30.180056   66989 pod_ready.go:82] duration metric: took 3.530390258s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:30.180069   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.187272   66989 pod_ready.go:93] pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.187300   66989 pod_ready.go:82] duration metric: took 2.007222016s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.187313   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.192038   66989 pod_ready.go:93] pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.192062   66989 pod_ready.go:82] duration metric: took 4.740656ms for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.192075   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.196712   66989 pod_ready.go:93] pod "kube-proxy-fcxs4" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.196736   66989 pod_ready.go:82] duration metric: took 4.653538ms for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.196748   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.200491   66989 pod_ready.go:93] pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.200517   66989 pod_ready.go:82] duration metric: took 3.758002ms for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.200528   66989 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:34.207857   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:32.960872   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:32.961256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:32.961284   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:32.961208   68757 retry.go:31] will retry after 2.304628323s: waiting for machine to come up
	I0829 20:26:35.267593   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:35.268009   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:35.268041   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:35.267970   68757 retry.go:31] will retry after 3.753063387s: waiting for machine to come up
	I0829 20:26:34.249231   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:34.748279   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.249171   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.748943   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.249181   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.748307   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.248484   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.748261   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.248332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.748423   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
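The repeated pgrep above is the apiserver wait loop, polling about twice per second. Flag meanings: -f matches against the full command line, -x requires the pattern to match that command line exactly, and -n reports only the newest matching process. Standalone:

	# prints the newest kube-apiserver PID; exits non-zero while none is running
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'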
	I0829 20:26:36.705814   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:38.708205   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:40.175557   66841 start.go:364] duration metric: took 53.54411059s to acquireMachinesLock for "no-preload-397724"
	I0829 20:26:40.175617   66841 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:40.175626   66841 fix.go:54] fixHost starting: 
	I0829 20:26:40.176060   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:40.176098   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:40.193828   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45897
	I0829 20:26:40.194231   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:40.194840   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:26:40.194867   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:40.195175   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:40.195364   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:40.195528   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:26:40.197109   66841 fix.go:112] recreateIfNeeded on no-preload-397724: state=Stopped err=<nil>
	I0829 20:26:40.197128   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	W0829 20:26:40.197278   66841 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:40.199263   66841 out.go:177] * Restarting existing kvm2 VM for "no-preload-397724" ...
	I0829 20:26:39.023902   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.024374   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Found IP for machine: 192.168.72.140
	I0829 20:26:39.024399   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has current primary IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.024413   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Reserving static IP address...
	I0829 20:26:39.024832   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Reserved static IP address: 192.168.72.140
	I0829 20:26:39.024856   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for SSH to be available...
	I0829 20:26:39.024894   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-145096", mac: "52:54:00:36:fe:e0", ip: "192.168.72.140"} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.024925   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | skip adding static IP to network mk-default-k8s-diff-port-145096 - found existing host DHCP lease matching {name: "default-k8s-diff-port-145096", mac: "52:54:00:36:fe:e0", ip: "192.168.72.140"}
	I0829 20:26:39.024947   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Getting to WaitForSSH function...
	I0829 20:26:39.026796   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.027100   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.027129   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.027265   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Using SSH client type: external
	I0829 20:26:39.027288   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa (-rw-------)
	I0829 20:26:39.027318   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:39.027333   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | About to run SSH command:
	I0829 20:26:39.027346   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | exit 0
	I0829 20:26:39.146830   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | SSH cmd err, output: <nil>: 
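The argument vector logged above corresponds roughly to the following invocation (a sketch, with the options reordered so they all precede the destination); the options disable host-key checking and connection multiplexing so the liveness probe is hermetic:

	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	  -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	  -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -o IdentitiesOnly=yes -p 22 \
	  -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa \
	  docker@192.168.72.140 'exit 0'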
	I0829 20:26:39.147242   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetConfigRaw
	I0829 20:26:39.147931   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:39.150652   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.151055   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.151084   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.151395   68084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/config.json ...
	I0829 20:26:39.151581   68084 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:39.151601   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:39.151814   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.153861   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.154189   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.154222   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.154351   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.154575   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.154746   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.154875   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.155010   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.155219   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.155235   68084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:39.258973   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:39.259006   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.259261   68084 buildroot.go:166] provisioning hostname "default-k8s-diff-port-145096"
	I0829 20:26:39.259292   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.259467   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.262018   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.262472   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.262501   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.262707   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.262886   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.263034   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.263185   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.263344   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.263530   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.263547   68084 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-145096 && echo "default-k8s-diff-port-145096" | sudo tee /etc/hostname
	I0829 20:26:39.379437   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-145096
	
	I0829 20:26:39.379479   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.382263   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.382682   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.382704   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.382913   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.383128   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.383280   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.383389   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.383520   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.383675   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.383692   68084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-145096' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-145096/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-145096' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:39.491756   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:39.491790   68084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:39.491855   68084 buildroot.go:174] setting up certificates
	I0829 20:26:39.491869   68084 provision.go:84] configureAuth start
	I0829 20:26:39.491883   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.492150   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:39.494882   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.495241   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.495269   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.495452   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.497708   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.497980   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.498013   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.498097   68084 provision.go:143] copyHostCerts
	I0829 20:26:39.498157   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:39.498179   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:39.498249   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:39.498347   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:39.498356   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:39.498377   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:39.498430   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:39.498437   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:39.498455   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:39.498507   68084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-145096 san=[127.0.0.1 192.168.72.140 default-k8s-diff-port-145096 localhost minikube]
	I0829 20:26:39.584313   68084 provision.go:177] copyRemoteCerts
	I0829 20:26:39.584372   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:39.584398   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.587054   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.587377   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.587400   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.587630   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.587823   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.587952   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.588087   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:39.664394   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:39.688852   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0829 20:26:39.714653   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:39.737662   68084 provision.go:87] duration metric: took 245.781265ms to configureAuth
	I0829 20:26:39.737687   68084 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:39.737844   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:39.737911   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.740391   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.740659   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.740688   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.740911   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.741107   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.741256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.741434   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.741612   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.741777   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.741794   68084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:39.954811   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:39.954846   68084 machine.go:96] duration metric: took 803.251945ms to provisionDockerMachine
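
The provisioning step above writes a sysconfig drop-in over SSH so that CRI-O treats the in-cluster service CIDR (10.96.0.0/12) as an insecure registry, then restarts the runtime. A minimal Go sketch of how such a command string can be assembled; buildCrioDropIn is a hypothetical helper for illustration, not minikube's actual code:

    package main

    import "fmt"

    // buildCrioDropIn assembles the same shell command shown in the log:
    // write /etc/sysconfig/crio.minikube and restart crio to pick it up.
    func buildCrioDropIn(opts string) string {
    	return "sudo mkdir -p /etc/sysconfig && printf %s \"\n" +
    		"CRIO_MINIKUBE_OPTIONS='" + opts + "'\n" +
    		"\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
    }

    func main() {
    	fmt.Println(buildCrioDropIn("--insecure-registry 10.96.0.0/12 "))
    }
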
	I0829 20:26:39.954862   68084 start.go:293] postStartSetup for "default-k8s-diff-port-145096" (driver="kvm2")
	I0829 20:26:39.954877   68084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:39.954898   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:39.955237   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:39.955267   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.958071   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.958575   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.958605   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.958772   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.958969   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.959126   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.959287   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.037153   68084 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:40.041150   68084 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:40.041176   68084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:40.041235   68084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:40.041325   68084 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:40.041415   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:40.050654   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:40.073789   68084 start.go:296] duration metric: took 118.907407ms for postStartSetup
	I0829 20:26:40.073826   68084 fix.go:56] duration metric: took 18.558388385s for fixHost
	I0829 20:26:40.073846   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.076397   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.076749   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.076789   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.076999   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.077200   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.077374   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.077480   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.077598   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:40.077754   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:40.077765   68084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:40.175410   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963200.123461148
	
	I0829 20:26:40.175431   68084 fix.go:216] guest clock: 1724963200.123461148
	I0829 20:26:40.175437   68084 fix.go:229] Guest: 2024-08-29 20:26:40.123461148 +0000 UTC Remote: 2024-08-29 20:26:40.073830105 +0000 UTC m=+143.488576066 (delta=49.631043ms)
	I0829 20:26:40.175456   68084 fix.go:200] guest clock delta is within tolerance: 49.631043ms
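
The guest-clock check above compares the guest's `date +%s.%N` output against the host-side timestamp and accepts the skew when it is small. A minimal sketch of that comparison, using the values from the log and an assumed one-second tolerance:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockWithinTolerance returns the absolute guest/host clock delta and
    // whether it falls inside the allowed skew.
    func clockWithinTolerance(guest, remote time.Time, tol time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tol
    }

    func main() {
    	guest := time.Unix(1724963200, 123461148) // guest `date +%s.%N` from the log
    	remote := guest.Add(-49631043 * time.Nanosecond)
    	delta, ok := clockWithinTolerance(guest, remote, time.Second)
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
    }
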
	I0829 20:26:40.175463   68084 start.go:83] releasing machines lock for "default-k8s-diff-port-145096", held for 18.660059953s
	I0829 20:26:40.175497   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.175781   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:40.179031   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.179457   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.179495   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.179695   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180444   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180528   68084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:40.180581   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.180706   68084 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:40.180729   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.183580   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.183819   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.183963   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.183989   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.184172   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.184174   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.184213   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.184345   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.184416   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.184511   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.184624   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.184626   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.184794   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.184896   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.259854   68084 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:40.290102   68084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:40.439112   68084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:40.449465   68084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:40.449546   68084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:40.471182   68084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:40.471209   68084 start.go:495] detecting cgroup driver to use...
	I0829 20:26:40.471276   68084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:40.492605   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:40.508500   68084 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:40.508561   68084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:40.527534   68084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:40.542013   68084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:40.663843   68084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:40.837228   68084 docker.go:233] disabling docker service ...
	I0829 20:26:40.837293   68084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:40.854285   68084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:40.870148   68084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:41.017156   68084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:41.150436   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:41.165239   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:41.184783   68084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:26:41.184847   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.197358   68084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:41.197417   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.211222   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.225297   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.237205   68084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:41.249875   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.261928   68084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.286145   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
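
The sequence of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and open unprivileged low ports via default_sysctls. The same line-oriented rewrites can be expressed with Go regexps; an illustrative sketch over an in-memory config, not minikube's implementation:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
    	// Mirror of: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	// Mirror of: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	fmt.Print(conf)
    }
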
	I0829 20:26:41.299119   68084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:41.313001   68084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:41.313062   68084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:41.335390   68084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
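
When the bridge-nf sysctl is missing, as in the status-255 failure above, the remedy is to load the br_netfilter module and then enable IPv4 forwarding. A rough Go sketch of that fallback; commands and paths mirror the log, and it needs root to have any real effect:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// If the bridge-nf sysctl cannot be read, the module is not loaded yet.
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		fmt.Println("sysctl missing, loading br_netfilter:", err)
    		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    			fmt.Fprintln(os.Stderr, "modprobe failed:", err)
    		}
    	}
    	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
    		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
    	}
    }
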
	I0829 20:26:41.348803   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:41.464387   68084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:41.564675   68084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:41.564746   68084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:41.569620   68084 start.go:563] Will wait 60s for crictl version
	I0829 20:26:41.569680   68084 ssh_runner.go:195] Run: which crictl
	I0829 20:26:41.573519   68084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:41.615105   68084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:41.615190   68084 ssh_runner.go:195] Run: crio --version
	I0829 20:26:41.644597   68084 ssh_runner.go:195] Run: crio --version
	I0829 20:26:41.678211   68084 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:26:39.248306   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:39.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.248975   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.748948   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.249144   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.749013   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.248363   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.748624   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.248833   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.748535   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.200748   66841 main.go:141] libmachine: (no-preload-397724) Calling .Start
	I0829 20:26:40.200955   66841 main.go:141] libmachine: (no-preload-397724) Ensuring networks are active...
	I0829 20:26:40.201793   66841 main.go:141] libmachine: (no-preload-397724) Ensuring network default is active
	I0829 20:26:40.202128   66841 main.go:141] libmachine: (no-preload-397724) Ensuring network mk-no-preload-397724 is active
	I0829 20:26:40.202729   66841 main.go:141] libmachine: (no-preload-397724) Getting domain xml...
	I0829 20:26:40.203538   66841 main.go:141] libmachine: (no-preload-397724) Creating domain...
	I0829 20:26:41.516739   66841 main.go:141] libmachine: (no-preload-397724) Waiting to get IP...
	I0829 20:26:41.517840   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:41.518273   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:41.518353   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:41.518262   68926 retry.go:31] will retry after 295.070588ms: waiting for machine to come up
	I0829 20:26:41.814782   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:41.815346   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:41.815369   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:41.815291   68926 retry.go:31] will retry after 239.48527ms: waiting for machine to come up
	I0829 20:26:42.056957   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:42.057459   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:42.057509   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:42.057436   68926 retry.go:31] will retry after 452.012872ms: waiting for machine to come up
	I0829 20:26:42.511068   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:42.511551   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:42.511590   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:42.511520   68926 retry.go:31] will retry after 552.227159ms: waiting for machine to come up
	I0829 20:26:43.066096   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:43.066642   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:43.066673   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:43.066605   68926 retry.go:31] will retry after 666.699647ms: waiting for machine to come up
	I0829 20:26:43.734695   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:43.735402   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:43.735430   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:43.735309   68926 retry.go:31] will retry after 770.756485ms: waiting for machine to come up
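
While the default-k8s-diff-port machine is being configured, a second goroutine (68926) polls libvirt for the no-preload VM's DHCP lease, sleeping a growing, jittered interval between attempts. A minimal sketch of that retry loop; lookupIP is a stand-in for the real libvirt query, and the address it returns is hypothetical:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for the libvirt DHCP-lease query; here it simply
    // pretends the machine comes up on the fifth attempt.
    func lookupIP(attempt int) (string, error) {
    	if attempt < 5 {
    		return "", errors.New("unable to find current IP address")
    	}
    	return "192.168.61.100", nil // hypothetical address
    }

    func main() {
    	delay := 250 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		ip, err := lookupIP(attempt)
    		if err == nil {
    			fmt.Println("machine up at", ip)
    			return
    		}
    		// jittered, growing backoff, like retry.go's "will retry after ..."
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    		delay += delay / 2
    	}
    }
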
	I0829 20:26:40.709553   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:42.712799   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:41.679441   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:41.682807   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:41.683205   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:41.683236   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:41.683489   68084 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:41.688766   68084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
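
The /etc/hosts rewrite above is idempotent: strip any stale host.minikube.internal entry, append the current mapping, and only then copy the result over the live file. The filtering step in Go, sketched over an in-memory hosts file:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHost drops any line ending in "\t<name>" and appends the fresh
    // mapping, matching the grep -v / echo pipeline in the log.
    func upsertHost(hosts, ip, name string) string {
    	var out []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			out = append(out, line)
    		}
    	}
    	out = append(out, ip+"\t"+name)
    	return strings.Join(out, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n10.0.0.1\thost.minikube.internal\n"
    	fmt.Print(upsertHost(hosts, "192.168.72.1", "host.minikube.internal"))
    }
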
	I0829 20:26:41.705764   68084 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:41.705918   68084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:26:41.705977   68084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:41.752884   68084 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:26:41.752955   68084 ssh_runner.go:195] Run: which lz4
	I0829 20:26:41.757600   68084 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:41.762158   68084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:41.762188   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 20:26:43.201094   68084 crio.go:462] duration metric: took 1.443534343s to copy over tarball
	I0829 20:26:43.201176   68084 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:45.400911   68084 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199703125s)
	I0829 20:26:45.400942   68084 crio.go:469] duration metric: took 2.199820098s to extract the tarball
	I0829 20:26:45.400948   68084 ssh_runner.go:146] rm: /preloaded.tar.lz4
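
The preload path above is: stat the tarball on the guest, scp it over if absent, unpack it into /var with lz4 while preserving security xattrs, then delete it. A rough Go equivalent of the guest-side steps; illustrative only, and it must run as root on the guest to have any real effect:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4"
    	// Equivalent of the stat existence check in the log.
    	if _, err := os.Stat(tarball); os.IsNotExist(err) {
    		fmt.Println("tarball missing; it would be scp'd over first")
    		return
    	}
    	// Mirror of: tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf ...
    	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Fprintf(os.Stderr, "extract failed: %v\n%s", err, out)
    		return
    	}
    	_ = os.Remove(tarball)
    	fmt.Println("preloaded images extracted")
    }
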
	I0829 20:26:45.439120   68084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:45.482658   68084 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:26:45.482679   68084 cache_images.go:84] Images are preloaded, skipping loading
	I0829 20:26:45.482687   68084 kubeadm.go:934] updating node { 192.168.72.140 8444 v1.31.0 crio true true} ...
	I0829 20:26:45.482801   68084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-145096 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:45.482873   68084 ssh_runner.go:195] Run: crio config
	I0829 20:26:45.532108   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:26:45.532132   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:45.532146   68084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:45.532169   68084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.140 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-145096 NodeName:default-k8s-diff-port-145096 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:26:45.532310   68084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.140
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-145096"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
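
The generated config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), and the kubelet's cgroupDriver must agree with the cgroup_manager set for CRI-O earlier in the log. A minimal sketch, assuming the gopkg.in/yaml.v3 module, that walks the documents and extracts that field:

    package main

    import (
    	"fmt"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	const cfg = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    `
    	dec := yaml.NewDecoder(strings.NewReader(cfg))
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err != nil {
    			break // io.EOF once all documents are consumed
    		}
    		if doc["kind"] == "KubeletConfiguration" {
    			fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"])
    		}
    	}
    }
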
	
	I0829 20:26:45.532367   68084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:26:45.542670   68084 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:45.542744   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:45.552622   68084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0829 20:26:45.569765   68084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:45.590972   68084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0829 20:26:45.611421   68084 ssh_runner.go:195] Run: grep 192.168.72.140	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:45.615585   68084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:45.627911   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:45.757504   68084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:45.776103   68084 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096 for IP: 192.168.72.140
	I0829 20:26:45.776128   68084 certs.go:194] generating shared ca certs ...
	I0829 20:26:45.776159   68084 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:45.776337   68084 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:45.776388   68084 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:45.776400   68084 certs.go:256] generating profile certs ...
	I0829 20:26:45.776511   68084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/client.key
	I0829 20:26:45.776600   68084 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.key.5a49b6b2
	I0829 20:26:45.776650   68084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.key
	I0829 20:26:45.776788   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:45.776827   68084 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:45.776840   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:45.776869   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:45.776940   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:45.776977   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:45.777035   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:45.777916   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:45.823419   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:45.868291   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:45.905178   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:45.934956   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 20:26:45.967570   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 20:26:45.994332   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:46.019268   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 20:26:46.044075   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:46.067906   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:46.092513   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:46.117686   68084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:46.137048   68084 ssh_runner.go:195] Run: openssl version
	I0829 20:26:46.143203   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:46.156407   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.161397   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.161461   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.167587   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:46.179034   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:46.190204   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.194953   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.195010   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.203121   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:46.218606   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:46.233586   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.240100   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.240155   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.247473   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:46.259417   68084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:46.264875   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:46.270914   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:46.277211   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:46.283138   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:46.289137   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:46.295044   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
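
Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. A rough Go equivalent using crypto/x509, with the path taken from the log; illustrative only:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkEnd fails if the certificate at path expires within the window,
    // mirroring `openssl x509 -noout -checkend <seconds>`.
    func checkEnd(path string, window time.Duration) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return err
    	}
    	if time.Now().Add(window).After(cert.NotAfter) {
    		return fmt.Errorf("%s expires within %v (NotAfter %v)", path, window, cert.NotAfter)
    	}
    	return nil
    }

    func main() {
    	if err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
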
	I0829 20:26:46.301027   68084 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:46.301120   68084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:46.301177   68084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:46.342913   68084 cri.go:89] found id: ""
	I0829 20:26:46.342988   68084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:46.354198   68084 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:46.354221   68084 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:46.354269   68084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:46.364173   68084 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:46.365182   68084 kubeconfig.go:125] found "default-k8s-diff-port-145096" server: "https://192.168.72.140:8444"
	I0829 20:26:46.367560   68084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:46.377550   68084 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.140
	I0829 20:26:46.377584   68084 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:46.377596   68084 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:46.377647   68084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:46.419141   68084 cri.go:89] found id: ""
	I0829 20:26:46.419215   68084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:46.438037   68084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:46.449021   68084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:46.449041   68084 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:46.449093   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 20:26:46.459396   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:46.459445   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:46.469964   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 20:26:46.479604   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:46.479655   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:46.492672   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 20:26:46.504656   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:46.504714   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:46.520206   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 20:26:46.532067   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:46.532137   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:26:46.541931   68084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:46.551973   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:44.248615   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:44.748528   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.748453   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.248927   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.748628   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.248556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.748332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.248373   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.749111   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:44.507808   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:44.508340   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:44.508375   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:44.508288   68926 retry.go:31] will retry after 754.614285ms: waiting for machine to come up
	I0829 20:26:45.264587   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:45.265039   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:45.265065   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:45.265003   68926 retry.go:31] will retry after 1.3758308s: waiting for machine to come up
	I0829 20:26:46.642139   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:46.642666   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:46.642690   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:46.642612   68926 retry.go:31] will retry after 1.255043608s: waiting for machine to come up
	I0829 20:26:47.899849   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:47.900330   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:47.900360   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:47.900291   68926 retry.go:31] will retry after 1.517293529s: waiting for machine to come up
	I0829 20:26:45.208067   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:48.177040   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:46.668397   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.497182   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.725573   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.785427   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.850878   68084 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:47.850972   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.351404   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.852023   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.351402   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.367249   68084 api_server.go:72] duration metric: took 1.516370766s to wait for apiserver process to appear ...
	I0829 20:26:49.367283   68084 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:26:49.367312   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.595653   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:51.595683   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:51.595698   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.609883   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:51.609989   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:51.867454   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.872297   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:51.872328   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:52.367462   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:52.375300   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:52.375333   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:52.867827   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:52.872814   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 200:
	ok
	I0829 20:26:52.881061   68084 api_server.go:141] control plane version: v1.31.0
	I0829 20:26:52.881092   68084 api_server.go:131] duration metric: took 3.513801329s to wait for apiserver health ...
	I0829 20:26:52.881102   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:26:52.881111   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:52.882993   68084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
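The healthz wait above is a plain poll-until-200 loop: 403 (RBAC roles not yet bootstrapped) and 500 (post-start hooks still running) both count as "not ready yet", and the response body is logged for diagnosis. A minimal Go sketch of that shape, assuming a client that skips TLS verification for the self-signed apiserver certificate (an illustration of the pattern, not minikube's actual api_server.go):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200, mirroring the loop
    // above: 403/500 responses are logged and treated as "not ready yet".
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: the apiserver cert is self-signed, so verification
            // is skipped here; real code would trust the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // body is the literal "ok" seen above
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.140:8444/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }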
	I0829 20:26:49.248291   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.748360   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.248427   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.749087   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.248381   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.748488   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.249250   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.748715   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.748915   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.419781   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:49.420286   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:49.420314   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:49.420244   68926 retry.go:31] will retry after 2.638145598s: waiting for machine to come up
	I0829 20:26:52.059935   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:52.060367   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:52.060411   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:52.060341   68926 retry.go:31] will retry after 2.696474949s: waiting for machine to come up
	I0829 20:26:50.207945   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:52.709407   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:52.884310   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:26:52.901134   68084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
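The 496-byte 1-k8s.conflist itself is not reproduced in the log. For reference, a generic bridge + host-local CNI config has the shape sketched below; the subnet and field values are illustrative stand-ins, not minikube's exact file contents:

    package cni

    import "os"

    // bridgeConflist is a generic bridge/host-local CNI chain of the same
    // shape as a 1-k8s.conflist (illustrative values, not the real file).
    const bridgeConflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    // writeBridgeConflist installs the config where the CRI looks for CNI
    // definitions, matching the scp destination in the log line above.
    func writeBridgeConflist() error {
        return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
    }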
	I0829 20:26:52.931390   68084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:26:52.952109   68084 system_pods.go:59] 8 kube-system pods found
	I0829 20:26:52.952154   68084 system_pods.go:61] "coredns-6f6b679f8f-5mkxp" [1d3c3a01-1fa6-4d1d-8750-deef4475ba96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:26:52.952166   68084 system_pods.go:61] "etcd-default-k8s-diff-port-145096" [03096d69-48af-4372-9fa0-5a45dcb9603c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:26:52.952177   68084 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-145096" [4be8793a-7934-4c89-a840-49e769673f5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:26:52.952188   68084 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-145096" [a3bec7f8-8163-4afa-af53-282ad755b788] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:26:52.952202   68084 system_pods.go:61] "kube-proxy-b4ffx" [d97e74d5-21d4-4c96-9d94-77767fc4e609] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:26:52.952210   68084 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-145096" [c416b52b-ebf4-4714-bed6-3d25bfaa373c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:26:52.952217   68084 system_pods.go:61] "metrics-server-6867b74b74-5kk6q" [e74224b1-8242-4f7f-b8d6-7d9d4839be53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:26:52.952224   68084 system_pods.go:61] "storage-provisioner" [4e97da7c-af4b-40b3-83fb-82b6c2a2adef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:26:52.952236   68084 system_pods.go:74] duration metric: took 20.81979ms to wait for pod list to return data ...
	I0829 20:26:52.952245   68084 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:26:52.961169   68084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:26:52.961202   68084 node_conditions.go:123] node cpu capacity is 2
	I0829 20:26:52.961214   68084 node_conditions.go:105] duration metric: took 8.963546ms to run NodePressure ...
	I0829 20:26:52.961234   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:53.425201   68084 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:26:53.429605   68084 kubeadm.go:739] kubelet initialised
	I0829 20:26:53.429625   68084 kubeadm.go:740] duration metric: took 4.401784ms waiting for restarted kubelet to initialise ...
	I0829 20:26:53.429632   68084 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:53.434501   68084 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:55.442290   68084 pod_ready.go:103] pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace has status "Ready":"False"
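The pod_ready lines keep reporting Ready:"False" until each pod's Ready condition flips to True. A stdlib-only sketch of that same check, shelling out to kubectl (assumed configured for this cluster) instead of using a Kubernetes client library:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // podStatus holds just the conditions list out of `kubectl get pod -o json`.
    type podStatus struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    // podReady reports the pod's Ready condition, the signal the pod_ready
    // lines above poll until it becomes "True".
    func podReady(namespace, name string) (bool, error) {
        out, err := exec.Command("kubectl", "-n", namespace,
            "get", "pod", name, "-o", "json").Output()
        if err != nil {
            return false, err
        }
        var p podStatus
        if err := json.Unmarshal(out, &p); err != nil {
            return false, err
        }
        for _, c := range p.Status.Conditions {
            if c.Type == "Ready" {
                return c.Status == "True", nil
            }
        }
        return false, nil
    }

    func main() {
        ready, err := podReady("kube-system", "coredns-6f6b679f8f-5mkxp")
        fmt.Println(ready, err)
    }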
	I0829 20:26:54.248998   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:54.748438   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.249066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.749293   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.248457   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.748509   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.248949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.748228   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.248717   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.748412   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:54.760175   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:54.760689   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:54.760736   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:54.760667   68926 retry.go:31] will retry after 3.651969786s: waiting for machine to come up
	I0829 20:26:58.415601   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.416019   66841 main.go:141] libmachine: (no-preload-397724) Found IP for machine: 192.168.50.214
	I0829 20:26:58.416045   66841 main.go:141] libmachine: (no-preload-397724) Reserving static IP address...
	I0829 20:26:58.416063   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has current primary IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.416507   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "no-preload-397724", mac: "52:54:00:e9:bf:ac", ip: "192.168.50.214"} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.416533   66841 main.go:141] libmachine: (no-preload-397724) DBG | skip adding static IP to network mk-no-preload-397724 - found existing host DHCP lease matching {name: "no-preload-397724", mac: "52:54:00:e9:bf:ac", ip: "192.168.50.214"}
	I0829 20:26:58.416543   66841 main.go:141] libmachine: (no-preload-397724) Reserved static IP address: 192.168.50.214
	I0829 20:26:58.416552   66841 main.go:141] libmachine: (no-preload-397724) Waiting for SSH to be available...
	I0829 20:26:58.416562   66841 main.go:141] libmachine: (no-preload-397724) DBG | Getting to WaitForSSH function...
	I0829 20:26:58.418849   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.419170   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.419199   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.419312   66841 main.go:141] libmachine: (no-preload-397724) DBG | Using SSH client type: external
	I0829 20:26:58.419351   66841 main.go:141] libmachine: (no-preload-397724) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa (-rw-------)
	I0829 20:26:58.419397   66841 main.go:141] libmachine: (no-preload-397724) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:58.419414   66841 main.go:141] libmachine: (no-preload-397724) DBG | About to run SSH command:
	I0829 20:26:58.419444   66841 main.go:141] libmachine: (no-preload-397724) DBG | exit 0
	I0829 20:26:58.542594   66841 main.go:141] libmachine: (no-preload-397724) DBG | SSH cmd err, output: <nil>: 
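WaitForSSH above shells out to the system ssh client with host-key checking disabled and runs "exit 0" until the command succeeds. A sketch of that readiness probe, using the address and key path from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs "exit 0" over ssh with the same safety-off options shown
    // in the log; success means sshd is up and the key is accepted.
    func sshReady(addr, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "docker@"+addr, "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        key := "/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa"
        for !sshReady("192.168.50.214", key) {
            time.Sleep(2 * time.Second) // the retry loop above serves the same purpose
        }
        fmt.Println("SSH available")
    }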
	I0829 20:26:58.542925   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetConfigRaw
	I0829 20:26:58.543582   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:58.546057   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.546384   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.546422   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.546691   66841 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/config.json ...
	I0829 20:26:58.546871   66841 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:58.546890   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:58.547113   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.549493   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.549816   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.549854   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.549972   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.550140   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.550260   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.550388   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.550581   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.550805   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.550822   66841 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:58.658784   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:58.658827   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.659063   66841 buildroot.go:166] provisioning hostname "no-preload-397724"
	I0829 20:26:58.659083   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.659220   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.661932   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.662294   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.662320   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.662485   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.662695   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.662880   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.663011   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.663168   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.663343   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.663356   66841 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-397724 && echo "no-preload-397724" | sudo tee /etc/hostname
	I0829 20:26:58.790591   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-397724
	
	I0829 20:26:58.790618   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.793294   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.793612   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.793639   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.793849   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.794035   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.794192   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.794289   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.794430   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.794656   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.794678   66841 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-397724' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-397724/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-397724' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:58.915925   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:58.915958   66841 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:58.915981   66841 buildroot.go:174] setting up certificates
	I0829 20:26:58.915991   66841 provision.go:84] configureAuth start
	I0829 20:26:58.916000   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.916279   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:58.919034   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.919385   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.919415   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.919523   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.921483   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.921805   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.921831   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.922015   66841 provision.go:143] copyHostCerts
	I0829 20:26:58.922062   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:58.922079   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:58.922135   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:58.922242   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:58.922256   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:58.922288   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:58.922365   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:58.922375   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:58.922400   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:58.922491   66841 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.no-preload-397724 san=[127.0.0.1 192.168.50.214 localhost minikube no-preload-397724]
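The server certificate generated above carries SANs for 127.0.0.1, the guest IP, and the host names, which x509 stores as separate IP and DNS entries. A condensed crypto/x509 sketch of that SAN handling, assuming the CA pair is already loaded (PEM encoding and file layout omitted; this is the shape of the operation, not provision.go itself):

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // serverCert signs a server certificate for the given SANs, splitting the
    // mixed san=[...] list above into IP and DNS entries as x509 requires.
    func serverCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, san := range sans {
            if ip := net.ParseIP(san); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, san)
            }
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        return der, key, err
    }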
	I0829 20:26:55.206462   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:57.207175   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.207454   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.264390   66841 provision.go:177] copyRemoteCerts
	I0829 20:26:59.264446   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:59.264467   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.267259   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.267603   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.267626   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.267794   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.268014   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.268190   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.268367   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.353746   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:59.378289   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0829 20:26:59.402330   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:59.425412   66841 provision.go:87] duration metric: took 509.408381ms to configureAuth
	I0829 20:26:59.425442   66841 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:59.425616   66841 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:59.425679   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.428148   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.428503   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.428545   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.428698   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.428906   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.429077   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.429227   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.429365   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:59.429511   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:59.429524   66841 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:59.666382   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:59.666408   66841 machine.go:96] duration metric: took 1.11952301s to provisionDockerMachine
	I0829 20:26:59.666422   66841 start.go:293] postStartSetup for "no-preload-397724" (driver="kvm2")
	I0829 20:26:59.666436   66841 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:59.666458   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.666833   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:59.666881   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.669407   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.669725   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.669751   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.669888   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.670073   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.670214   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.670316   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.753440   66841 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:59.758408   66841 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:59.758431   66841 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:59.758509   66841 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:59.758632   66841 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:59.758753   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:59.768355   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:59.792742   66841 start.go:296] duration metric: took 126.308201ms for postStartSetup
	I0829 20:26:59.792782   66841 fix.go:56] duration metric: took 19.617155195s for fixHost
	I0829 20:26:59.792806   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.795380   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.795744   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.795781   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.795917   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.796124   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.796237   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.796376   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.796488   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:59.796668   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:59.796680   66841 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:59.903539   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963219.868600963
	
	I0829 20:26:59.903564   66841 fix.go:216] guest clock: 1724963219.868600963
	I0829 20:26:59.903574   66841 fix.go:229] Guest: 2024-08-29 20:26:59.868600963 +0000 UTC Remote: 2024-08-29 20:26:59.792787483 +0000 UTC m=+355.719318860 (delta=75.81348ms)
	I0829 20:26:59.903623   66841 fix.go:200] guest clock delta is within tolerance: 75.81348ms
	I0829 20:26:59.903632   66841 start.go:83] releasing machines lock for "no-preload-397724", held for 19.728042303s
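The fix.go lines above read the guest clock with `date +%s.%N`, compare it against the host, and skip resyncing when the delta is inside tolerance (75.81348ms here). A sketch of that comparison, relying on %N being zero-padded to nine digits so the fraction parses directly as nanoseconds:

    package fix

    import (
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output from the guest and returns the
    // absolute skew against the host clock plus whether it is within tol.
    func clockDelta(guestOut string, tol time.Duration) (time.Duration, bool, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, false, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return 0, false, err
            }
        }
        delta := time.Since(time.Unix(sec, nsec))
        if delta < 0 {
            delta = -delta // skew can go either direction
        }
        return delta, delta <= tol, nil
    }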
	I0829 20:26:59.903676   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.903967   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:59.906798   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.907183   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.907212   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.907378   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.907804   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.907970   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.908038   66841 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:59.908072   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.908324   66841 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:59.908346   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.910843   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911025   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911187   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.911215   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911325   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.911415   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.911437   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911485   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.911640   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.911649   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.911847   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.911848   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.911978   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.912119   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:27:00.023116   66841 ssh_runner.go:195] Run: systemctl --version
	I0829 20:27:00.029346   66841 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:27:00.169122   66841 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:27:00.176823   66841 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:27:00.176913   66841 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:27:00.194795   66841 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:27:00.194836   66841 start.go:495] detecting cgroup driver to use...
	I0829 20:27:00.194906   66841 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:27:00.212145   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:27:00.226584   66841 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:27:00.226656   66841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:27:00.240525   66841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:27:00.256847   66841 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:27:00.371938   66841 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:27:00.516891   66841 docker.go:233] disabling docker service ...
	I0829 20:27:00.516964   66841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:27:00.531127   66841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:27:00.543483   66841 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:27:00.672033   66841 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:27:00.794828   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:27:00.809204   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:27:00.828484   66841 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:27:00.828547   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.839273   66841 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:27:00.839344   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.850336   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.860980   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.871661   66841 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:27:00.884343   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.895190   66841 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.912700   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.923383   66841 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:27:00.934168   66841 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:27:00.934231   66841 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:27:00.948181   66841 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 20:27:00.959121   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:27:01.072055   66841 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:27:01.163024   66841 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:27:01.163104   66841 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:27:01.167949   66841 start.go:563] Will wait 60s for crictl version
	I0829 20:27:01.168011   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.171707   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:27:01.212950   66841 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:27:01.213031   66841 ssh_runner.go:195] Run: crio --version
	I0829 20:27:01.242181   66841 ssh_runner.go:195] Run: crio --version
	I0829 20:27:01.276389   66841 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
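The sed edits above converge on a small CRI-O drop-in: the pause image, cgroupfs as cgroup manager, conmon in the "pod" cgroup, and the unprivileged-port sysctl, followed by daemon-reload and a crio restart. As an alternative to patching 02-crio.conf in place, the same settings can be written wholesale; a sketch, with section names following CRI-O's config layout (not minikube's approach, which edits in place as logged):

    package crioconf

    import (
        "fmt"
        "os"
    )

    // writeDropIn renders the settings the sed commands above converge on.
    func writeDropIn(path, pauseImage string) error {
        conf := fmt.Sprintf(
            "[crio.image]\npause_image = %q\n\n"+
                "[crio.runtime]\ncgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"\n"+
                "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n",
            pauseImage)
        return os.WriteFile(path, []byte(conf), 0o644)
    }

Either way, crio only picks the settings up after the daemon-reload and restart seen in the log.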
	I0829 20:26:57.441729   68084 pod_ready.go:93] pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:57.441753   68084 pod_ready.go:82] duration metric: took 4.007206558s for pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:57.441762   68084 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:59.448210   68084 pod_ready.go:103] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.248692   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:59.748815   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.748264   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.249241   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.748894   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.249045   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.748765   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.248902   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.748333   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.277829   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:27:01.280762   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:27:01.281144   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:27:01.281171   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:27:01.281367   66841 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 20:27:01.285714   66841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:27:01.297903   66841 kubeadm.go:883] updating cluster {Name:no-preload-397724 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:27:01.298010   66841 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:27:01.298041   66841 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:27:01.331474   66841 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:27:01.331498   66841 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:27:01.331566   66841 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.331572   66841 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.331609   66841 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.331632   66841 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.331643   66841 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.331615   66841 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0829 20:27:01.331737   66841 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.331758   66841 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.333182   66841 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.333233   66841 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.333206   66841 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.333195   66841 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.333191   66841 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.333278   66841 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.333191   66841 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.333333   66841 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0829 20:27:01.507028   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.514096   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.526653   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.530292   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.531828   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.534432   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.550465   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0829 20:27:01.613161   66841 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0829 20:27:01.613209   66841 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.613287   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.631193   66841 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0829 20:27:01.631236   66841 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.631285   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.687868   66841 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0829 20:27:01.687911   66841 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.687967   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.700369   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.713036   66841 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0829 20:27:01.713102   66841 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.713159   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.722934   66841 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0829 20:27:01.722991   66841 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.723042   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.722941   66841 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0829 20:27:01.723130   66841 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.723159   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.785242   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.785246   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.785342   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.785391   66841 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0829 20:27:01.785438   66841 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.785450   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.785474   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.785479   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.785534   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.925322   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.925371   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.925374   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.925474   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.925518   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.925569   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.925593   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.072628   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:02.072690   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:02.072744   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:02.072822   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:02.072867   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.176999   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0829 20:27:02.177031   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:02.177503   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:02.177507   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.177572   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0829 20:27:02.177581   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0829 20:27:02.177678   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:02.177682   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:02.185515   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0829 20:27:02.185585   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.185624   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:02.259015   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0829 20:27:02.259076   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0829 20:27:02.259087   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0829 20:27:02.259106   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.259113   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0829 20:27:02.259138   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0829 20:27:02.259147   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.259155   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:02.259152   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 20:27:02.259139   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0829 20:27:02.259157   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:02.259240   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:01.208076   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:03.208339   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:01.954153   68084 pod_ready.go:103] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:03.454991   68084 pod_ready.go:93] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:03.455023   68084 pod_ready.go:82] duration metric: took 6.013253793s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:03.455036   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:05.461938   68084 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:04.249082   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:04.748738   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.248398   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.749056   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.248693   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.748904   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.249145   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.749131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.248774   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.748444   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
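The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above (process 67607) are minikube polling at roughly 500ms intervals for the apiserver process to appear before it probes the health endpoint. A minimal sketch of that wait loop, assuming local execution in place of minikube's ssh_runner; `waitForProcess` is an illustrative name, not minikube's API:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until it finds a process matching
// pattern or the timeout elapses; pgrep exits 0 only on a match.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 4*time.Minute))
}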
	I0829 20:27:04.630344   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.371149915s)
	I0829 20:27:04.630373   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.371188324s)
	I0829 20:27:04.630410   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.371191825s)
	I0829 20:27:04.630432   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0829 20:27:04.630413   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0829 20:27:04.630379   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0829 20:27:04.630465   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.371187188s)
	I0829 20:27:04.630478   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:04.630481   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0829 20:27:04.630561   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:06.684986   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.054398317s)
	I0829 20:27:06.685019   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0829 20:27:06.685047   66841 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:06.685098   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:05.707657   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:07.708034   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:06.965873   68084 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.965904   68084 pod_ready.go:82] duration metric: took 3.51085868s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.965918   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.976464   68084 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.976489   68084 pod_ready.go:82] duration metric: took 10.562771ms for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.976502   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b4ffx" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.982178   68084 pod_ready.go:93] pod "kube-proxy-b4ffx" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.982197   68084 pod_ready.go:82] duration metric: took 5.687889ms for pod "kube-proxy-b4ffx" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.982205   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.987316   68084 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.987333   68084 pod_ready.go:82] duration metric: took 5.122275ms for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.987342   68084 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:08.994794   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:11.493940   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:09.248746   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:09.748722   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.249074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.748647   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.248236   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.749057   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.249227   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.748688   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.749298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.365120   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.679993065s)
	I0829 20:27:10.365150   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0829 20:27:10.365182   66841 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:10.365256   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:12.122371   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.757087653s)
	I0829 20:27:12.122409   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0829 20:27:12.122434   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:12.122564   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:13.575108   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.45251018s)
	I0829 20:27:13.575137   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0829 20:27:13.575165   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:13.575210   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:09.708364   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:11.708491   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:14.207383   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:13.494124   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:15.993564   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:14.249254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:14.748957   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.249229   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.749137   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.248967   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.748254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.248929   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.748339   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.248666   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.748712   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.742286   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.16705417s)
	I0829 20:27:15.742320   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0829 20:27:15.742348   66841 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:15.742398   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:16.391977   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0829 20:27:16.392017   66841 cache_images.go:123] Successfully loaded all cached images
	I0829 20:27:16.392022   66841 cache_images.go:92] duration metric: took 15.060512795s to LoadCachedImages
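The image-load phase above follows one pattern per image: the docker daemon lookup fails (no daemon on the node), `podman image inspect` finds nothing, `crictl rmi` clears any stale tag, and the cached tarball is copied to /var/lib/minikube/images (or skipped if already there) and loaded serially with `sudo podman load -i`, 15.06s in total here. A rough sketch of that loop; `runNode` is a hypothetical stand-in for minikube's ssh_runner that simply execs locally:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

func runNode(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func loadCachedImages(tarballs []string) error {
	for _, src := range tarballs {
		dst := filepath.Join("/var/lib/minikube/images", filepath.Base(src))
		if runNode("stat", "-c", "%s %y", dst) == nil {
			fmt.Println("copy: skipping", dst, "(exists)") // as in the log above
		} // else: scp src -> dst would happen here in the real flow
		// Loads run one at a time; the log shows each "podman load -i"
		// completing before the next begins.
		if err := runNode("sudo", "podman", "load", "-i", dst); err != nil {
			return fmt.Errorf("load %s: %w", dst, err)
		}
	}
	return nil
}

func main() {
	fmt.Println(loadCachedImages([]string{
		"/home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0",
	}))
}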
	I0829 20:27:16.392034   66841 kubeadm.go:934] updating node { 192.168.50.214 8443 v1.31.0 crio true true} ...
	I0829 20:27:16.392139   66841 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-397724 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:27:16.392203   66841 ssh_runner.go:195] Run: crio config
	I0829 20:27:16.445382   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:27:16.445406   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:27:16.445420   66841 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:27:16.445448   66841 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.214 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-397724 NodeName:no-preload-397724 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:27:16.445612   66841 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-397724"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
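The generated kubeadm.yaml above stacks four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), rendered from the options struct logged at kubeadm.go:181. A hypothetical sketch of rendering just the InitConfiguration document with text/template; the `params` struct and its field names are illustrative, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

type params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Render with the values seen in the log above.
	if err := t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.50.214",
		BindPort:         8443,
		NodeName:         "no-preload-397724",
	}); err != nil {
		panic(err)
	}
}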
	
	I0829 20:27:16.445671   66841 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:27:16.456505   66841 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:27:16.456560   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:27:16.467361   66841 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0829 20:27:16.484700   66841 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:27:16.503026   66841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0829 20:27:16.519867   66841 ssh_runner.go:195] Run: grep 192.168.50.214	control-plane.minikube.internal$ /etc/hosts
	I0829 20:27:16.523648   66841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:27:16.535642   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:27:16.671027   66841 ssh_runner.go:195] Run: sudo systemctl start kubelet
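The /bin/bash one-liner above keeps the control-plane.minikube.internal entry idempotent: `grep -v` strips any previous line for the alias, the fresh IP is appended, and the result replaces /etc/hosts via a temp file and `sudo cp`. The same logic as a Go sketch (the IP and alias come from the log; error handling simplified, and the final privileged copy is only printed):

package main

import (
	"fmt"
	"os"
	"strings"
)

const (
	hostName = "control-plane.minikube.internal"
	hostIP   = "192.168.50.214"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Drop any existing entry for the alias (grep -v), then re-append it.
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+hostName) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, hostIP+"\t"+hostName)
	tmp := fmt.Sprintf("/tmp/hosts.%d", os.Getpid())
	if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("staged:", tmp) // real flow: sudo cp tmp /etc/hosts
}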
	I0829 20:27:16.688692   66841 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724 for IP: 192.168.50.214
	I0829 20:27:16.688712   66841 certs.go:194] generating shared ca certs ...
	I0829 20:27:16.688727   66841 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:27:16.688883   66841 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:27:16.688944   66841 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:27:16.688957   66841 certs.go:256] generating profile certs ...
	I0829 20:27:16.689053   66841 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.key
	I0829 20:27:16.689132   66841 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.key.1f535ae9
	I0829 20:27:16.689182   66841 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.key
	I0829 20:27:16.689360   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:27:16.689400   66841 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:27:16.689415   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:27:16.689450   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:27:16.689504   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:27:16.689540   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:27:16.689596   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:27:16.690277   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:27:16.747582   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:27:16.782064   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:27:16.816382   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:27:16.851548   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 20:27:16.882919   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:27:16.907439   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:27:16.932392   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:27:16.957451   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:27:16.982482   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:27:17.006032   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:27:17.030052   66841 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:27:17.047792   66841 ssh_runner.go:195] Run: openssl version
	I0829 20:27:17.053922   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:27:17.065219   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.069592   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.069647   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.075853   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:27:17.086727   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:27:17.097935   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.102198   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.102252   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.108031   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:27:17.119868   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:27:17.131513   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.136434   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.136497   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.142219   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:27:17.153448   66841 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:27:17.158375   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:27:17.165156   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:27:17.170927   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:27:17.176669   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:27:17.182293   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:27:17.187936   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
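Each `openssl x509 -noout -in <cert> -checkend 86400` run above exits 0 only if the certificate is still valid 24 hours from now; that is how minikube decides the existing control-plane certs can be reused rather than regenerated. An equivalent check in Go with crypto/x509 (`checkend` is an illustrative helper name):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path is still valid
// at least d from now, roughly openssl's -checkend semantics.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkend("/var/lib/minikube/certs/front-proxy-client.crt", 86400*time.Second)
	fmt.Println(ok, err)
}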
	I0829 20:27:17.193572   66841 kubeadm.go:392] StartCluster: {Name:no-preload-397724 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:27:17.193682   66841 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:27:17.193754   66841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:27:17.238327   66841 cri.go:89] found id: ""
	I0829 20:27:17.238392   66841 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:27:17.248923   66841 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:27:17.248943   66841 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:27:17.248984   66841 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:27:17.263143   66841 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:27:17.264260   66841 kubeconfig.go:125] found "no-preload-397724" server: "https://192.168.50.214:8443"
	I0829 20:27:17.266448   66841 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:27:17.276347   66841 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.214
	I0829 20:27:17.276378   66841 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:27:17.276389   66841 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:27:17.276440   66841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:27:17.311409   66841 cri.go:89] found id: ""
	I0829 20:27:17.311476   66841 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:27:17.329204   66841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:27:17.339063   66841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:27:17.339079   66841 kubeadm.go:157] found existing configuration files:
	
	I0829 20:27:17.339118   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:27:17.348268   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:27:17.348324   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:27:17.357596   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:27:17.366504   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:27:17.366575   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:27:17.376068   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:27:17.385156   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:27:17.385220   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:27:17.394890   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:27:17.404213   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:27:17.404283   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
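The grep/rm sequence above is the stale-kubeconfig cleanup: any of admin.conf, kubelet.conf, controller-manager.conf, or scheduler.conf that does not reference https://control-plane.minikube.internal:8443 is removed so the subsequent `kubeadm init phase kubeconfig all` regenerates it. Here all four greps exit with status 2 because the files do not exist yet, so the `rm -f` calls are no-ops. A compact sketch of the same rule, treating a missing file and a non-matching file alike:

package main

import (
	"bytes"
	"fmt"
	"os"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		// Missing or non-matching config is stale: remove it so
		// "kubeadm init phase kubeconfig" writes a fresh one.
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			fmt.Println("removing stale", f)
			os.Remove(f) // rm -f semantics: ignore errors
		}
	}
}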
	I0829 20:27:17.413669   66841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:27:17.423307   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:17.536003   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:17.990605   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.217809   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.297100   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.421185   66841 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:27:18.421283   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.922043   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.209618   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:18.707544   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:17.993609   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:19.994469   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:19.248924   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.248851   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.748547   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.248298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.748802   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.248680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.748271   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.248491   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.748803   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.422030   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.442023   66841 api_server.go:72] duration metric: took 1.020839747s to wait for apiserver process to appear ...
	I0829 20:27:19.442047   66841 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:27:19.442070   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.444156   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:27:22.444192   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:27:22.444211   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.466228   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:27:22.466258   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:27:22.942835   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.949338   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:27:22.949360   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:27:23.443069   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:23.447845   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:27:23.447876   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:27:23.942372   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:23.946517   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I0829 20:27:23.953497   66841 api_server.go:141] control plane version: v1.31.0
	I0829 20:27:23.953522   66841 api_server.go:131] duration metric: took 4.511467637s to wait for apiserver health ...
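The 403 / 500 / 200 sequence above is the normal apiserver startup progression: anonymous /healthz requests are Forbidden until the RBAC bootstrap roles exist, then the endpoint returns 500 while the remaining poststarthooks ([-]poststarthook/rbac/bootstrap-roles and friends) finish, and finally 200 with body "ok". A minimal polling sketch of that health wait, assuming the self-signed endpoint is probed with TLS verification disabled (a real client would pin minikube's CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is self-signed from minikube's CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
			// 403 (anonymous user) and 500 (poststarthooks pending)
			// both mean "keep waiting", matching the log above.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.50.214:8443/healthz", time.Minute))
}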
	I0829 20:27:23.953530   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:27:23.953536   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:27:23.955180   66841 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:27:23.956396   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:27:23.969429   66841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 20:27:24.000989   66841 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:27:24.014200   66841 system_pods.go:59] 8 kube-system pods found
	I0829 20:27:24.014233   66841 system_pods.go:61] "coredns-6f6b679f8f-g7xxs" [f0148527-2146-4153-aa20-5ac97b664027] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:27:24.014240   66841 system_pods.go:61] "etcd-no-preload-397724" [f04b5ee4-f439-470a-b298-1a9ed569db70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:27:24.014248   66841 system_pods.go:61] "kube-apiserver-no-preload-397724" [2328f327-1744-4785-9266-3f992b977ef8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:27:24.014254   66841 system_pods.go:61] "kube-controller-manager-no-preload-397724" [0e63f04d-8627-45e9-ac80-70a0fe63f5db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:27:24.014260   66841 system_pods.go:61] "kube-proxy-57kbt" [9f85ce17-85a0-4a52-bdaf-4e3aee4d1a98] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:27:24.014267   66841 system_pods.go:61] "kube-scheduler-no-preload-397724" [106821c6-2444-470a-bac1-78838c0b1982] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:27:24.014273   66841 system_pods.go:61] "metrics-server-6867b74b74-668dg" [e3f3ab24-7777-40b0-a54c-00a294e7e68e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:27:24.014280   66841 system_pods.go:61] "storage-provisioner" [146bd02a-8f50-4d19-a188-4adc2bcc0a43] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:27:24.014288   66841 system_pods.go:74] duration metric: took 13.275941ms to wait for pod list to return data ...
	I0829 20:27:24.014298   66841 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:27:24.018932   66841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:27:24.018956   66841 node_conditions.go:123] node cpu capacity is 2
	I0829 20:27:24.018966   66841 node_conditions.go:105] duration metric: took 4.661993ms to run NodePressure ...
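
The pod listing and the NodePressure check above are plain API reads: enumerate kube-system pods, then read each node's capacity (here 2 CPUs and 17734596Ki of ephemeral storage). A client-go sketch of the equivalent calls, assuming a kubeconfig at the default home location rather than minikube's node-local /var/lib/minikube/kubeconfig:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// "waiting for kube-system pods to appear": a single List call,
	// retried by the caller until it returns a non-empty set.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	// The NodePressure capacity read: CPU and ephemeral storage per node.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}
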
	I0829 20:27:24.018981   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:21.207144   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:23.208728   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:22.493988   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:24.494152   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:24.248456   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:24.748347   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.248337   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.748905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.248912   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.749302   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.249058   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.749105   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.248548   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.748298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
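
Note that four test processes log here concurrently (PIDs 66841, 66989, 67607 and 68084), so their lines interleave. The repeated pgrep calls above all belong to 67607 which, judging by its use of the v1.20.0 kubectl binary further down, is the old-k8s-version cluster; each line is one iteration of a roughly half-second retry loop waiting for a kube-apiserver process to appear. A sketch of that loop, run locally instead of over SSH and with the interval assumed:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process whose full command line
// matches pattern exists, or timeout elapses. pgrep exits 0 and prints
// the PID on a match, and exits 1 when nothing matches.
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
		if err == nil {
			return string(out), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print("found pid: ", pid)
}
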
	I0829 20:27:24.305237   66841 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:27:24.310640   66841 kubeadm.go:739] kubelet initialised
	I0829 20:27:24.310666   66841 kubeadm.go:740] duration metric: took 5.402212ms waiting for restarted kubelet to initialise ...
	I0829 20:27:24.310679   66841 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:27:24.316568   66841 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:26.325035   66841 pod_ready.go:103] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:28.336627   66841 pod_ready.go:103] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"False"
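
The pod_ready lines track one thing: whether a pod's PodReady condition has become True. Here, a pod_ready.go:103 line is one poll observing "Ready":"False", and the :93/:82 pair marks success plus the elapsed duration. A hedged client-go sketch of such a bounded wait (500ms interval assumed; this is not minikube's actual pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady blocks until the named pod's PodReady condition is True,
// polling every 500ms for at most timeout.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-6f6b679f8f-g7xxs", 4*time.Minute))
}
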
	I0829 20:27:25.706496   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:27.708228   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:26.992949   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:28.993682   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:30.993877   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:29.248994   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:29.749020   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.248983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.748247   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:31.249052   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:31.249133   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:31.293442   67607 cri.go:89] found id: ""
	I0829 20:27:31.293466   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.293473   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:31.293479   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:31.293527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:31.333976   67607 cri.go:89] found id: ""
	I0829 20:27:31.333999   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.334006   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:31.334011   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:31.334055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:31.373680   67607 cri.go:89] found id: ""
	I0829 20:27:31.373707   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.373715   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:31.373720   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:31.373766   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:31.407798   67607 cri.go:89] found id: ""
	I0829 20:27:31.407824   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.407832   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:31.407837   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:31.407893   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:31.444409   67607 cri.go:89] found id: ""
	I0829 20:27:31.444437   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.444445   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:31.444451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:31.444512   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:31.479313   67607 cri.go:89] found id: ""
	I0829 20:27:31.479333   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.479341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:31.479347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:31.479403   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:31.516056   67607 cri.go:89] found id: ""
	I0829 20:27:31.516089   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.516100   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:31.516108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:31.516168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:31.555324   67607 cri.go:89] found id: ""
	I0829 20:27:31.555349   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.555357   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:31.555365   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:31.555375   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:31.626397   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:31.626434   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:31.672006   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:31.672038   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:31.724691   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:31.724727   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:31.740283   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:31.740324   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:31.874007   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
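
This whole round for 67607 is a diagnostics sweep over an apiserver-less node: crictl ps -a --quiet --name=<component> is run for each control-plane component (plus kindnet and kubernetes-dashboard) and finds nothing, so logs.go falls back to journalctl for the kubelet and CRI-O units, dmesg, and container status. The describe-nodes step then fails with "connection refused" on localhost:8443 for the same underlying reason: no kube-apiserver container exists to answer. A sketch of the container sweep, shelling out to crictl as the log does (run locally here rather than via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listByName mirrors "sudo crictl ps -a --quiet --name=<name>": it returns
// the IDs of all containers, running or exited, whose name matches. An
// empty result is what logs.go reports as
// `No container was found matching "<name>"`.
func listByName(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	} {
		ids, err := listByName(name)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
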
	I0829 20:27:29.824509   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:29.824530   66841 pod_ready.go:82] duration metric: took 5.507939145s for pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:29.824547   66841 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:31.833646   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:30.207213   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:32.706352   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:32.993932   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:35.494511   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:34.374203   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:34.387817   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:34.387888   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:34.423254   67607 cri.go:89] found id: ""
	I0829 20:27:34.423279   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.423286   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:34.423296   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:34.423343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:34.457741   67607 cri.go:89] found id: ""
	I0829 20:27:34.457768   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.457775   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:34.457781   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:34.457827   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:34.498432   67607 cri.go:89] found id: ""
	I0829 20:27:34.498457   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.498464   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:34.498469   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:34.498523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:34.534290   67607 cri.go:89] found id: ""
	I0829 20:27:34.534317   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.534324   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:34.534330   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:34.534380   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:34.570878   67607 cri.go:89] found id: ""
	I0829 20:27:34.570909   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.570919   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:34.570928   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:34.570986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:34.615735   67607 cri.go:89] found id: ""
	I0829 20:27:34.615762   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.615769   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:34.615775   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:34.615824   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:34.656667   67607 cri.go:89] found id: ""
	I0829 20:27:34.656706   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.656721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:34.656730   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:34.656779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:34.708906   67607 cri.go:89] found id: ""
	I0829 20:27:34.708928   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.708937   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:34.708947   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:34.708962   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:34.767382   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:34.767417   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:34.786523   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:34.786574   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:34.872832   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:34.872857   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:34.872871   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:34.954581   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:34.954620   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:37.497810   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:37.511479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:37.511539   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:37.547930   67607 cri.go:89] found id: ""
	I0829 20:27:37.547962   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.547972   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:37.547980   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:37.548035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:37.585281   67607 cri.go:89] found id: ""
	I0829 20:27:37.585304   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.585312   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:37.585318   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:37.585365   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:37.622201   67607 cri.go:89] found id: ""
	I0829 20:27:37.622229   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.622241   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:37.622246   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:37.622295   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:37.657248   67607 cri.go:89] found id: ""
	I0829 20:27:37.657274   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.657281   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:37.657289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:37.657335   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:37.691674   67607 cri.go:89] found id: ""
	I0829 20:27:37.691703   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.691711   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:37.691716   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:37.691764   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:37.729523   67607 cri.go:89] found id: ""
	I0829 20:27:37.729548   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.729557   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:37.729562   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:37.729609   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:37.764601   67607 cri.go:89] found id: ""
	I0829 20:27:37.764629   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.764637   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:37.764643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:37.764705   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:37.799228   67607 cri.go:89] found id: ""
	I0829 20:27:37.799259   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.799270   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:37.799281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:37.799301   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:37.848128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:37.848158   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:37.862610   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:37.862640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:37.936859   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:37.936888   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:37.936903   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:38.013647   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:38.013681   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:34.331889   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:36.332334   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.329545   66841 pod_ready.go:93] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.329566   66841 pod_ready.go:82] duration metric: took 7.50501178s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.329576   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.333442   66841 pod_ready.go:93] pod "kube-apiserver-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.333458   66841 pod_ready.go:82] duration metric: took 3.876755ms for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.333467   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.336952   66841 pod_ready.go:93] pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.336968   66841 pod_ready.go:82] duration metric: took 3.49531ms for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.336976   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-57kbt" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.340368   66841 pod_ready.go:93] pod "kube-proxy-57kbt" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.340383   66841 pod_ready.go:82] duration metric: took 3.401844ms for pod "kube-proxy-57kbt" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.340396   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.344111   66841 pod_ready.go:93] pod "kube-scheduler-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.344125   66841 pod_ready.go:82] duration metric: took 3.723924ms for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.344132   66841 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:34.708682   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.206876   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.997827   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:40.494840   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:40.551395   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:40.568100   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:40.568181   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:40.616582   67607 cri.go:89] found id: ""
	I0829 20:27:40.616611   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.616623   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:40.616631   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:40.616695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:40.690580   67607 cri.go:89] found id: ""
	I0829 20:27:40.690620   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.690631   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:40.690638   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:40.690695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:40.733624   67607 cri.go:89] found id: ""
	I0829 20:27:40.733653   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.733662   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:40.733670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:40.733733   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:40.767499   67607 cri.go:89] found id: ""
	I0829 20:27:40.767528   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.767538   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:40.767546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:40.767619   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:40.806973   67607 cri.go:89] found id: ""
	I0829 20:27:40.807002   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.807009   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:40.807015   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:40.807079   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:40.842311   67607 cri.go:89] found id: ""
	I0829 20:27:40.842334   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.842341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:40.842347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:40.842401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:40.880208   67607 cri.go:89] found id: ""
	I0829 20:27:40.880238   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.880248   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:40.880255   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:40.880309   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:40.918395   67607 cri.go:89] found id: ""
	I0829 20:27:40.918424   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.918435   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:40.918445   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:40.918459   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:40.972396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:40.972437   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:40.986136   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:40.986169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:41.064600   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:41.064623   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:41.064634   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:41.146653   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:41.146687   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:43.687773   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:43.701576   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:43.701645   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:43.737259   67607 cri.go:89] found id: ""
	I0829 20:27:43.737282   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.737289   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:43.737299   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:43.737346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:43.772678   67607 cri.go:89] found id: ""
	I0829 20:27:43.772702   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.772709   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:43.772714   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:43.772776   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:43.806788   67607 cri.go:89] found id: ""
	I0829 20:27:43.806821   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.806831   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:43.806839   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:43.806900   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:39.350484   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:41.352279   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:43.850564   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:39.707977   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:42.207630   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:42.993571   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:44.994696   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:43.841738   67607 cri.go:89] found id: ""
	I0829 20:27:43.841759   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.841767   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:43.841772   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:43.841829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:43.878420   67607 cri.go:89] found id: ""
	I0829 20:27:43.878449   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.878459   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:43.878466   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:43.878527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:43.914307   67607 cri.go:89] found id: ""
	I0829 20:27:43.914335   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.914345   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:43.914352   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:43.914413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:43.958827   67607 cri.go:89] found id: ""
	I0829 20:27:43.958853   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.958865   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:43.958871   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:43.958935   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:43.997397   67607 cri.go:89] found id: ""
	I0829 20:27:43.997423   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.997432   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:43.997442   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:43.997455   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:44.049245   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:44.049280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:44.063473   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:44.063511   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:44.131628   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:44.131651   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:44.131666   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:44.210826   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:44.210854   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:46.754905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:46.769531   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:46.769588   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:46.805245   67607 cri.go:89] found id: ""
	I0829 20:27:46.805272   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.805280   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:46.805285   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:46.805338   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:46.843606   67607 cri.go:89] found id: ""
	I0829 20:27:46.843637   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.843646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:46.843654   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:46.843710   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:46.880300   67607 cri.go:89] found id: ""
	I0829 20:27:46.880326   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.880333   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:46.880338   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:46.880387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:46.923537   67607 cri.go:89] found id: ""
	I0829 20:27:46.923562   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.923569   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:46.923574   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:46.923620   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:46.957774   67607 cri.go:89] found id: ""
	I0829 20:27:46.957806   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.957817   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:46.957826   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:46.957887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:46.996972   67607 cri.go:89] found id: ""
	I0829 20:27:46.996995   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.997005   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:46.997013   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:46.997056   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:47.030560   67607 cri.go:89] found id: ""
	I0829 20:27:47.030588   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.030606   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:47.030612   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:47.030665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:47.068654   67607 cri.go:89] found id: ""
	I0829 20:27:47.068678   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.068686   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:47.068694   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:47.068706   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:47.082335   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:47.082367   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:47.162792   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:47.162817   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:47.162829   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:47.241456   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:47.241491   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:47.282249   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:47.282274   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:45.850673   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:47.850836   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:44.707198   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:46.707222   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.207556   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:46.995302   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.498812   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.836268   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:49.850415   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:49.850491   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:49.887816   67607 cri.go:89] found id: ""
	I0829 20:27:49.887843   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.887851   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:49.887856   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:49.887916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:49.923701   67607 cri.go:89] found id: ""
	I0829 20:27:49.923735   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.923745   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:49.923755   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:49.923818   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:49.958197   67607 cri.go:89] found id: ""
	I0829 20:27:49.958225   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.958236   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:49.958244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:49.958313   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:49.995333   67607 cri.go:89] found id: ""
	I0829 20:27:49.995361   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.995373   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:49.995380   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:49.995439   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:50.034345   67607 cri.go:89] found id: ""
	I0829 20:27:50.034375   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.034382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:50.034387   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:50.034438   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:50.070324   67607 cri.go:89] found id: ""
	I0829 20:27:50.070355   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.070365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:50.070374   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:50.070434   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:50.107301   67607 cri.go:89] found id: ""
	I0829 20:27:50.107326   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.107334   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:50.107340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:50.107400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:50.144748   67607 cri.go:89] found id: ""
	I0829 20:27:50.144778   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.144788   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:50.144800   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:50.144816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:50.183576   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:50.183606   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:50.236716   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:50.236750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:50.251589   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:50.251612   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:50.317816   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:50.317840   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:50.317855   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:52.894572   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:52.908081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:52.908149   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:52.945272   67607 cri.go:89] found id: ""
	I0829 20:27:52.945299   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.945309   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:52.945317   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:52.945377   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:52.980237   67607 cri.go:89] found id: ""
	I0829 20:27:52.980262   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.980270   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:52.980275   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:52.980325   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:53.017894   67607 cri.go:89] found id: ""
	I0829 20:27:53.017922   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.017929   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:53.017935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:53.017991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:53.052577   67607 cri.go:89] found id: ""
	I0829 20:27:53.052603   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.052611   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:53.052616   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:53.052667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:53.093414   67607 cri.go:89] found id: ""
	I0829 20:27:53.093444   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.093455   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:53.093462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:53.093523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:53.130794   67607 cri.go:89] found id: ""
	I0829 20:27:53.130825   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.130837   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:53.130845   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:53.130902   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:53.163793   67607 cri.go:89] found id: ""
	I0829 20:27:53.163819   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.163827   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:53.163832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:53.163882   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:53.204824   67607 cri.go:89] found id: ""
	I0829 20:27:53.204852   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.204862   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:53.204872   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:53.204885   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:53.243411   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:53.243440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:53.296611   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:53.296642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:53.310909   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:53.310943   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:53.385768   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:53.385790   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:53.385801   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:49.851712   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:52.350295   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:51.711115   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:54.207340   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:51.993943   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:53.996334   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.494226   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:55.966801   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:55.980852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:55.980933   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:56.017682   67607 cri.go:89] found id: ""
	I0829 20:27:56.017707   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.017716   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:56.017722   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:56.017767   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:56.051556   67607 cri.go:89] found id: ""
	I0829 20:27:56.051584   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.051594   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:56.051600   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:56.051665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:56.095301   67607 cri.go:89] found id: ""
	I0829 20:27:56.095330   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.095340   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:56.095348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:56.095408   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:56.131161   67607 cri.go:89] found id: ""
	I0829 20:27:56.131195   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.131205   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:56.131213   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:56.131269   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:56.166611   67607 cri.go:89] found id: ""
	I0829 20:27:56.166637   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.166645   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:56.166651   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:56.166713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:56.202818   67607 cri.go:89] found id: ""
	I0829 20:27:56.202846   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.202856   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:56.202864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:56.202923   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:56.237855   67607 cri.go:89] found id: ""
	I0829 20:27:56.237883   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.237891   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:56.237897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:56.237955   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:56.272402   67607 cri.go:89] found id: ""
	I0829 20:27:56.272426   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.272433   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:56.272441   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:56.272452   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:56.351628   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:56.351653   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:56.389525   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:56.389559   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:56.444952   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:56.444989   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:56.459731   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:56.459759   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:56.536888   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
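Process 67607 belongs to the old-k8s-version cluster (note the pinned /var/lib/minikube/binaries/v1.20.0/kubectl path). Each collection pass first probes for a running apiserver process with pgrep, then asks crictl for every control-plane container by name; all of the queries return an empty ID list, which is why each component ends in No container was found matching "...". Below is a rough stand-in for that probe, assuming crictl is installed on the target machine — minikube runs these commands over SSH via ssh_runner, which the sketch omits.

    // Hypothetical stand-in for the cri.go listing step above: one
    // "crictl ps -a --quiet --name=<component>" call per control-plane
    // component, treating empty output as "no container found".
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("crictl query for %q failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", name)
                continue
            }
            fmt.Printf("found ids for %q: %v\n", name, ids)
        }
    }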
	I0829 20:27:54.350358   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.350727   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.352884   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.208050   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.706897   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.993153   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:00.993544   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:59.037744   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:59.051868   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:59.051938   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:59.087436   67607 cri.go:89] found id: ""
	I0829 20:27:59.087461   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.087467   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:59.087474   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:59.087531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:59.123729   67607 cri.go:89] found id: ""
	I0829 20:27:59.123757   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.123765   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:59.123771   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:59.123825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:59.168649   67607 cri.go:89] found id: ""
	I0829 20:27:59.168682   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.168690   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:59.168696   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:59.168753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:59.209770   67607 cri.go:89] found id: ""
	I0829 20:27:59.209791   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.209803   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:59.209808   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:59.209854   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:59.248358   67607 cri.go:89] found id: ""
	I0829 20:27:59.248384   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.248392   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:59.248398   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:59.248445   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:59.281770   67607 cri.go:89] found id: ""
	I0829 20:27:59.281797   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.281805   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:59.281811   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:59.281870   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:59.317255   67607 cri.go:89] found id: ""
	I0829 20:27:59.317285   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.317295   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:59.317302   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:59.317363   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:59.354301   67607 cri.go:89] found id: ""
	I0829 20:27:59.354324   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.354332   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:59.354339   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:59.354352   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:59.438346   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:59.438382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:59.482482   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:59.482513   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:59.540926   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:59.540961   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:59.555221   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:59.555258   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:59.622114   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:02.123276   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:02.137435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:02.137502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:02.176310   67607 cri.go:89] found id: ""
	I0829 20:28:02.176340   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.176347   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:02.176355   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:02.176414   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:02.216511   67607 cri.go:89] found id: ""
	I0829 20:28:02.216555   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.216562   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:02.216574   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:02.216625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:02.260116   67607 cri.go:89] found id: ""
	I0829 20:28:02.260149   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.260158   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:02.260164   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:02.260225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:02.301550   67607 cri.go:89] found id: ""
	I0829 20:28:02.301584   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.301600   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:02.301608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:02.301692   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:02.335916   67607 cri.go:89] found id: ""
	I0829 20:28:02.335948   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.335959   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:02.335967   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:02.336033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:02.372479   67607 cri.go:89] found id: ""
	I0829 20:28:02.372507   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.372515   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:02.372522   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:02.372584   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:02.406683   67607 cri.go:89] found id: ""
	I0829 20:28:02.406713   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.406721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:02.406727   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:02.406774   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:02.443130   67607 cri.go:89] found id: ""
	I0829 20:28:02.443156   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.443164   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:02.443173   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:02.443185   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:02.485747   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:02.485777   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:02.540106   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:02.540143   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:02.556158   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:02.556188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:02.637870   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:02.637900   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:02.637915   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:00.851416   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:03.351248   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:00.707716   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:02.708204   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:02.994108   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:04.994988   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:05.220330   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:05.233932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:05.233994   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:05.269046   67607 cri.go:89] found id: ""
	I0829 20:28:05.269072   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.269081   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:05.269087   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:05.269134   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:05.303963   67607 cri.go:89] found id: ""
	I0829 20:28:05.303989   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.303999   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:05.304006   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:05.304065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:05.340943   67607 cri.go:89] found id: ""
	I0829 20:28:05.340975   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.340985   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:05.340992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:05.341061   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:05.379551   67607 cri.go:89] found id: ""
	I0829 20:28:05.379582   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.379593   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:05.379601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:05.379659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:05.414229   67607 cri.go:89] found id: ""
	I0829 20:28:05.414256   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.414267   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:05.414274   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:05.414339   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:05.450212   67607 cri.go:89] found id: ""
	I0829 20:28:05.450241   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.450251   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:05.450258   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:05.450318   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:05.487415   67607 cri.go:89] found id: ""
	I0829 20:28:05.487451   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.487463   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:05.487470   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:05.487529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:05.521347   67607 cri.go:89] found id: ""
	I0829 20:28:05.521370   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.521383   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:05.521390   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:05.521402   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:05.572317   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:05.572350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:05.585651   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:05.585680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:05.653929   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:05.653950   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:05.653969   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:05.732843   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:05.732873   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
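With no control-plane containers to inspect, the collector falls back to the same five host-level sources on every pass, only in varying order: the kubelet and CRI-O journals, recent kernel warnings from dmesg, an overall container listing via crictl (with a docker fallback), and kubectl describe nodes using the pinned v1.20.0 binary. The sketch below replays those sources locally; the command strings are copied verbatim from the log, while the runner around them is an illustrative assumption.

    // Illustrative local replay of the five "Gathering logs for ..."
    // sources above. The command strings are taken verbatim from the
    // log; wrapping them in bash -c mirrors how they appear in ssh_runner.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range sources {
            fmt.Printf("Gathering logs for %s ...\n", s.name)
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            if err != nil {
                // "describe nodes" fails exactly as in the log while the
                // apiserver is down: exit status 1, connection refused.
                fmt.Printf("failed %s: %v\n", s.name, err)
                continue
            }
            fmt.Printf("%s: captured %d bytes\n", s.name, len(out))
        }
    }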
	I0829 20:28:08.281983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:08.295104   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:08.295166   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:08.328570   67607 cri.go:89] found id: ""
	I0829 20:28:08.328596   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.328605   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:08.328613   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:08.328684   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:08.363567   67607 cri.go:89] found id: ""
	I0829 20:28:08.363595   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.363605   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:08.363613   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:08.363672   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:08.399619   67607 cri.go:89] found id: ""
	I0829 20:28:08.399645   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.399653   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:08.399659   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:08.399707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:08.439252   67607 cri.go:89] found id: ""
	I0829 20:28:08.439283   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.439294   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:08.439301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:08.439357   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:08.477730   67607 cri.go:89] found id: ""
	I0829 20:28:08.477754   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.477762   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:08.477768   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:08.477834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:08.522045   67607 cri.go:89] found id: ""
	I0829 20:28:08.522066   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.522073   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:08.522079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:08.522137   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:08.560400   67607 cri.go:89] found id: ""
	I0829 20:28:08.560427   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.560434   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:08.560441   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:08.560504   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:08.599111   67607 cri.go:89] found id: ""
	I0829 20:28:08.599140   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.599150   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:08.599161   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:08.599175   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:08.681451   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:08.681487   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:08.722800   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:08.722835   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:08.779058   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:08.779089   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:08.796940   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:08.796963   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:28:05.852245   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:08.351402   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:04.708669   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:07.207124   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:07.493431   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:09.493794   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	W0829 20:28:08.868296   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
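Every describe nodes attempt fails identically because /var/lib/minikube/kubeconfig points kubectl at localhost:8443 and, with no kube-apiserver container running, nothing is listening on that port: the TCP connect is refused and kubectl exits with status 1. The symptom reduces to a plain dial — the address comes from the log; everything else here is illustrative.

    // Minimal reproduction of the "connection to the server
    // localhost:8443 was refused" symptom: with no apiserver listening,
    // the dial fails immediately with ECONNREFUSED instead of timing out.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // e.g. "connect: connection refused"
            return
        }
        defer conn.Close()
        fmt.Println("apiserver port is open")
    }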
	I0829 20:28:11.369316   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:11.384150   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:11.384225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:11.418452   67607 cri.go:89] found id: ""
	I0829 20:28:11.418480   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.418488   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:11.418494   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:11.418555   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:11.451359   67607 cri.go:89] found id: ""
	I0829 20:28:11.451389   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.451400   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:11.451408   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:11.451481   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:11.488408   67607 cri.go:89] found id: ""
	I0829 20:28:11.488436   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.488446   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:11.488453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:11.488510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:11.528311   67607 cri.go:89] found id: ""
	I0829 20:28:11.528340   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.528351   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:11.528359   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:11.528412   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:11.571345   67607 cri.go:89] found id: ""
	I0829 20:28:11.571372   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.571382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:11.571389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:11.571454   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:11.606812   67607 cri.go:89] found id: ""
	I0829 20:28:11.606839   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.606850   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:11.606857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:11.606918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:11.652687   67607 cri.go:89] found id: ""
	I0829 20:28:11.652710   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.652717   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:11.652722   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:11.652781   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:11.687583   67607 cri.go:89] found id: ""
	I0829 20:28:11.687628   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.687645   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:11.687655   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:11.687673   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:11.727052   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:11.727086   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:11.779116   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:11.779155   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:11.792911   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:11.792949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:11.868415   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:11.868443   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:11.868461   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:10.850225   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:13.351638   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:09.707347   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:11.709556   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.206996   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:11.994187   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.494457   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.447886   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:14.462144   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:14.462221   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:14.499160   67607 cri.go:89] found id: ""
	I0829 20:28:14.499185   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.499193   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:14.499200   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:14.499258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:14.545736   67607 cri.go:89] found id: ""
	I0829 20:28:14.545764   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.545774   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:14.545780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:14.545844   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:14.583626   67607 cri.go:89] found id: ""
	I0829 20:28:14.583664   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.583674   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:14.583682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:14.583744   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:14.619876   67607 cri.go:89] found id: ""
	I0829 20:28:14.619909   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.619917   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:14.619923   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:14.619975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:14.655750   67607 cri.go:89] found id: ""
	I0829 20:28:14.655778   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.655786   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:14.655791   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:14.655848   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:14.690759   67607 cri.go:89] found id: ""
	I0829 20:28:14.690785   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.690795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:14.690800   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:14.690850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:14.727238   67607 cri.go:89] found id: ""
	I0829 20:28:14.727269   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.727282   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:14.727289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:14.727344   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:14.765962   67607 cri.go:89] found id: ""
	I0829 20:28:14.765996   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.766006   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:14.766017   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:14.766033   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:14.835749   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:14.835779   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:14.835797   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:14.914075   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:14.914112   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:14.952684   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:14.952712   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:15.004598   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:15.004635   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
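Successive pgrep probes land roughly three seconds apart (20:28:05.22, 20:28:08.28, 20:28:11.37, ...), so the collector retries on a short fixed cadence rather than backing off. A fixed-interval approximation of that loop is sketched below; the ticker and the three-second figure are read off the timestamps above, not taken from minikube's scheduling code.

    // Fixed-interval retry loop approximating the ~3s cadence of the
    // "pgrep -xnf kube-apiserver.*minikube.*" probes in the log. pgrep
    // exits nonzero when no process matches, so err == nil means found.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        ticker := time.NewTicker(3 * time.Second)
        defer ticker.Stop()
        for range ticker.C {
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                fmt.Println("kube-apiserver process found; stop gathering fallback logs")
                return
            }
            fmt.Println("no kube-apiserver process yet; gathering logs again")
        }
    }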
	I0829 20:28:17.518949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:17.532175   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:17.532250   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:17.569943   67607 cri.go:89] found id: ""
	I0829 20:28:17.569971   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.569979   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:17.569985   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:17.570044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:17.605472   67607 cri.go:89] found id: ""
	I0829 20:28:17.605502   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.605510   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:17.605515   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:17.605566   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:17.641568   67607 cri.go:89] found id: ""
	I0829 20:28:17.641593   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.641603   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:17.641610   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:17.641669   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:17.680870   67607 cri.go:89] found id: ""
	I0829 20:28:17.680895   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.680905   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:17.680916   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:17.680981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:17.723546   67607 cri.go:89] found id: ""
	I0829 20:28:17.723576   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.723587   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:17.723594   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:17.723659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:17.757934   67607 cri.go:89] found id: ""
	I0829 20:28:17.757962   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.757973   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:17.757980   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:17.758028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:17.792641   67607 cri.go:89] found id: ""
	I0829 20:28:17.792670   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.792679   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:17.792685   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:17.792738   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:17.830776   67607 cri.go:89] found id: ""
	I0829 20:28:17.830800   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.830807   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:17.830815   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:17.830825   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:17.886331   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:17.886377   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:17.900111   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:17.900135   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:17.969538   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:17.969563   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:17.969577   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:18.050609   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:18.050649   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:15.850497   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:17.851663   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:16.707415   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:19.207313   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:16.994325   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:19.494247   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:20.590686   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:20.605066   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:20.605121   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:20.646028   67607 cri.go:89] found id: ""
	I0829 20:28:20.646058   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.646074   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:20.646082   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:20.646143   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:20.683433   67607 cri.go:89] found id: ""
	I0829 20:28:20.683469   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.683479   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:20.683487   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:20.683567   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:20.722737   67607 cri.go:89] found id: ""
	I0829 20:28:20.722765   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.722775   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:20.722782   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:20.722841   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:20.759777   67607 cri.go:89] found id: ""
	I0829 20:28:20.759800   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.759807   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:20.759812   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:20.759864   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:20.799142   67607 cri.go:89] found id: ""
	I0829 20:28:20.799164   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.799170   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:20.799176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:20.799223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:20.838331   67607 cri.go:89] found id: ""
	I0829 20:28:20.838357   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.838365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:20.838371   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:20.838427   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:20.878066   67607 cri.go:89] found id: ""
	I0829 20:28:20.878099   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.878110   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:20.878117   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:20.878175   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:20.928940   67607 cri.go:89] found id: ""
	I0829 20:28:20.928966   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.928975   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:20.928982   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:20.928993   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:20.984435   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:20.984471   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:21.005860   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:21.005900   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:21.084092   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:21.084123   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:21.084138   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:21.165971   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:21.166009   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:23.705033   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:23.718332   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:23.718390   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:23.753594   67607 cri.go:89] found id: ""
	I0829 20:28:23.753625   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.753635   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:23.753650   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:23.753715   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:23.791840   67607 cri.go:89] found id: ""
	I0829 20:28:23.791864   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.791872   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:23.791878   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:23.791930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:20.350028   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:22.350487   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:21.207839   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.707197   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:21.993965   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.994879   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.493735   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.837815   67607 cri.go:89] found id: ""
	I0829 20:28:23.837839   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.837846   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:23.837851   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:23.837908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:23.873155   67607 cri.go:89] found id: ""
	I0829 20:28:23.873184   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.873194   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:23.873201   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:23.873265   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:23.908728   67607 cri.go:89] found id: ""
	I0829 20:28:23.908757   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.908768   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:23.908774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:23.908834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:23.946286   67607 cri.go:89] found id: ""
	I0829 20:28:23.946310   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.946320   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:23.946328   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:23.946392   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:23.983078   67607 cri.go:89] found id: ""
	I0829 20:28:23.983105   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.983115   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:23.983129   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:23.983190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:24.020601   67607 cri.go:89] found id: ""
	I0829 20:28:24.020634   67607 logs.go:276] 0 containers: []
	W0829 20:28:24.020644   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:24.020654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:24.020669   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:24.034438   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:24.034463   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:24.103209   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:24.103230   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:24.103243   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:24.182977   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:24.183016   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:24.224743   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:24.224834   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:26.781507   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:26.794301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:26.794387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:26.827218   67607 cri.go:89] found id: ""
	I0829 20:28:26.827243   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.827250   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:26.827257   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:26.827303   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:26.862643   67607 cri.go:89] found id: ""
	I0829 20:28:26.862673   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.862685   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:26.862693   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:26.862743   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:26.898127   67607 cri.go:89] found id: ""
	I0829 20:28:26.898159   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.898169   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:26.898177   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:26.898237   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:26.932119   67607 cri.go:89] found id: ""
	I0829 20:28:26.932146   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.932167   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:26.932174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:26.932241   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:26.966380   67607 cri.go:89] found id: ""
	I0829 20:28:26.966413   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.966421   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:26.966427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:26.966478   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:27.004350   67607 cri.go:89] found id: ""
	I0829 20:28:27.004372   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.004379   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:27.004386   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:27.004436   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:27.041171   67607 cri.go:89] found id: ""
	I0829 20:28:27.041199   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.041206   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:27.041212   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:27.041257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:27.073993   67607 cri.go:89] found id: ""
	I0829 20:28:27.074031   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.074041   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:27.074053   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:27.074066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:27.148169   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:27.148199   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:27.148214   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:27.227174   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:27.227212   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:27.267180   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:27.267230   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:27.319034   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:27.319066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
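
Each cycle above first checks for a running apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*) and then asks the container runtime for containers of every control-plane component with sudo crictl ps -a --quiet --name=<component>; an empty ID list produces the "0 containers" and No container was found matching warnings. A hedged Go sketch of that enumeration loop (hypothetical helper, not minikube's cri.go):

    // list_components.go - a sketch of the enumeration pattern seen above: for
    // each control-plane component, ask crictl for matching container IDs and
    // report when none exist. Assumes sudo and crictl are available on the host.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"kubernetes-dashboard",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("crictl failed for %q: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out)) // --quiet prints one ID per line
    		if len(ids) == 0 {
    			// Corresponds to the 'No container was found matching' warnings.
    			fmt.Printf("no container found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
    	}
    }
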
	I0829 20:28:24.350754   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.850582   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.207974   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:28.707820   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:28.494090   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:30.994157   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
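
The interleaved pod_ready lines come from three other test processes (PIDs 66841, 66989 and 68084), each polling its cluster's metrics-server pod for the Ready condition on a roughly 2-second cadence. A minimal sketch of that kind of poll (hypothetical; the k8s-app=metrics-server label and the kubectl-based check are assumptions, not minikube's pod_ready.go):

    // wait_ready.go - a sketch of the readiness polling the pod_ready lines
    // imply: query the pod's Ready condition via kubectl until it reports True
    // or a deadline passes.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(5 * time.Minute)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl",
    			"-n", "kube-system", "get", "pod",
    			"-l", "k8s-app=metrics-server", // label is an assumption
    			"-o", `jsonpath={.items[0].status.conditions[?(@.type=="Ready")].status}`,
    		).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			fmt.Println("metrics-server is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // matches the ~2s cadence in the log
    	}
    	fmt.Println("timed out waiting for metrics-server to become Ready")
    }
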
	I0829 20:28:29.833497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:29.846883   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:29.846951   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:29.884133   67607 cri.go:89] found id: ""
	I0829 20:28:29.884163   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.884175   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:29.884182   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:29.884247   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:29.917594   67607 cri.go:89] found id: ""
	I0829 20:28:29.917618   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.917628   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:29.917636   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:29.917696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:29.952537   67607 cri.go:89] found id: ""
	I0829 20:28:29.952568   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.952576   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:29.952582   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:29.952630   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:29.988410   67607 cri.go:89] found id: ""
	I0829 20:28:29.988441   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.988448   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:29.988454   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:29.988511   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:30.026761   67607 cri.go:89] found id: ""
	I0829 20:28:30.026788   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.026796   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:30.026802   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:30.026861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:30.063010   67607 cri.go:89] found id: ""
	I0829 20:28:30.063037   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.063046   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:30.063054   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:30.063109   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:30.098067   67607 cri.go:89] found id: ""
	I0829 20:28:30.098093   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.098101   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:30.098107   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:30.098161   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:30.132887   67607 cri.go:89] found id: ""
	I0829 20:28:30.132914   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.132921   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:30.132928   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:30.132940   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:30.184955   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:30.184990   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:30.198966   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:30.199004   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:30.268950   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:30.268977   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:30.268991   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:30.354222   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:30.354260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:32.896554   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:32.911188   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:32.911271   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:32.945726   67607 cri.go:89] found id: ""
	I0829 20:28:32.945750   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.945758   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:32.945773   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:32.945829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:32.980234   67607 cri.go:89] found id: ""
	I0829 20:28:32.980267   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.980275   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:32.980281   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:32.980329   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:33.019031   67607 cri.go:89] found id: ""
	I0829 20:28:33.019063   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.019071   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:33.019076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:33.019126   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:33.056290   67607 cri.go:89] found id: ""
	I0829 20:28:33.056314   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.056322   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:33.056327   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:33.056391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:33.090038   67607 cri.go:89] found id: ""
	I0829 20:28:33.090068   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.090078   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:33.090086   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:33.090152   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:33.125742   67607 cri.go:89] found id: ""
	I0829 20:28:33.125774   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.125782   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:33.125787   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:33.125849   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:33.159019   67607 cri.go:89] found id: ""
	I0829 20:28:33.159047   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.159058   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:33.159065   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:33.159125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:33.197900   67607 cri.go:89] found id: ""
	I0829 20:28:33.197925   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.197933   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:33.197941   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:33.197955   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:33.250010   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:33.250040   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:33.263348   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:33.263374   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:33.342037   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:33.342065   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:33.342082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:33.423324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:33.423361   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:29.350275   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:31.350994   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:33.850866   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:30.713472   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:33.207271   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:32.995169   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.493980   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.963734   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:35.978648   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:35.978713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:36.015326   67607 cri.go:89] found id: ""
	I0829 20:28:36.015350   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.015358   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:36.015364   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:36.015411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:36.050840   67607 cri.go:89] found id: ""
	I0829 20:28:36.050869   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.050879   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:36.050886   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:36.050947   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:36.084048   67607 cri.go:89] found id: ""
	I0829 20:28:36.084076   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.084084   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:36.084090   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:36.084138   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:36.118655   67607 cri.go:89] found id: ""
	I0829 20:28:36.118682   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.118693   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:36.118702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:36.118762   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:36.153879   67607 cri.go:89] found id: ""
	I0829 20:28:36.153908   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.153918   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:36.153926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:36.153988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:36.199834   67607 cri.go:89] found id: ""
	I0829 20:28:36.199858   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.199866   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:36.199872   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:36.199927   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:36.238098   67607 cri.go:89] found id: ""
	I0829 20:28:36.238129   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.238139   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:36.238146   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:36.238208   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:36.272091   67607 cri.go:89] found id: ""
	I0829 20:28:36.272124   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.272135   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:36.272146   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:36.272162   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:36.338478   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:36.338498   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:36.338510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:36.418637   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:36.418671   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:36.458167   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:36.458194   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:36.508592   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:36.508630   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
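
Besides "describe nodes", each gathering pass runs the same four host-side collectors: journalctl for the kubelet and CRI-O units, a filtered dmesg, and a crictl/docker container listing. A sketch that replays those collectors in order (assumed wrapper, command strings copied from the log above):

    // gather_logs.go - a sketch that runs the same host-side collectors seen in
    // these log lines and prints each section. Assumes bash and sudo access on
    // the node being inspected; not minikube's logs.go.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	collectors := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"CRI-O", "sudo journalctl -u crio -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, c := range collectors {
    		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
    		fmt.Printf("==> %s <==\n%s", c.name, out)
    		if err != nil {
    			fmt.Printf("(collector exited with error: %v)\n", err)
    		}
    	}
    }
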
	I0829 20:28:36.351066   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:38.849684   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.706813   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:37.708058   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:38.003178   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:40.493065   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:39.022668   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:39.035897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:39.035971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:39.071155   67607 cri.go:89] found id: ""
	I0829 20:28:39.071185   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.071196   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:39.071203   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:39.071258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:39.104135   67607 cri.go:89] found id: ""
	I0829 20:28:39.104177   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.104188   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:39.104206   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:39.104266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:39.138301   67607 cri.go:89] found id: ""
	I0829 20:28:39.138329   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.138339   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:39.138346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:39.138404   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:39.172674   67607 cri.go:89] found id: ""
	I0829 20:28:39.172700   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.172708   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:39.172719   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:39.172779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:39.209810   67607 cri.go:89] found id: ""
	I0829 20:28:39.209836   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.209845   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:39.209852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:39.209915   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:39.248692   67607 cri.go:89] found id: ""
	I0829 20:28:39.248715   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.248722   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:39.248728   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:39.248798   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:39.284303   67607 cri.go:89] found id: ""
	I0829 20:28:39.284333   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.284343   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:39.284351   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:39.284401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:39.321346   67607 cri.go:89] found id: ""
	I0829 20:28:39.321375   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.321386   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:39.321396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:39.321410   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:39.334678   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:39.334710   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:39.421992   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:39.422014   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:39.422027   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:39.503250   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:39.503280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:39.540623   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:39.540654   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.092131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:42.105440   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:42.105498   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:42.140994   67607 cri.go:89] found id: ""
	I0829 20:28:42.141024   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.141034   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:42.141042   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:42.141102   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:42.175182   67607 cri.go:89] found id: ""
	I0829 20:28:42.175217   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.175228   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:42.175248   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:42.175319   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:42.209251   67607 cri.go:89] found id: ""
	I0829 20:28:42.209281   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.209291   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:42.209299   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:42.209362   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:42.247944   67607 cri.go:89] found id: ""
	I0829 20:28:42.247970   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.247977   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:42.247983   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:42.248028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:42.285613   67607 cri.go:89] found id: ""
	I0829 20:28:42.285644   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.285651   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:42.285657   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:42.285722   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:42.319826   67607 cri.go:89] found id: ""
	I0829 20:28:42.319851   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.319858   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:42.319864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:42.319928   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:42.357150   67607 cri.go:89] found id: ""
	I0829 20:28:42.357173   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.357182   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:42.357189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:42.357243   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:42.392150   67607 cri.go:89] found id: ""
	I0829 20:28:42.392170   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.392178   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:42.392185   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:42.392197   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:42.469240   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:42.469271   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:42.469286   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:42.549165   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:42.549198   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:42.591900   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:42.591930   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.642593   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:42.642625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:40.851544   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:43.350420   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:39.708341   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:42.206888   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:44.207934   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:42.494791   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:44.992992   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:45.157092   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:45.170832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:45.170916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:45.207210   67607 cri.go:89] found id: ""
	I0829 20:28:45.207235   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.207244   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:45.207251   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:45.207308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:45.245321   67607 cri.go:89] found id: ""
	I0829 20:28:45.245352   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.245362   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:45.245379   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:45.245448   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:45.280326   67607 cri.go:89] found id: ""
	I0829 20:28:45.280369   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.280381   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:45.280389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:45.280451   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:45.318294   67607 cri.go:89] found id: ""
	I0829 20:28:45.318322   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.318333   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:45.318340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:45.318411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:45.352903   67607 cri.go:89] found id: ""
	I0829 20:28:45.352925   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.352932   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:45.352938   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:45.352990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:45.389251   67607 cri.go:89] found id: ""
	I0829 20:28:45.389273   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.389280   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:45.389286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:45.389340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:45.424348   67607 cri.go:89] found id: ""
	I0829 20:28:45.424385   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.424397   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:45.424404   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:45.424453   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:45.459058   67607 cri.go:89] found id: ""
	I0829 20:28:45.459087   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.459098   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:45.459109   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:45.459124   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:45.510386   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:45.510423   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:45.524896   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:45.524923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:45.593987   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:45.594064   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:45.594082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:45.668738   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:45.668771   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:48.206497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:48.219625   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:48.219696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:48.254936   67607 cri.go:89] found id: ""
	I0829 20:28:48.254959   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.254966   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:48.254971   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:48.255018   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:48.290826   67607 cri.go:89] found id: ""
	I0829 20:28:48.290851   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.290859   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:48.290864   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:48.290910   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:48.327508   67607 cri.go:89] found id: ""
	I0829 20:28:48.327533   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.327540   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:48.327546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:48.327593   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:48.364492   67607 cri.go:89] found id: ""
	I0829 20:28:48.364517   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.364525   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:48.364530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:48.364580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:48.400035   67607 cri.go:89] found id: ""
	I0829 20:28:48.400062   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.400072   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:48.400079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:48.400144   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:48.433999   67607 cri.go:89] found id: ""
	I0829 20:28:48.434026   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.434035   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:48.434043   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:48.434104   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:48.468841   67607 cri.go:89] found id: ""
	I0829 20:28:48.468873   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.468889   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:48.468903   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:48.468971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:48.506557   67607 cri.go:89] found id: ""
	I0829 20:28:48.506589   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.506598   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:48.506609   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:48.506624   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:48.577023   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:48.577044   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:48.577056   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:48.654372   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:48.654407   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:48.691125   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:48.691152   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:48.746383   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:48.746414   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:45.350581   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:47.351437   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:46.705575   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:48.707018   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:46.993532   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:48.994284   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.494177   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.260591   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:51.273911   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:51.273974   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:51.311517   67607 cri.go:89] found id: ""
	I0829 20:28:51.311545   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.311553   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:51.311567   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:51.311616   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:51.348220   67607 cri.go:89] found id: ""
	I0829 20:28:51.348247   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.348256   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:51.348264   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:51.348321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:51.383560   67607 cri.go:89] found id: ""
	I0829 20:28:51.383599   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.383611   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:51.383619   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:51.383680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:51.419241   67607 cri.go:89] found id: ""
	I0829 20:28:51.419268   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.419278   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:51.419286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:51.419343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:51.453954   67607 cri.go:89] found id: ""
	I0829 20:28:51.453979   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.453986   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:51.453992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:51.454047   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:51.489457   67607 cri.go:89] found id: ""
	I0829 20:28:51.489480   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.489488   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:51.489493   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:51.489544   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:51.524072   67607 cri.go:89] found id: ""
	I0829 20:28:51.524100   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.524107   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:51.524113   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:51.524160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:51.561238   67607 cri.go:89] found id: ""
	I0829 20:28:51.561263   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.561271   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:51.561279   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:51.561290   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:51.615422   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:51.615462   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:51.632180   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:51.632216   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:51.704335   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:51.704363   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:51.704378   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:51.794219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:51.794260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:49.852140   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:52.351142   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.205903   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:53.207651   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:53.495412   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:55.993489   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:54.342556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:54.356325   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:54.356400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:54.390928   67607 cri.go:89] found id: ""
	I0829 20:28:54.390952   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.390959   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:54.390965   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:54.391011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:54.426970   67607 cri.go:89] found id: ""
	I0829 20:28:54.427002   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.427013   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:54.427020   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:54.427074   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:54.464121   67607 cri.go:89] found id: ""
	I0829 20:28:54.464155   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.464166   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:54.464174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:54.464236   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:54.499790   67607 cri.go:89] found id: ""
	I0829 20:28:54.499816   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.499827   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:54.499840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:54.499889   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:54.537212   67607 cri.go:89] found id: ""
	I0829 20:28:54.537239   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.537249   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:54.537256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:54.537314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:54.575370   67607 cri.go:89] found id: ""
	I0829 20:28:54.575399   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.575410   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:54.575417   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:54.575469   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:54.608403   67607 cri.go:89] found id: ""
	I0829 20:28:54.608432   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.608443   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:54.608453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:54.608514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:54.645259   67607 cri.go:89] found id: ""
	I0829 20:28:54.645285   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.645292   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:54.645300   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:54.645311   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:54.697022   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:54.697063   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:54.712873   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:54.712914   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:54.814253   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:54.814278   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:54.814295   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:54.896473   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:54.896507   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:57.441648   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:57.455245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:57.455321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:57.495365   67607 cri.go:89] found id: ""
	I0829 20:28:57.495397   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.495405   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:57.495411   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:57.495472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:57.529555   67607 cri.go:89] found id: ""
	I0829 20:28:57.529582   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.529590   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:57.529597   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:57.529667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:57.564168   67607 cri.go:89] found id: ""
	I0829 20:28:57.564196   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.564208   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:57.564215   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:57.564277   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:57.602057   67607 cri.go:89] found id: ""
	I0829 20:28:57.602089   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.602100   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:57.602108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:57.602194   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:57.638195   67607 cri.go:89] found id: ""
	I0829 20:28:57.638226   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.638235   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:57.638244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:57.638307   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:57.674556   67607 cri.go:89] found id: ""
	I0829 20:28:57.674605   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.674615   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:57.674623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:57.674680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:57.709256   67607 cri.go:89] found id: ""
	I0829 20:28:57.709282   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.709291   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:57.709298   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:57.709358   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:57.743629   67607 cri.go:89] found id: ""
	I0829 20:28:57.743652   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.743659   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:57.743668   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:57.743679   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:57.789067   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:57.789098   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:57.843372   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:57.843403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:57.858630   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:57.858661   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:57.927776   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:57.927798   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:57.927814   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:54.850906   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:56.851300   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:55.208638   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:57.707756   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:57.994287   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.493343   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
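
The interleaved pod_ready lines belong to three separate test processes (PIDs 66841, 66989, 68084), each polling whether its metrics-server pod has reached the Ready condition. A minimal client-go sketch of such a readiness check; the kubeconfig path, namespace, and pod name are taken from the log, but the helper itself is an illustration, not the suite's actual pod_ready.go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
		"metrics-server-6867b74b74-668dg", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The report above keeps printing has status "Ready":"False".
	fmt.Println("Ready:", podReady(pod))
}
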
	I0829 20:29:00.508180   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:00.521451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:00.521529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:00.557912   67607 cri.go:89] found id: ""
	I0829 20:29:00.557938   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.557945   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:00.557951   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:00.557997   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:00.595186   67607 cri.go:89] found id: ""
	I0829 20:29:00.595215   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.595226   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:00.595237   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:00.595299   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:00.631553   67607 cri.go:89] found id: ""
	I0829 20:29:00.631581   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.631592   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:00.631600   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:00.631660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:00.666502   67607 cri.go:89] found id: ""
	I0829 20:29:00.666525   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.666551   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:00.666560   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:00.666621   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:00.700797   67607 cri.go:89] found id: ""
	I0829 20:29:00.700824   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.700835   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:00.700842   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:00.700908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:00.739957   67607 cri.go:89] found id: ""
	I0829 20:29:00.739976   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.739989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:00.739994   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:00.740035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:00.800704   67607 cri.go:89] found id: ""
	I0829 20:29:00.800740   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.800750   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:00.800757   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:00.800820   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:00.837678   67607 cri.go:89] found id: ""
	I0829 20:29:00.837704   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.837712   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:00.837720   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:00.837731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:00.888359   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:00.888391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:00.903074   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:00.903103   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:00.964865   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:00.964885   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:00.964898   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:01.049351   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:01.049387   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:03.589829   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:03.603120   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:03.603192   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:03.637647   67607 cri.go:89] found id: ""
	I0829 20:29:03.637672   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.637678   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:03.637684   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:03.637732   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:03.673807   67607 cri.go:89] found id: ""
	I0829 20:29:03.673842   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.673852   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:03.673860   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:03.673918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:03.709490   67607 cri.go:89] found id: ""
	I0829 20:29:03.709516   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.709527   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:03.709533   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:03.709595   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:03.751662   67607 cri.go:89] found id: ""
	I0829 20:29:03.751688   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.751696   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:03.751702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:03.751751   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:03.787861   67607 cri.go:89] found id: ""
	I0829 20:29:03.787896   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.787908   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:03.787917   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:03.787977   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:59.350888   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:01.850615   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:03.851438   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.207912   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:02.707309   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:02.493506   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:04.494305   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:03.824383   67607 cri.go:89] found id: ""
	I0829 20:29:03.824413   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.824431   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:03.824438   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:03.824499   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:03.863904   67607 cri.go:89] found id: ""
	I0829 20:29:03.863929   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.863937   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:03.863943   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:03.863990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:03.902336   67607 cri.go:89] found id: ""
	I0829 20:29:03.902360   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.902368   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:03.902375   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:03.902386   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:03.951468   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:03.951499   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:03.965789   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:03.965816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:04.035096   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:04.035119   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:04.035193   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:04.115842   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:04.115876   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:06.662652   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:06.676508   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:06.676583   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:06.713058   67607 cri.go:89] found id: ""
	I0829 20:29:06.713084   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.713093   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:06.713101   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:06.713171   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:06.747513   67607 cri.go:89] found id: ""
	I0829 20:29:06.747544   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.747552   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:06.747557   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:06.747617   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:06.782662   67607 cri.go:89] found id: ""
	I0829 20:29:06.782689   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.782695   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:06.782701   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:06.782758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:06.818472   67607 cri.go:89] found id: ""
	I0829 20:29:06.818500   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.818510   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:06.818516   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:06.818586   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:06.852928   67607 cri.go:89] found id: ""
	I0829 20:29:06.852954   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.852964   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:06.852974   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:06.853032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:06.893859   67607 cri.go:89] found id: ""
	I0829 20:29:06.893889   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.893899   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:06.893907   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:06.893969   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:06.931552   67607 cri.go:89] found id: ""
	I0829 20:29:06.931584   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.931594   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:06.931601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:06.931662   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:06.967210   67607 cri.go:89] found id: ""
	I0829 20:29:06.967243   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.967254   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:06.967266   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:06.967279   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:07.020595   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:07.020631   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:07.034738   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:07.034764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:07.103726   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:07.103747   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:07.103760   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:07.184727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:07.184764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:06.350610   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:08.351571   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:05.207055   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:07.207650   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:06.994653   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.493932   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.746639   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:09.761228   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:09.761308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:09.802071   67607 cri.go:89] found id: ""
	I0829 20:29:09.802102   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.802113   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:09.802122   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:09.802180   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:09.837352   67607 cri.go:89] found id: ""
	I0829 20:29:09.837385   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.837395   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:09.837402   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:09.837464   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:09.874951   67607 cri.go:89] found id: ""
	I0829 20:29:09.874980   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.874992   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:09.874999   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:09.875055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:09.909660   67607 cri.go:89] found id: ""
	I0829 20:29:09.909696   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.909706   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:09.909713   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:09.909777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:09.949727   67607 cri.go:89] found id: ""
	I0829 20:29:09.949751   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.949759   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:09.949765   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:09.949825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:09.984576   67607 cri.go:89] found id: ""
	I0829 20:29:09.984609   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.984617   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:09.984623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:09.984675   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:10.022499   67607 cri.go:89] found id: ""
	I0829 20:29:10.022523   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.022530   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:10.022553   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:10.022624   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:10.064308   67607 cri.go:89] found id: ""
	I0829 20:29:10.064346   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.064356   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:10.064367   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:10.064382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:10.113505   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:10.113537   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:10.127614   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:10.127640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:10.200558   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:10.200579   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:10.200592   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:10.292984   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:10.293020   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
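
Each "listing CRI containers" step shells out to crictl with a --name filter; crictl exits 0 with empty output when nothing matches, which is why the report shows found id: "" and "0 containers" warnings rather than an error. A minimal sketch of that listing, using only the command visible in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs crictl reports for containers whose name
// matches the filter; an empty slice mirrors the `found id: ""` lines above.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
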
	I0829 20:29:12.833100   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:12.846645   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:12.846712   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:12.885396   67607 cri.go:89] found id: ""
	I0829 20:29:12.885423   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.885430   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:12.885436   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:12.885486   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:12.922556   67607 cri.go:89] found id: ""
	I0829 20:29:12.922584   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.922595   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:12.922602   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:12.922688   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:12.965294   67607 cri.go:89] found id: ""
	I0829 20:29:12.965324   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.965335   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:12.965342   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:12.965401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:13.022911   67607 cri.go:89] found id: ""
	I0829 20:29:13.022934   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.022942   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:13.022948   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:13.023009   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:13.077009   67607 cri.go:89] found id: ""
	I0829 20:29:13.077035   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.077043   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:13.077048   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:13.077095   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:13.114202   67607 cri.go:89] found id: ""
	I0829 20:29:13.114233   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.114243   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:13.114251   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:13.114315   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:13.147025   67607 cri.go:89] found id: ""
	I0829 20:29:13.147049   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.147057   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:13.147063   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:13.147110   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:13.183112   67607 cri.go:89] found id: ""
	I0829 20:29:13.183138   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.183148   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:13.183159   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:13.183173   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:13.240558   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:13.240595   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:13.255563   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:13.255589   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:13.322826   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:13.322846   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:13.322857   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:13.399330   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:13.399365   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:10.850650   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:12.852188   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.706791   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:11.707397   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:13.708663   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:11.993311   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:13.994310   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:16.494854   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:15.938467   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:15.951742   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:15.951812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:15.987492   67607 cri.go:89] found id: ""
	I0829 20:29:15.987517   67607 logs.go:276] 0 containers: []
	W0829 20:29:15.987524   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:15.987530   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:15.987575   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:16.024187   67607 cri.go:89] found id: ""
	I0829 20:29:16.024214   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.024223   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:16.024231   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:16.024291   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:16.058141   67607 cri.go:89] found id: ""
	I0829 20:29:16.058164   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.058171   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:16.058176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:16.058225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:16.092390   67607 cri.go:89] found id: ""
	I0829 20:29:16.092414   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.092421   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:16.092427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:16.092472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:16.130178   67607 cri.go:89] found id: ""
	I0829 20:29:16.130209   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.130219   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:16.130227   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:16.130289   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:16.163867   67607 cri.go:89] found id: ""
	I0829 20:29:16.163900   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.163907   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:16.163913   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:16.163964   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:16.197764   67607 cri.go:89] found id: ""
	I0829 20:29:16.197792   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.197798   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:16.197804   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:16.197850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:16.233357   67607 cri.go:89] found id: ""
	I0829 20:29:16.233383   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.233393   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:16.233403   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:16.233418   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:16.285154   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:16.285188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:16.299057   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:16.299085   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:16.377021   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:16.377041   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:16.377062   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:16.457750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:16.457796   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:15.350415   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:17.850927   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:16.206841   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.207273   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.993478   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:21.493806   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.999133   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:19.016143   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:19.016223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:19.049225   67607 cri.go:89] found id: ""
	I0829 20:29:19.049252   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.049259   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:19.049265   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:19.049317   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:19.085237   67607 cri.go:89] found id: ""
	I0829 20:29:19.085297   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.085314   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:19.085325   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:19.085389   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:19.123476   67607 cri.go:89] found id: ""
	I0829 20:29:19.123501   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.123509   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:19.123514   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:19.123571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:19.159958   67607 cri.go:89] found id: ""
	I0829 20:29:19.159984   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.159993   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:19.160001   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:19.160055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:19.192385   67607 cri.go:89] found id: ""
	I0829 20:29:19.192410   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.192418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:19.192423   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:19.192483   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:19.230781   67607 cri.go:89] found id: ""
	I0829 20:29:19.230804   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.230811   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:19.230816   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:19.230868   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:19.264925   67607 cri.go:89] found id: ""
	I0829 20:29:19.264954   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.264964   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:19.264972   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:19.265032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:19.302461   67607 cri.go:89] found id: ""
	I0829 20:29:19.302484   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.302491   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:19.302499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:19.302510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:19.384799   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:19.384833   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:19.425281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:19.425313   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:19.477380   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:19.477412   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:19.492315   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:19.492350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:19.563428   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
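
Every "describe nodes" attempt fails identically: kubectl, pointed at localhost:8443 by the node's kubeconfig, gets connection refused, which is consistent with the empty kube-apiserver container listings above — nothing is serving on the apiserver port. A minimal sketch that reproduces the same symptom with a plain TCP probe (the probe is an illustration, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The node's kubeconfig points kubectl at localhost:8443; a plain TCP
	// dial reproduces the "connection refused" seen in the report.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
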
	I0829 20:29:22.064407   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:22.078609   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:22.078670   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:22.112630   67607 cri.go:89] found id: ""
	I0829 20:29:22.112662   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.112672   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:22.112680   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:22.112741   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:22.149078   67607 cri.go:89] found id: ""
	I0829 20:29:22.149108   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.149117   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:22.149124   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:22.149186   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:22.184568   67607 cri.go:89] found id: ""
	I0829 20:29:22.184596   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.184605   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:22.184613   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:22.184682   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:22.220881   67607 cri.go:89] found id: ""
	I0829 20:29:22.220908   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.220919   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:22.220926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:22.220987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:22.256280   67607 cri.go:89] found id: ""
	I0829 20:29:22.256305   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.256314   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:22.256321   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:22.256386   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:22.294546   67607 cri.go:89] found id: ""
	I0829 20:29:22.294580   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.294590   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:22.294597   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:22.294660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:22.332178   67607 cri.go:89] found id: ""
	I0829 20:29:22.332207   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.332215   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:22.332220   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:22.332266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:22.368283   67607 cri.go:89] found id: ""
	I0829 20:29:22.368309   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.368317   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:22.368325   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:22.368336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:22.421800   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:22.421836   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:22.435539   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:22.435565   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:22.504402   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:22.504427   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:22.504441   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:22.588293   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:22.588326   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:19.851801   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:22.351929   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:20.207342   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:22.707546   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:23.493994   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:25.993337   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:25.130766   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:25.144479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:25.144554   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:25.181606   67607 cri.go:89] found id: ""
	I0829 20:29:25.181636   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.181643   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:25.181649   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:25.181697   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:25.220291   67607 cri.go:89] found id: ""
	I0829 20:29:25.220320   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.220328   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:25.220335   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:25.220447   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:25.260947   67607 cri.go:89] found id: ""
	I0829 20:29:25.260975   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.260983   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:25.260988   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:25.261035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:25.298200   67607 cri.go:89] found id: ""
	I0829 20:29:25.298232   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.298243   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:25.298256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:25.298314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:25.333128   67607 cri.go:89] found id: ""
	I0829 20:29:25.333162   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.333174   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:25.333181   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:25.333232   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:25.368951   67607 cri.go:89] found id: ""
	I0829 20:29:25.368979   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.368989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:25.368997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:25.369052   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:25.403687   67607 cri.go:89] found id: ""
	I0829 20:29:25.403715   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.403726   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:25.403734   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:25.403799   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:25.442338   67607 cri.go:89] found id: ""
	I0829 20:29:25.442365   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.442372   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:25.442381   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:25.442395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:25.456313   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:25.456335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:25.528709   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:25.528730   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:25.528744   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:25.609976   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:25.610011   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:25.650044   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:25.650071   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
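
The pgrep lines recur roughly every three seconds: the tooling keeps polling for a kube-apiserver process until a deadline expires, re-gathering logs after each miss. A minimal sketch of that retry loop, with the interval and match pattern taken from the log and everything else assumed:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process appears or the
// deadline passes, mirroring the ~3s cadence of the pgrep lines above.
func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when it matches a running kube-apiserver.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true
		}
		time.Sleep(3 * time.Second)
	}
	return false
}

func main() {
	fmt.Println("apiserver up:", waitForAPIServer(2*time.Minute))
}
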
	I0829 20:29:28.202683   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:28.216971   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:28.217046   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:28.256297   67607 cri.go:89] found id: ""
	I0829 20:29:28.256321   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.256329   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:28.256335   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:28.256379   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:28.289396   67607 cri.go:89] found id: ""
	I0829 20:29:28.289420   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.289427   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:28.289433   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:28.289484   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:28.323589   67607 cri.go:89] found id: ""
	I0829 20:29:28.323616   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.323623   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:28.323630   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:28.323676   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:28.362423   67607 cri.go:89] found id: ""
	I0829 20:29:28.362453   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.362463   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:28.362471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:28.362531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:28.396967   67607 cri.go:89] found id: ""
	I0829 20:29:28.396990   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.396998   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:28.397003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:28.397053   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:28.430714   67607 cri.go:89] found id: ""
	I0829 20:29:28.430744   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.430755   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:28.430762   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:28.430831   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:28.468668   67607 cri.go:89] found id: ""
	I0829 20:29:28.468696   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.468707   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:28.468714   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:28.468777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:28.506678   67607 cri.go:89] found id: ""
	I0829 20:29:28.506705   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.506716   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:28.506727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:28.506741   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:28.545259   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:28.545287   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:28.598249   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:28.598285   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:28.612385   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:28.612429   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:28.685765   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:28.685792   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:28.685806   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:24.851688   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.350456   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:24.708523   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.206094   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:29.207859   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.995492   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:30.494340   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.270074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:31.284357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:31.284417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:31.319530   67607 cri.go:89] found id: ""
	I0829 20:29:31.319558   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.319566   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:31.319571   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:31.319640   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:31.356826   67607 cri.go:89] found id: ""
	I0829 20:29:31.356856   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.356867   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:31.356880   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:31.356934   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:31.390137   67607 cri.go:89] found id: ""
	I0829 20:29:31.390160   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.390167   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:31.390173   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:31.390219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:31.424939   67607 cri.go:89] found id: ""
	I0829 20:29:31.424972   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.424989   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:31.424997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:31.425054   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:31.460896   67607 cri.go:89] found id: ""
	I0829 20:29:31.460921   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.460928   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:31.460935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:31.460985   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:31.498933   67607 cri.go:89] found id: ""
	I0829 20:29:31.498957   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.498967   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:31.498975   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:31.499044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:31.534953   67607 cri.go:89] found id: ""
	I0829 20:29:31.534985   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.534996   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:31.535003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:31.535065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:31.576248   67607 cri.go:89] found id: ""
	I0829 20:29:31.576273   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.576281   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:31.576291   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:31.576307   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:31.628157   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:31.628196   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:31.641564   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:31.641591   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:31.719949   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:31.719973   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:31.719996   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:31.795682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:31.795716   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:29.351248   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.351424   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:33.851397   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.707552   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.207468   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:32.993432   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.993634   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.333468   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:34.347294   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:34.347370   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:34.384885   67607 cri.go:89] found id: ""
	I0829 20:29:34.384910   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.384921   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:34.384928   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:34.384991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:34.422309   67607 cri.go:89] found id: ""
	I0829 20:29:34.422341   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.422351   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:34.422358   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:34.422417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:34.459800   67607 cri.go:89] found id: ""
	I0829 20:29:34.459826   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.459834   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:34.459840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:34.459905   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:34.495600   67607 cri.go:89] found id: ""
	I0829 20:29:34.495624   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.495633   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:34.495647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:34.495708   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:34.531749   67607 cri.go:89] found id: ""
	I0829 20:29:34.531777   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.531788   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:34.531795   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:34.531856   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:34.571057   67607 cri.go:89] found id: ""
	I0829 20:29:34.571088   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.571098   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:34.571105   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:34.571168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:34.609645   67607 cri.go:89] found id: ""
	I0829 20:29:34.609676   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.609687   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:34.609695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:34.609753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:34.647199   67607 cri.go:89] found id: ""
	I0829 20:29:34.647233   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.647244   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:34.647255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:34.647269   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:34.661390   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:34.661420   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:34.737590   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:34.737613   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:34.737625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:34.820682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:34.820721   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:34.861697   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:34.861723   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:37.412384   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:37.426081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:37.426162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:37.461302   67607 cri.go:89] found id: ""
	I0829 20:29:37.461332   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.461342   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:37.461349   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:37.461416   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:37.500869   67607 cri.go:89] found id: ""
	I0829 20:29:37.500898   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.500908   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:37.500915   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:37.500970   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:37.536908   67607 cri.go:89] found id: ""
	I0829 20:29:37.536932   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.536942   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:37.536949   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:37.537010   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:37.571939   67607 cri.go:89] found id: ""
	I0829 20:29:37.571969   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.571979   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:37.571987   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:37.572048   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:37.607834   67607 cri.go:89] found id: ""
	I0829 20:29:37.607864   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.607883   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:37.607891   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:37.607952   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:37.643932   67607 cri.go:89] found id: ""
	I0829 20:29:37.643963   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.643971   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:37.643978   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:37.644037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:37.678148   67607 cri.go:89] found id: ""
	I0829 20:29:37.678177   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.678188   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:37.678195   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:37.678257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:37.713170   67607 cri.go:89] found id: ""
	I0829 20:29:37.713195   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.713209   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:37.713219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:37.713233   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:37.752538   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:37.752567   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:37.802888   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:37.802923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:37.816546   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:37.816585   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:37.891647   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:37.891667   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:37.891680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:35.851668   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:38.351371   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:36.208220   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:38.708523   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:36.994441   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:39.493291   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:40.472354   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:40.486186   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:40.486252   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:40.520935   67607 cri.go:89] found id: ""
	I0829 20:29:40.520963   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.520971   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:40.520977   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:40.521037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:40.561399   67607 cri.go:89] found id: ""
	I0829 20:29:40.561428   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.561440   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:40.561447   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:40.561514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:40.601821   67607 cri.go:89] found id: ""
	I0829 20:29:40.601846   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.601855   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:40.601862   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:40.601918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:40.636429   67607 cri.go:89] found id: ""
	I0829 20:29:40.636454   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.636462   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:40.636468   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:40.636525   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:40.670781   67607 cri.go:89] found id: ""
	I0829 20:29:40.670816   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.670828   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:40.670836   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:40.670912   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:40.706635   67607 cri.go:89] found id: ""
	I0829 20:29:40.706663   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.706674   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:40.706682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:40.706739   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:40.741657   67607 cri.go:89] found id: ""
	I0829 20:29:40.741687   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.741695   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:40.741707   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:40.741770   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:40.777028   67607 cri.go:89] found id: ""
	I0829 20:29:40.777057   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.777066   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:40.777077   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:40.777093   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:40.829387   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:40.829424   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:40.843928   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:40.843956   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:40.917965   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:40.917992   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:40.918008   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:41.001880   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:41.001925   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:43.549007   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:43.563446   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:43.563502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:43.598503   67607 cri.go:89] found id: ""
	I0829 20:29:43.598548   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.598557   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:43.598564   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:43.598614   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:43.634169   67607 cri.go:89] found id: ""
	I0829 20:29:43.634200   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.634210   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:43.634218   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:43.634280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:43.670467   67607 cri.go:89] found id: ""
	I0829 20:29:43.670492   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.670500   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:43.670506   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:43.670580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:43.706812   67607 cri.go:89] found id: ""
	I0829 20:29:43.706839   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.706849   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:43.706857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:43.706922   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:43.741577   67607 cri.go:89] found id: ""
	I0829 20:29:43.741606   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.741612   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:43.741620   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:43.741700   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:43.776552   67607 cri.go:89] found id: ""
	I0829 20:29:43.776595   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.776625   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:43.776635   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:43.776701   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:43.816229   67607 cri.go:89] found id: ""
	I0829 20:29:43.816264   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.816274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:43.816281   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:43.816346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:40.850705   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:42.850904   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:40.709080   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:43.207700   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:41.994216   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:44.492986   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:46.494171   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:43.860726   67607 cri.go:89] found id: ""
	I0829 20:29:43.860753   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.860761   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:43.860768   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:43.860783   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:43.874311   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:43.874340   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:43.952243   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:43.952272   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:43.952288   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:44.032276   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:44.032312   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:44.075537   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:44.075571   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:46.632798   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:46.645878   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:46.645948   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:46.683682   67607 cri.go:89] found id: ""
	I0829 20:29:46.683711   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.683720   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:46.683726   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:46.683775   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:46.727985   67607 cri.go:89] found id: ""
	I0829 20:29:46.728012   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.728024   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:46.728031   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:46.728090   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:46.762142   67607 cri.go:89] found id: ""
	I0829 20:29:46.762166   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.762174   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:46.762180   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:46.762226   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:46.802423   67607 cri.go:89] found id: ""
	I0829 20:29:46.802453   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.802464   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:46.802471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:46.802515   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:46.840382   67607 cri.go:89] found id: ""
	I0829 20:29:46.840411   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.840418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:46.840425   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:46.840473   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:46.878438   67607 cri.go:89] found id: ""
	I0829 20:29:46.878466   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.878476   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:46.878483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:46.878562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:46.913589   67607 cri.go:89] found id: ""
	I0829 20:29:46.913618   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.913625   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:46.913631   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:46.913678   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:46.948894   67607 cri.go:89] found id: ""
	I0829 20:29:46.948922   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.948929   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:46.948938   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:46.948949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:47.005709   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:47.005745   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:47.030316   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:47.030343   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:47.105899   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:47.105920   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:47.105932   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:47.189405   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:47.189442   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:45.352639   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:47.850647   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:45.709140   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:48.207411   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:48.994239   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:51.493287   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:49.727745   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:49.742061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:49.742131   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:49.777428   67607 cri.go:89] found id: ""
	I0829 20:29:49.777456   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.777464   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:49.777471   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:49.777531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:49.811611   67607 cri.go:89] found id: ""
	I0829 20:29:49.811639   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.811646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:49.811653   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:49.811709   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:49.844962   67607 cri.go:89] found id: ""
	I0829 20:29:49.844987   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.844995   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:49.845006   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:49.845062   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:49.880259   67607 cri.go:89] found id: ""
	I0829 20:29:49.880286   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.880297   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:49.880305   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:49.880366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:49.915889   67607 cri.go:89] found id: ""
	I0829 20:29:49.915918   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.915926   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:49.915932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:49.915988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:49.953146   67607 cri.go:89] found id: ""
	I0829 20:29:49.953174   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.953182   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:49.953189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:49.953240   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:49.990689   67607 cri.go:89] found id: ""
	I0829 20:29:49.990721   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.990730   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:49.990738   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:49.990792   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:50.024775   67607 cri.go:89] found id: ""
	I0829 20:29:50.024806   67607 logs.go:276] 0 containers: []
	W0829 20:29:50.024817   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:50.024827   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:50.024842   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:50.079030   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:50.079064   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:50.093178   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:50.093205   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:50.171476   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:50.171499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:50.171512   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:50.252913   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:50.252946   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:52.799818   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:52.812857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:52.812930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:52.850736   67607 cri.go:89] found id: ""
	I0829 20:29:52.850761   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.850770   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:52.850777   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:52.850834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:52.888892   67607 cri.go:89] found id: ""
	I0829 20:29:52.888916   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.888923   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:52.888929   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:52.888975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:52.925390   67607 cri.go:89] found id: ""
	I0829 20:29:52.925418   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.925428   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:52.925435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:52.925501   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:52.960329   67607 cri.go:89] found id: ""
	I0829 20:29:52.960352   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.960360   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:52.960366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:52.960413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:52.994899   67607 cri.go:89] found id: ""
	I0829 20:29:52.994927   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.994935   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:52.994941   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:52.994995   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:53.033028   67607 cri.go:89] found id: ""
	I0829 20:29:53.033057   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.033068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:53.033076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:53.033136   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:53.068353   67607 cri.go:89] found id: ""
	I0829 20:29:53.068381   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.068389   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:53.068394   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:53.068441   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:53.104496   67607 cri.go:89] found id: ""
	I0829 20:29:53.104524   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.104534   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:53.104545   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:53.104560   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:53.175777   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:53.175810   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:53.175827   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:53.257362   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:53.257396   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:53.295822   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:53.295850   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:53.351237   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:53.351263   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:49.851324   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:52.350768   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:50.707986   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:53.206918   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:53.494828   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.994443   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.864680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:55.879324   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:55.879391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:55.914454   67607 cri.go:89] found id: ""
	I0829 20:29:55.914479   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.914490   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:55.914498   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:55.914592   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:55.953778   67607 cri.go:89] found id: ""
	I0829 20:29:55.953804   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.953814   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:55.953821   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:55.953883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:55.994659   67607 cri.go:89] found id: ""
	I0829 20:29:55.994681   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.994689   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:55.994697   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:55.994768   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:56.031262   67607 cri.go:89] found id: ""
	I0829 20:29:56.031288   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.031299   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:56.031306   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:56.031366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:56.063748   67607 cri.go:89] found id: ""
	I0829 20:29:56.063776   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.063785   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:56.063793   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:56.063883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:56.098024   67607 cri.go:89] found id: ""
	I0829 20:29:56.098060   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.098068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:56.098074   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:56.098127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:56.141340   67607 cri.go:89] found id: ""
	I0829 20:29:56.141364   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.141374   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:56.141381   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:56.141440   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:56.176668   67607 cri.go:89] found id: ""
	I0829 20:29:56.176696   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.176707   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:56.176717   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:56.176731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:56.216294   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:56.216322   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:56.269404   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:56.269440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:56.283134   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:56.283160   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:56.355005   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:56.355023   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:56.355035   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:54.851658   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:57.350247   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.207477   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:57.708007   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:58.493689   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:00.998990   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:58.937406   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:58.950924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:58.950981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:58.986748   67607 cri.go:89] found id: ""
	I0829 20:29:58.986778   67607 logs.go:276] 0 containers: []
	W0829 20:29:58.986788   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:58.986795   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:58.986861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:59.023737   67607 cri.go:89] found id: ""
	I0829 20:29:59.023763   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.023773   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:59.023780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:59.023840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:59.060245   67607 cri.go:89] found id: ""
	I0829 20:29:59.060274   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.060284   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:59.060291   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:59.060352   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:59.102467   67607 cri.go:89] found id: ""
	I0829 20:29:59.102493   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.102501   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:59.102507   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:59.102581   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:59.142601   67607 cri.go:89] found id: ""
	I0829 20:29:59.142625   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.142634   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:59.142647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:59.142717   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:59.186683   67607 cri.go:89] found id: ""
	I0829 20:29:59.186707   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.186715   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:59.186723   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:59.186783   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:59.232104   67607 cri.go:89] found id: ""
	I0829 20:29:59.232136   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.232154   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:59.232162   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:59.232227   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:59.276416   67607 cri.go:89] found id: ""
	I0829 20:29:59.276442   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.276452   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:59.276462   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:59.276479   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:59.341741   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:59.341779   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:59.357312   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:59.357336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:59.425653   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:59.425674   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:59.425689   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:59.505365   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:59.505403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:02.049195   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:02.064558   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:02.064641   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:02.102141   67607 cri.go:89] found id: ""
	I0829 20:30:02.102188   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.102209   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:02.102217   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:02.102282   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:02.138610   67607 cri.go:89] found id: ""
	I0829 20:30:02.138640   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.138650   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:02.138658   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:02.138724   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:02.175391   67607 cri.go:89] found id: ""
	I0829 20:30:02.175423   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.175435   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:02.175442   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:02.175505   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:02.212956   67607 cri.go:89] found id: ""
	I0829 20:30:02.212981   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.212991   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:02.212998   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:02.213059   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:02.254444   67607 cri.go:89] found id: ""
	I0829 20:30:02.254467   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.254475   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:02.254481   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:02.254568   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:02.293232   67607 cri.go:89] found id: ""
	I0829 20:30:02.293260   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.293270   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:02.293277   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:02.293348   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:02.328300   67607 cri.go:89] found id: ""
	I0829 20:30:02.328329   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.328339   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:02.328346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:02.328407   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:02.363467   67607 cri.go:89] found id: ""
	I0829 20:30:02.363495   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.363505   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:02.363514   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:02.363528   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:02.414357   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:02.414394   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:02.428229   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:02.428259   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:02.503640   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:02.503661   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:02.503674   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:02.584052   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:02.584087   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:59.352485   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:01.850334   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:59.717029   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:02.208354   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:03.494326   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:05.494833   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:05.124345   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:05.143530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:05.143594   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:05.195985   67607 cri.go:89] found id: ""
	I0829 20:30:05.196014   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.196024   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:05.196032   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:05.196092   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:05.254315   67607 cri.go:89] found id: ""
	I0829 20:30:05.254343   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.254354   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:05.254362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:05.254432   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:05.306756   67607 cri.go:89] found id: ""
	I0829 20:30:05.306781   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.306788   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:05.306794   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:05.306852   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:05.345200   67607 cri.go:89] found id: ""
	I0829 20:30:05.345225   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.345235   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:05.345242   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:05.345297   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:05.384038   67607 cri.go:89] found id: ""
	I0829 20:30:05.384064   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.384074   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:05.384081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:05.384140   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:05.420177   67607 cri.go:89] found id: ""
	I0829 20:30:05.420201   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.420208   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:05.420214   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:05.420260   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:05.453492   67607 cri.go:89] found id: ""
	I0829 20:30:05.453513   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.453521   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:05.453526   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:05.453573   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:05.491591   67607 cri.go:89] found id: ""
	I0829 20:30:05.491618   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.491628   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:05.491638   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:05.491701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:05.580458   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:05.580503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:05.620137   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:05.620169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:05.672137   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:05.672177   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:05.685946   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:05.685973   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:05.755176   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
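Every "describe nodes" attempt in this loop fails identically: nothing answers on localhost:8443, the apiserver endpoint the logged kubeconfig points at. A minimal Go sketch of that reachability check (illustrative only; the address is taken from the log above, everything else is assumed):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// localhost:8443 is the apiserver endpoint named in the logged kubeconfig error.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the failure mode in the log: connection refused.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is reachable")
}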
	I0829 20:30:08.256255   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:08.269099   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:08.269160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:08.302552   67607 cri.go:89] found id: ""
	I0829 20:30:08.302578   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.302585   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:08.302591   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:08.302639   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:08.340683   67607 cri.go:89] found id: ""
	I0829 20:30:08.340711   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.340718   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:08.340726   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:08.340778   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:08.387389   67607 cri.go:89] found id: ""
	I0829 20:30:08.387416   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.387424   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:08.387430   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:08.387477   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:08.421303   67607 cri.go:89] found id: ""
	I0829 20:30:08.421330   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.421340   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:08.421348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:08.421409   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:08.458648   67607 cri.go:89] found id: ""
	I0829 20:30:08.458677   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.458688   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:08.458695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:08.458758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:08.498748   67607 cri.go:89] found id: ""
	I0829 20:30:08.498776   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.498784   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:08.498790   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:08.498845   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:08.536859   67607 cri.go:89] found id: ""
	I0829 20:30:08.536889   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.536896   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:08.536902   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:08.536963   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:08.570685   67607 cri.go:89] found id: ""
	I0829 20:30:08.570713   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.570723   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:08.570734   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:08.570748   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:08.621904   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:08.621938   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:08.636367   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:08.636391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:08.703796   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:08.703824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:08.703838   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:08.785084   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:08.785120   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
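The cycle above repeats one probe per control-plane component: run sudo crictl ps -a --quiet --name=<component> over SSH and treat empty output as "no container found". A standalone Go sketch of that probe (a hypothetical reconstruction, not minikube's cri.go; it assumes crictl on the local PATH instead of an SSH session):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probe mirrors the logged command: sudo crictl ps -a --quiet --name=<name>.
// An empty ID list is how the log arrives at "0 containers".
func probe(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		if ids := probe(c); len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		} else {
			fmt.Printf("%s: %v\n", c, ids)
		}
	}
}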
	I0829 20:30:04.350230   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:06.849598   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:08.850961   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:04.708012   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:07.206604   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:09.207368   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:07.993015   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:09.994043   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:11.326633   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:11.339570   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:11.339637   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:11.374132   67607 cri.go:89] found id: ""
	I0829 20:30:11.374155   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.374163   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:11.374169   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:11.374234   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:11.409004   67607 cri.go:89] found id: ""
	I0829 20:30:11.409036   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.409047   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:11.409054   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:11.409119   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:11.444598   67607 cri.go:89] found id: ""
	I0829 20:30:11.444625   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.444635   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:11.444643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:11.444704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:11.481912   67607 cri.go:89] found id: ""
	I0829 20:30:11.481942   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.481953   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:11.481961   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:11.482025   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:11.516436   67607 cri.go:89] found id: ""
	I0829 20:30:11.516466   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.516477   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:11.516483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:11.516536   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:11.554762   67607 cri.go:89] found id: ""
	I0829 20:30:11.554787   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.554795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:11.554801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:11.554857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:11.588902   67607 cri.go:89] found id: ""
	I0829 20:30:11.588931   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.588942   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:11.588950   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:11.589011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:11.621346   67607 cri.go:89] found id: ""
	I0829 20:30:11.621368   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.621376   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:11.621383   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:11.621395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:11.659671   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:11.659703   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:11.711288   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:11.711315   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:11.725285   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:11.725310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:11.801713   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:11.801735   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:11.801750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:10.851075   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:13.349510   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:11.208203   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:13.706599   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:12.494548   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:14.993188   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:14.382313   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:14.395852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:14.395926   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:14.438735   67607 cri.go:89] found id: ""
	I0829 20:30:14.438762   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.438772   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:14.438778   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:14.438840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:14.477886   67607 cri.go:89] found id: ""
	I0829 20:30:14.477928   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.477937   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:14.477943   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:14.478000   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:14.517627   67607 cri.go:89] found id: ""
	I0829 20:30:14.517654   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.517664   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:14.517670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:14.517734   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:14.557247   67607 cri.go:89] found id: ""
	I0829 20:30:14.557272   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.557280   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:14.557286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:14.557345   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:14.591364   67607 cri.go:89] found id: ""
	I0829 20:30:14.591388   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.591398   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:14.591406   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:14.591468   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:14.627517   67607 cri.go:89] found id: ""
	I0829 20:30:14.627539   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.627546   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:14.627551   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:14.627604   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:14.662388   67607 cri.go:89] found id: ""
	I0829 20:30:14.662409   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.662419   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:14.662432   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:14.662488   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:14.695277   67607 cri.go:89] found id: ""
	I0829 20:30:14.695307   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.695316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:14.695324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:14.695335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:14.735824   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:14.735852   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:14.792607   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:14.792642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:14.808881   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:14.808910   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:14.879804   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:14.879824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:14.879837   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:17.459817   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:17.474813   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:17.474887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:17.509885   67607 cri.go:89] found id: ""
	I0829 20:30:17.509913   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.509923   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:17.509930   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:17.509987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:17.543931   67607 cri.go:89] found id: ""
	I0829 20:30:17.543959   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.543968   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:17.543973   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:17.544021   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:17.580944   67607 cri.go:89] found id: ""
	I0829 20:30:17.580972   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.580980   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:17.580986   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:17.581033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:17.620061   67607 cri.go:89] found id: ""
	I0829 20:30:17.620088   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.620097   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:17.620103   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:17.620148   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:17.658675   67607 cri.go:89] found id: ""
	I0829 20:30:17.658706   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.658717   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:17.658724   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:17.658788   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:17.694424   67607 cri.go:89] found id: ""
	I0829 20:30:17.694453   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.694462   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:17.694467   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:17.694571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:17.727425   67607 cri.go:89] found id: ""
	I0829 20:30:17.727450   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.727456   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:17.727462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:17.727510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:17.767915   67607 cri.go:89] found id: ""
	I0829 20:30:17.767946   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.767956   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:17.767965   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:17.767977   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:17.837556   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:17.837580   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:17.837593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:17.921601   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:17.921638   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:17.960999   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:17.961026   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:18.013654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:18.013691   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:15.351372   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:17.850896   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:16.206810   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:18.207702   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:16.993566   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:18.997786   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:21.493705   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
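The interleaved pod_ready.go lines track three other test profiles polling their metrics-server pods, none of which ever reports Ready. A hedged client-go sketch of such a readiness check (an illustrative reconstruction, not minikube's pod_ready.go; the kubeconfig path is a placeholder, the pod name is copied from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; each minikube profile would supply its own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-5kk6q", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
			break
		}
	}
	// The log's `has status "Ready":"False"` corresponds to ready == false here.
	fmt.Printf("pod %q Ready: %v\n", pod.Name, ready)
}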
	I0829 20:30:20.528244   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:20.542116   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:20.542190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:20.578905   67607 cri.go:89] found id: ""
	I0829 20:30:20.578936   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.578947   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:20.578954   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:20.579003   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:20.613543   67607 cri.go:89] found id: ""
	I0829 20:30:20.613567   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.613574   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:20.613579   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:20.613627   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:20.649322   67607 cri.go:89] found id: ""
	I0829 20:30:20.649344   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.649352   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:20.649366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:20.649429   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:20.684851   67607 cri.go:89] found id: ""
	I0829 20:30:20.684878   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.684886   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:20.684892   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:20.684950   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:20.722016   67607 cri.go:89] found id: ""
	I0829 20:30:20.722045   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.722054   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:20.722062   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:20.722125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:20.757594   67607 cri.go:89] found id: ""
	I0829 20:30:20.757626   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.757637   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:20.757644   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:20.757707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:20.793694   67607 cri.go:89] found id: ""
	I0829 20:30:20.793728   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.793738   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:20.793746   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:20.793812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:20.829709   67607 cri.go:89] found id: ""
	I0829 20:30:20.829736   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.829747   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:20.829758   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:20.829782   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:20.888838   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:20.888888   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:20.903530   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:20.903556   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:20.972460   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:20.972488   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:20.972503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:21.055556   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:21.055593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:23.597355   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:23.611091   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:23.611162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:23.649469   67607 cri.go:89] found id: ""
	I0829 20:30:23.649493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.649501   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:23.649510   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:23.649562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:23.684530   67607 cri.go:89] found id: ""
	I0829 20:30:23.684554   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.684561   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:23.684571   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:23.684625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:23.720466   67607 cri.go:89] found id: ""
	I0829 20:30:23.720493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.720503   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:23.720510   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:23.720563   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:23.755013   67607 cri.go:89] found id: ""
	I0829 20:30:23.755042   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.755053   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:23.755061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:23.755127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:23.795212   67607 cri.go:89] found id: ""
	I0829 20:30:23.795243   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.795254   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:23.795263   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:23.795320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:20.349781   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:22.350157   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:20.707723   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.206214   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.994457   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:26.493771   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.832912   67607 cri.go:89] found id: ""
	I0829 20:30:23.832941   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.832951   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:23.832959   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:23.833015   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:23.869896   67607 cri.go:89] found id: ""
	I0829 20:30:23.869930   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.869939   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:23.869947   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:23.870011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:23.908111   67607 cri.go:89] found id: ""
	I0829 20:30:23.908136   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.908145   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:23.908155   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:23.908170   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:23.988489   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:23.988510   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:23.988525   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:24.063246   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:24.063280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:24.102943   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:24.102974   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:24.157255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:24.157294   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:26.671966   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:26.684755   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:26.684830   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:26.721125   67607 cri.go:89] found id: ""
	I0829 20:30:26.721150   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.721158   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:26.721164   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:26.721219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:26.756328   67607 cri.go:89] found id: ""
	I0829 20:30:26.756349   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.756356   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:26.756362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:26.756420   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:26.791711   67607 cri.go:89] found id: ""
	I0829 20:30:26.791751   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.791763   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:26.791774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:26.791857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:26.827215   67607 cri.go:89] found id: ""
	I0829 20:30:26.827244   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.827254   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:26.827261   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:26.827321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:26.863461   67607 cri.go:89] found id: ""
	I0829 20:30:26.863486   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.863497   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:26.863505   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:26.863569   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:26.900037   67607 cri.go:89] found id: ""
	I0829 20:30:26.900065   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.900075   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:26.900083   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:26.900139   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:26.937236   67607 cri.go:89] found id: ""
	I0829 20:30:26.937263   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.937274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:26.937282   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:26.937340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:26.970281   67607 cri.go:89] found id: ""
	I0829 20:30:26.970312   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.970322   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:26.970332   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:26.970345   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:27.041485   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:27.041511   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:27.041526   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:27.120774   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:27.120807   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:27.159656   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:27.159685   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:27.213322   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:27.213356   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:24.350464   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:26.351419   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:28.850079   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:25.207838   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:27.708107   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:28.993552   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:31.494259   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:29.729066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:29.742044   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:29.742099   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:29.777426   67607 cri.go:89] found id: ""
	I0829 20:30:29.777454   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.777462   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:29.777468   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:29.777529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:29.814353   67607 cri.go:89] found id: ""
	I0829 20:30:29.814381   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.814392   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:29.814401   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:29.814462   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:29.853754   67607 cri.go:89] found id: ""
	I0829 20:30:29.853783   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.853793   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:29.853801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:29.853869   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:29.893966   67607 cri.go:89] found id: ""
	I0829 20:30:29.893991   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.893998   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:29.894003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:29.894057   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:29.929452   67607 cri.go:89] found id: ""
	I0829 20:30:29.929483   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.929492   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:29.929502   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:29.929561   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:29.965880   67607 cri.go:89] found id: ""
	I0829 20:30:29.965906   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.965916   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:29.965924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:29.965986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:30.002192   67607 cri.go:89] found id: ""
	I0829 20:30:30.002226   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.002237   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:30.002245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:30.002320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:30.037603   67607 cri.go:89] found id: ""
	I0829 20:30:30.037640   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.037651   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:30.037662   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:30.037677   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:30.094128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:30.094168   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:30.110667   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:30.110701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:30.188355   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:30.188375   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:30.188388   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:30.270750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:30.270785   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:32.809472   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:32.823099   67607 kubeadm.go:597] duration metric: took 4m3.15684598s to restartPrimaryControlPlane
	W0829 20:30:32.823188   67607 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:30:32.823224   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:30:33.322987   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:30:33.338134   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:30:33.348586   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:30:33.358672   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:30:33.358692   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:30:33.358748   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:30:33.367955   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:30:33.368000   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:30:33.377565   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:30:33.386317   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:30:33.386377   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:30:33.396356   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.406228   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:30:33.406281   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.418323   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:30:33.427595   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:30:33.427657   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
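The grep-then-rm sequence above is the stale kubeconfig cleanup step: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is removed otherwise so the subsequent kubeadm init can regenerate it. A minimal Go sketch of that pattern, using only the endpoint and paths visible in the log (the real minikube code runs these commands over SSH, so this is an illustration, not the actual implementation):

    // Sketch only: mirrors the grep/rm cleanup pattern in the log above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func cleanupStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent or the file
            // is missing (as in the log above), so the file is removed.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q not found in %s - removing\n", endpoint, f)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }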
	I0829 20:30:33.437520   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:30:33.511159   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:30:33.511279   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:30:33.669988   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:30:33.670133   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:30:33.670267   67607 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:30:33.859908   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:30:30.850893   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:32.851574   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:30.207012   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:32.206405   66989 pod_ready.go:82] duration metric: took 4m0.005864609s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	E0829 20:30:32.206426   66989 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0829 20:30:32.206433   66989 pod_ready.go:39] duration metric: took 4m5.570928284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:30:32.206448   66989 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:30:32.206482   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:32.206528   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:32.260213   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:32.260242   66989 cri.go:89] found id: ""
	I0829 20:30:32.260252   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:32.260314   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.265201   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:32.265276   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:32.307620   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:32.307648   66989 cri.go:89] found id: ""
	I0829 20:30:32.307656   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:32.307701   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.312372   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:32.312430   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:32.350059   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:32.350092   66989 cri.go:89] found id: ""
	I0829 20:30:32.350102   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:32.350158   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.354624   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:32.354681   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:32.393968   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:32.393988   66989 cri.go:89] found id: ""
	I0829 20:30:32.393995   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:32.394039   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.398674   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:32.398745   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:32.433038   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:32.433064   66989 cri.go:89] found id: ""
	I0829 20:30:32.433074   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:32.433118   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.436969   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:32.437028   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:32.472768   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:32.472786   66989 cri.go:89] found id: ""
	I0829 20:30:32.472793   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:32.472842   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.477466   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:32.477536   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:32.514464   66989 cri.go:89] found id: ""
	I0829 20:30:32.514492   66989 logs.go:276] 0 containers: []
	W0829 20:30:32.514502   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:32.514509   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:32.514591   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:32.551429   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:32.551452   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:32.551456   66989 cri.go:89] found id: ""
	I0829 20:30:32.551463   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:32.551508   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.555697   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.559864   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:32.559883   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:32.609776   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:32.609803   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:32.648419   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:32.648446   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:32.685938   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:32.685969   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:32.728665   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:32.728693   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:32.770030   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:32.770068   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:32.907821   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:32.907850   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:32.923119   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:32.923149   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:32.979819   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:32.979853   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:33.020472   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:33.020496   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:33.074802   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:33.074838   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:33.112043   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:33.112072   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:33.624274   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:33.624316   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:33.861742   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:30:33.861849   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:30:33.861946   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:30:33.862075   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:30:33.862174   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:30:33.862276   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:30:33.862366   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:30:33.862467   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:30:33.862573   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:30:33.862794   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:30:33.863226   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:30:33.863323   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:30:33.863417   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:30:34.065914   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:30:34.235581   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:30:34.660452   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:30:34.724718   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:30:34.743897   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:30:34.746263   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:30:34.746369   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:30:34.893824   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:30:33.494825   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:35.994300   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:34.895805   67607 out.go:235]   - Booting up control plane ...
	I0829 20:30:34.895941   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:30:34.904294   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:30:34.915103   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:30:34.915744   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:30:34.917923   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:30:35.351975   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:37.352013   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:36.202184   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:36.218838   66989 api_server.go:72] duration metric: took 4m17.334186395s to wait for apiserver process to appear ...
	I0829 20:30:36.218870   66989 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:30:36.218910   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:36.218963   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:36.263205   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:36.263233   66989 cri.go:89] found id: ""
	I0829 20:30:36.263243   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:36.263292   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.267466   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:36.267522   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:36.303894   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:36.303930   66989 cri.go:89] found id: ""
	I0829 20:30:36.303938   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:36.303996   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.308089   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:36.308170   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:36.347320   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:36.347392   66989 cri.go:89] found id: ""
	I0829 20:30:36.347414   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:36.347485   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.352121   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:36.352174   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:36.389760   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:36.389784   66989 cri.go:89] found id: ""
	I0829 20:30:36.389793   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:36.389853   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.394860   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:36.394919   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:36.430562   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:36.430587   66989 cri.go:89] found id: ""
	I0829 20:30:36.430597   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:36.430655   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.435151   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:36.435226   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:36.470714   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:36.470742   66989 cri.go:89] found id: ""
	I0829 20:30:36.470750   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:36.470816   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.475382   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:36.475446   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:36.514853   66989 cri.go:89] found id: ""
	I0829 20:30:36.514888   66989 logs.go:276] 0 containers: []
	W0829 20:30:36.514898   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:36.514910   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:36.514971   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:36.548229   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:36.548252   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:36.548256   66989 cri.go:89] found id: ""
	I0829 20:30:36.548263   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:36.548314   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.552484   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.556661   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:36.556681   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:36.622985   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:36.623019   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:36.678770   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:36.678799   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:36.731822   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:36.731849   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:36.768451   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:36.768482   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:36.803818   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:36.803846   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:37.225805   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:37.225849   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:37.245421   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:37.245458   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:37.358238   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:37.358266   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:37.401876   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:37.401913   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:37.438189   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:37.438223   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:37.475404   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:37.475433   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:37.511876   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:37.511903   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:38.493604   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:40.494396   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:40.054097   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:30:40.058474   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0829 20:30:40.059830   66989 api_server.go:141] control plane version: v1.31.0
	I0829 20:30:40.059850   66989 api_server.go:131] duration metric: took 3.840972907s to wait for apiserver health ...
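The healthz wait recorded above polls the apiserver's /healthz endpoint until it returns 200 "ok" (about 3.8s here). A minimal sketch of such a poll loop, with the URL taken from the log; skipping TLS verification is an assumption made for brevity, since a real client would trust the cluster CA instead:

    // Sketch only: poll an apiserver /healthz endpoint until it reports ok.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // Assumption for this sketch: skip certificate verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200: %s\n", url, body)
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.202:8443/healthz", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }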
	I0829 20:30:40.059857   66989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:30:40.059877   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:40.059924   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:40.101978   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:40.102003   66989 cri.go:89] found id: ""
	I0829 20:30:40.102013   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:40.102073   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.107429   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:40.107496   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:40.145052   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:40.145078   66989 cri.go:89] found id: ""
	I0829 20:30:40.145086   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:40.145133   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.149329   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:40.149394   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:40.187740   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:40.187769   66989 cri.go:89] found id: ""
	I0829 20:30:40.187778   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:40.187838   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.192085   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:40.192156   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:40.231992   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:40.232010   66989 cri.go:89] found id: ""
	I0829 20:30:40.232017   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:40.232060   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.236275   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:40.236333   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:40.279637   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:40.279660   66989 cri.go:89] found id: ""
	I0829 20:30:40.279669   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:40.279727   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.288800   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:40.288876   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:40.341222   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:40.341248   66989 cri.go:89] found id: ""
	I0829 20:30:40.341258   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:40.341322   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.346013   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:40.346088   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:40.383801   66989 cri.go:89] found id: ""
	I0829 20:30:40.383828   66989 logs.go:276] 0 containers: []
	W0829 20:30:40.383836   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:40.383842   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:40.383896   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:40.421847   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:40.421874   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:40.421879   66989 cri.go:89] found id: ""
	I0829 20:30:40.421889   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:40.421950   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.426229   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.429902   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:40.429931   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:40.471015   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:40.471039   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:40.831575   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:40.831612   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:40.846195   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:40.846230   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:40.905469   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:40.905507   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:40.952303   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:40.952337   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:41.001278   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:41.001309   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:41.071045   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:41.071089   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:41.120024   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:41.120050   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:41.191412   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:41.191445   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:41.321848   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:41.321874   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:41.370807   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:41.370833   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:41.405913   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:41.405939   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:43.948957   66989 system_pods.go:59] 8 kube-system pods found
	I0829 20:30:43.948987   66989 system_pods.go:61] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running
	I0829 20:30:43.948992   66989 system_pods.go:61] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running
	I0829 20:30:43.948996   66989 system_pods.go:61] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running
	I0829 20:30:43.948999   66989 system_pods.go:61] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running
	I0829 20:30:43.949003   66989 system_pods.go:61] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running
	I0829 20:30:43.949006   66989 system_pods.go:61] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running
	I0829 20:30:43.949011   66989 system_pods.go:61] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:30:43.949015   66989 system_pods.go:61] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running
	I0829 20:30:43.949022   66989 system_pods.go:74] duration metric: took 3.889159839s to wait for pod list to return data ...
	I0829 20:30:43.949028   66989 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:30:43.951906   66989 default_sa.go:45] found service account: "default"
	I0829 20:30:43.951932   66989 default_sa.go:55] duration metric: took 2.897769ms for default service account to be created ...
	I0829 20:30:43.951943   66989 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:30:43.959246   66989 system_pods.go:86] 8 kube-system pods found
	I0829 20:30:43.959269   66989 system_pods.go:89] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running
	I0829 20:30:43.959275   66989 system_pods.go:89] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running
	I0829 20:30:43.959279   66989 system_pods.go:89] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running
	I0829 20:30:43.959283   66989 system_pods.go:89] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running
	I0829 20:30:43.959286   66989 system_pods.go:89] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running
	I0829 20:30:43.959290   66989 system_pods.go:89] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running
	I0829 20:30:43.959296   66989 system_pods.go:89] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:30:43.959302   66989 system_pods.go:89] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running
	I0829 20:30:43.959309   66989 system_pods.go:126] duration metric: took 7.361244ms to wait for k8s-apps to be running ...
	I0829 20:30:43.959318   66989 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:30:43.959356   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:30:43.976136   66989 system_svc.go:56] duration metric: took 16.811475ms WaitForService to wait for kubelet
	I0829 20:30:43.976167   66989 kubeadm.go:582] duration metric: took 4m25.091518378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:30:43.976193   66989 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:30:43.979345   66989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:30:43.979376   66989 node_conditions.go:123] node cpu capacity is 2
	I0829 20:30:43.979386   66989 node_conditions.go:105] duration metric: took 3.187489ms to run NodePressure ...
	I0829 20:30:43.979396   66989 start.go:241] waiting for startup goroutines ...
	I0829 20:30:43.979402   66989 start.go:246] waiting for cluster config update ...
	I0829 20:30:43.979414   66989 start.go:255] writing updated cluster config ...
	I0829 20:30:43.979729   66989 ssh_runner.go:195] Run: rm -f paused
	I0829 20:30:44.028715   66989 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:30:44.030675   66989 out.go:177] * Done! kubectl is now configured to use "embed-certs-388383" cluster and "default" namespace by default
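The recurring pod_ready.go lines threaded through this log are a readiness poll: a pod's Ready condition is checked on an interval until it becomes True or a 4m0s deadline expires (the "context deadline exceeded" outcome seen above for the metrics-server pods). A minimal client-go sketch of that check; the namespace mirrors the log, while the kubeconfig path, label selector, and 2-second interval are assumptions of this sketch:

    // Sketch only: wait for a pod's Ready condition, as pod_ready.go does.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumed kubeconfig path for this sketch.
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait in the log
        for time.Now().Before(deadline) {
            pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"}) // assumed label
            if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
                fmt.Println("metrics-server is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }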
	I0829 20:30:39.850811   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:41.850941   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:42.993711   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:45.492729   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:44.351171   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:46.849842   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:48.851125   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:47.494031   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:49.993291   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:51.350926   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:53.850966   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:52.494604   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:54.994054   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:56.350237   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:58.856068   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:56.994483   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:59.494879   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:01.351293   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:03.850415   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:01.994470   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:04.493393   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:05.851663   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:08.350513   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:06.988349   68084 pod_ready.go:82] duration metric: took 4m0.000994859s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" ...
	E0829 20:31:06.988378   68084 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 20:31:06.988396   68084 pod_ready.go:39] duration metric: took 4m13.5587561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:06.988421   68084 kubeadm.go:597] duration metric: took 4m20.63419422s to restartPrimaryControlPlane
	W0829 20:31:06.988470   68084 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:31:06.988492   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:31:10.350782   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:12.851120   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:14.919490   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:31:14.920124   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:14.920395   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:15.350794   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:17.351675   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:19.920740   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:19.920993   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:19.858714   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:22.351208   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:24.851679   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:27.351087   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.177614   68084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.189095849s)
	I0829 20:31:33.177712   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:31:33.202840   68084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:31:33.220648   68084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:31:33.239458   68084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:31:33.239479   68084 kubeadm.go:157] found existing configuration files:
	
	I0829 20:31:33.239519   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 20:31:33.257831   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:31:33.257900   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:31:33.272621   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 20:31:33.287906   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:31:33.287975   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:31:33.302931   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 20:31:33.312359   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:31:33.312411   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:31:33.322850   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 20:31:33.332224   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:31:33.332280   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:31:33.342072   68084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:31:33.388790   68084 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 20:31:33.388844   68084 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:31:33.506108   68084 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:31:33.506263   68084 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:31:33.506403   68084 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:31:33.515467   68084 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:31:29.921355   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:29.921591   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:29.351212   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:31.351683   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.850337   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.517487   68084 out.go:235]   - Generating certificates and keys ...
	I0829 20:31:33.517590   68084 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:31:33.517697   68084 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:31:33.517809   68084 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:31:33.517907   68084 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:31:33.518009   68084 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:31:33.518086   68084 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:31:33.518174   68084 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:31:33.518266   68084 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:31:33.518379   68084 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:31:33.518495   68084 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:31:33.518567   68084 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:31:33.518656   68084 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:31:33.888310   68084 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:31:34.000803   68084 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 20:31:34.103016   68084 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:31:34.461677   68084 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:31:34.617814   68084 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:31:34.618316   68084 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:31:34.622440   68084 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:31:34.624324   68084 out.go:235]   - Booting up control plane ...
	I0829 20:31:34.624428   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:31:34.624527   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:31:34.624882   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:31:34.647388   68084 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:31:34.653776   68084 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:31:34.653864   68084 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:31:34.795338   68084 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 20:31:34.795463   68084 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 20:31:35.797126   68084 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001854627s
	I0829 20:31:35.797253   68084 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 20:31:35.852495   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:37.344608   66841 pod_ready.go:82] duration metric: took 4m0.000461851s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" ...
	E0829 20:31:37.344637   66841 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 20:31:37.344661   66841 pod_ready.go:39] duration metric: took 4m13.033970527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:37.344693   66841 kubeadm.go:597] duration metric: took 4m20.095743839s to restartPrimaryControlPlane
	W0829 20:31:37.344752   66841 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:31:37.344780   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:31:40.799092   68084 kubeadm.go:310] [api-check] The API server is healthy after 5.002121632s
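
The [kubelet-check] and [api-check] phases above are plain HTTP health probes: kubeadm polls the kubelet's healthz endpoint on 127.0.0.1:10248 and then the API server's /healthz until a 200 arrives or the 4m0s budget expires. A stdlib-only sketch of that polling pattern; the URL and timeout come from the log, while the retry interval is an assumption:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or the deadline passes,
// the same shape as kubeadm's [kubelet-check] and [api-check] phases.
func waitHealthz(url string, timeout, interval time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// URL and the 4m0s budget are from the log; the 1s interval is assumed.
	if err := waitHealthz("http://127.0.0.1:10248/healthz", 4*time.Minute, time.Second); err != nil {
		fmt.Println(err)
	}
}
```

minikube reuses the same shape later against https://192.168.72.140:8444/healthz (api_server.go:253), just with the cluster's TLS configuration in the client.
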
	I0829 20:31:40.813865   68084 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 20:31:40.829677   68084 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 20:31:40.870324   68084 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 20:31:40.870598   68084 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-145096 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 20:31:40.889024   68084 kubeadm.go:310] [bootstrap-token] Using token: gy9sl5.6oyya9sd2gbep67e
	I0829 20:31:40.890947   68084 out.go:235]   - Configuring RBAC rules ...
	I0829 20:31:40.891083   68084 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 20:31:40.898748   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 20:31:40.912914   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 20:31:40.916739   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 20:31:40.923995   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 20:31:40.930447   68084 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 20:31:41.206632   68084 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 20:31:41.679673   68084 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 20:31:42.206707   68084 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 20:31:42.206733   68084 kubeadm.go:310] 
	I0829 20:31:42.206819   68084 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 20:31:42.206830   68084 kubeadm.go:310] 
	I0829 20:31:42.206974   68084 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 20:31:42.206996   68084 kubeadm.go:310] 
	I0829 20:31:42.207018   68084 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 20:31:42.207073   68084 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 20:31:42.207120   68084 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 20:31:42.207127   68084 kubeadm.go:310] 
	I0829 20:31:42.207189   68084 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 20:31:42.207196   68084 kubeadm.go:310] 
	I0829 20:31:42.207234   68084 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 20:31:42.207238   68084 kubeadm.go:310] 
	I0829 20:31:42.207285   68084 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 20:31:42.207382   68084 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 20:31:42.207473   68084 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 20:31:42.207484   68084 kubeadm.go:310] 
	I0829 20:31:42.207611   68084 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 20:31:42.207727   68084 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 20:31:42.207736   68084 kubeadm.go:310] 
	I0829 20:31:42.207854   68084 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token gy9sl5.6oyya9sd2gbep67e \
	I0829 20:31:42.207962   68084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 20:31:42.207983   68084 kubeadm.go:310] 	--control-plane 
	I0829 20:31:42.207986   68084 kubeadm.go:310] 
	I0829 20:31:42.208087   68084 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 20:31:42.208106   68084 kubeadm.go:310] 
	I0829 20:31:42.208214   68084 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token gy9sl5.6oyya9sd2gbep67e \
	I0829 20:31:42.208342   68084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
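
The --discovery-token-ca-cert-hash in the join commands above is not arbitrary: kubeadm defines it as "sha256:" plus the SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A sketch that recomputes it; the CA path is kubeadm's default location, an assumption since the log doesn't show it:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash recomputes kubeadm's --discovery-token-ca-cert-hash: a SHA-256
// digest over the CA certificate's Subject Public Key Info (SPKI).
func caCertHash(caPath string) (string, error) {
	pemBytes, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h) // should match the hash printed in the join command
}
```
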
	I0829 20:31:42.209248   68084 kubeadm.go:310] W0829 20:31:33.349141    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:31:42.209595   68084 kubeadm.go:310] W0829 20:31:33.349919    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:31:42.209769   68084 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:31:42.209803   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:31:42.209817   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:31:42.211545   68084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:31:42.212889   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:31:42.223984   68084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
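
Configuring the bridge CNI amounts to writing one conflist into /etc/cni/net.d, which is exactly what the mkdir and the 496-byte scp above do. A sketch of that write with a representative bridge conflist; the JSON is illustrative of the shape of such a file, not minikube's exact template, and the 10.244.0.0/16 subnet is an assumption:

```go
package main

import (
	"log"
	"os"
)

// An illustrative bridge CNI conflist: a bridge plugin with host-local IPAM
// plus portmap for hostPort support. Field values are assumptions.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Mirrors `sudo mkdir -p /etc/cni/net.d` plus the scp step in the log.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```
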
	I0829 20:31:42.242703   68084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:31:42.242779   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-145096 minikube.k8s.io/updated_at=2024_08_29T20_31_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=default-k8s-diff-port-145096 minikube.k8s.io/primary=true
	I0829 20:31:42.242779   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:42.448824   68084 ops.go:34] apiserver oom_adj: -16
	I0829 20:31:42.453004   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:42.953891   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:43.453922   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:43.953465   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:44.453647   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:44.954035   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:45.453660   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:45.953536   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:46.046900   68084 kubeadm.go:1113] duration metric: took 3.804195127s to wait for elevateKubeSystemPrivileges
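
The burst of `kubectl get sa default` calls every ~500ms above is elevateKubeSystemPrivileges waiting for the controller-manager to create the default service account: the minikube-rbac clusterrolebinding granted earlier is useless until kube-system:default actually exists. A sketch of the retry loop, simplified to a local kubectl instead of minikube's ssh_runner, with the overall timeout as an assumption:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitDefaultSA retries `kubectl get sa default` until the service account
// exists, matching the ~500ms cadence of the calls in the log.
func waitDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if cmd.Run() == nil {
			return nil // service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	// Timeout is an assumption; the run in the log succeeded after ~3.8s.
	if err := waitDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
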
	I0829 20:31:46.046927   68084 kubeadm.go:394] duration metric: took 4m59.74590678s to StartCluster
	I0829 20:31:46.046947   68084 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:31:46.047046   68084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:31:46.048617   68084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:31:46.048876   68084 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:31:46.048979   68084 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:31:46.049063   68084 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049090   68084 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049090   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:31:46.049099   68084 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-145096"
	I0829 20:31:46.049136   68084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-145096"
	W0829 20:31:46.049143   68084 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:31:46.049174   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.049104   68084 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049264   68084 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-145096"
	W0829 20:31:46.049280   68084 addons.go:243] addon metrics-server should already be in state true
	I0829 20:31:46.049335   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.049569   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049574   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049595   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.049599   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.049698   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049722   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.050441   68084 out.go:177] * Verifying Kubernetes components...
	I0829 20:31:46.052039   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:31:46.065735   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39367
	I0829 20:31:46.065909   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32931
	I0829 20:31:46.066241   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.066344   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.066900   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.066918   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.067024   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.067045   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.067438   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.067481   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.067665   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.067902   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.067931   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.069157   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0829 20:31:46.070637   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.070757   68084 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-145096"
	W0829 20:31:46.070771   68084 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:31:46.070803   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.071118   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.071124   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.071132   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.071155   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.071510   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.072052   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.072095   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.085524   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39387
	I0829 20:31:46.085987   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.086553   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.086576   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.086966   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.087138   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.087202   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0829 20:31:46.087621   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.088358   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.088381   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.088708   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.088806   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.089193   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.089363   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.090878   68084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:31:46.091571   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I0829 20:31:46.092208   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.092291   68084 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:31:46.092316   68084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:31:46.092337   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.092660   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.092687   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.093044   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.093230   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.095184   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.096265   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.096792   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.096821   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.097088   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.097274   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.097433   68084 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:31:46.097448   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.097645   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.098681   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:31:46.098697   68084 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:31:46.098715   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.101604   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.101993   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.102014   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.102328   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.102529   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.102687   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.102847   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.108154   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I0829 20:31:46.108627   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.109111   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.109129   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.109446   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.109675   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.111174   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.111440   68084 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:31:46.111452   68084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:31:46.111469   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.114302   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.114805   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.114832   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.114921   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.115102   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.115256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.115400   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
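
The "Plugin server listening at address 127.0.0.1:PORT" and "Calling .GetVersion/.GetState/.GetSSHHostname" lines reflect libmachine's plugin architecture: each driver (here docker-machine-driver-kvm2) runs as a separate process serving RPC on an ephemeral localhost port, and every method call from minikube is a round-trip to that server. A toy net/rpc sketch of the pattern; the Driver type and GetState method are stand-ins, not libmachine's real interface:

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Driver stands in for a machine driver plugin served over localhost RPC.
type Driver struct{}

func (d *Driver) GetState(_ string, reply *string) error {
	*reply = "Running"
	return nil
}

func main() {
	if err := rpc.Register(new(Driver)); err != nil {
		log.Fatal(err)
	}
	// Port 0 lets the OS pick a free port, just as the plugin servers in
	// the log end up on arbitrary ports like 39367 or 32931.
	lis, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	go rpc.Accept(lis)

	// The client side corresponds to minikube's "Calling .GetState" lines.
	client, err := rpc.Dial("tcp", lis.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	var state string
	if err := client.Call("Driver.GetState", "", &state); err != nil {
		log.Fatal(err)
	}
	fmt.Println("driver state:", state)
}
```
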
	I0829 20:31:46.277748   68084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:31:46.297001   68084 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-145096" to be "Ready" ...
	I0829 20:31:46.317473   68084 node_ready.go:49] node "default-k8s-diff-port-145096" has status "Ready":"True"
	I0829 20:31:46.317498   68084 node_ready.go:38] duration metric: took 20.469679ms for node "default-k8s-diff-port-145096" to be "Ready" ...
	I0829 20:31:46.317509   68084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:46.332180   68084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
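
node_ready.go and pod_ready.go both poll object status conditions: a node counts as "Ready" when its NodeReady condition is True, and the per-pod waits use the analogous PodReady condition. A client-go sketch of the node-side check; the kubeconfig path is taken from the log, and the 2s poll interval is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True, the
// check behind `node ... has status "Ready":"True"` in the log.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19530-11185/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := nodeReady(cs, "default-k8s-diff-port-145096")
		if err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```
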
	I0829 20:31:46.393588   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:31:46.399404   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:31:46.399428   68084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:31:46.453014   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:31:46.460100   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:31:46.460126   68084 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:31:46.541980   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:31:46.542002   68084 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:31:46.607148   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:31:47.296344   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296370   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.296445   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296471   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.296678   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.296722   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.296744   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296764   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.298376   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298379   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298404   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.298412   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298420   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298436   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.298453   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.298464   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.298700   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298726   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298729   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.318720   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.318745   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.319031   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.319053   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.319069   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.870171   68084 pod_ready.go:93] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:47.870198   68084 pod_ready.go:82] duration metric: took 1.537994965s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:47.870208   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.057308   68084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.450120563s)
	I0829 20:31:48.057358   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:48.057371   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:48.057667   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:48.057722   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:48.057734   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:48.057747   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:48.057759   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:48.057989   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:48.058005   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:48.058021   68084 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-145096"
	I0829 20:31:48.059886   68084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0829 20:31:48.061124   68084 addons.go:510] duration metric: took 2.012141801s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
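
Each addon is enabled in the two phases visible above: the manifests are copied into /etc/kubernetes/addons, then applied in a single kubectl invocation with repeated -f flags so the APIService, Deployment, RBAC, and Service land together. A sketch of that apply step, simplified to a local kubectl call instead of minikube's sudo-over-ssh variant:

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests reproduces the single apply call from the log: all
// metrics-server manifests are passed as repeated -f flags in one
// invocation, so they are created (or updated) together.
func applyAddonManifests(kubeconfig string, manifests []string) error {
	args := []string{"--kubeconfig", kubeconfig, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyAddonManifests("/var/lib/minikube/kubeconfig", []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	})
	if err != nil {
		fmt.Println(err)
	}
}
```
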
	I0829 20:31:48.875874   68084 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:48.875897   68084 pod_ready.go:82] duration metric: took 1.005682325s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.875912   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.879828   68084 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:48.879846   68084 pod_ready.go:82] duration metric: took 3.928263ms for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.879863   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:50.886764   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:49.922318   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:49.922554   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:52.887708   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:55.387571   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:55.886194   68084 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:55.886217   68084 pod_ready.go:82] duration metric: took 7.006347256s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:55.886225   68084 pod_ready.go:39] duration metric: took 9.568704494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:55.886238   68084 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:31:55.886286   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:31:55.901604   68084 api_server.go:72] duration metric: took 9.852691692s to wait for apiserver process to appear ...
	I0829 20:31:55.901628   68084 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:31:55.901643   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:31:55.905564   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 200:
	ok
	I0829 20:31:55.906387   68084 api_server.go:141] control plane version: v1.31.0
	I0829 20:31:55.906406   68084 api_server.go:131] duration metric: took 4.772472ms to wait for apiserver health ...
	I0829 20:31:55.906413   68084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:31:55.911423   68084 system_pods.go:59] 9 kube-system pods found
	I0829 20:31:55.911444   68084 system_pods.go:61] "coredns-6f6b679f8f-l25kd" [86947930-0d47-407a-b876-b482596fbe8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.911451   68084 system_pods.go:61] "coredns-6f6b679f8f-lnm92" [a6caefe0-e883-4460-87de-25ee97191e1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.911458   68084 system_pods.go:61] "etcd-default-k8s-diff-port-145096" [caba3f17-6544-4fe0-8dd3-0dd95e8df8ce] Running
	I0829 20:31:55.911465   68084 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-145096" [9b1ca00a-613b-414f-81e9-601d53d43207] Running
	I0829 20:31:55.911470   68084 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-145096" [e7145779-85cf-458d-9870-6fda4853d29d] Running
	I0829 20:31:55.911479   68084 system_pods.go:61] "kube-proxy-ptswc" [96c01414-e8e8-4731-824b-11d636285fb3] Running
	I0829 20:31:55.911488   68084 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-145096" [0d2cc607-72ac-4417-8a7c-196bf3ec90d7] Running
	I0829 20:31:55.911495   68084 system_pods.go:61] "metrics-server-6867b74b74-6sdqg" [2c9efadb-89bb-4aa6-b0f0-ddcb3e931674] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:31:55.911503   68084 system_pods.go:61] "storage-provisioner" [81531989-d045-44fb-b1a1-0817af27c804] Running
	I0829 20:31:55.911512   68084 system_pods.go:74] duration metric: took 5.092824ms to wait for pod list to return data ...
	I0829 20:31:55.911523   68084 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:31:55.913794   68084 default_sa.go:45] found service account: "default"
	I0829 20:31:55.913820   68084 default_sa.go:55] duration metric: took 2.286925ms for default service account to be created ...
	I0829 20:31:55.913830   68084 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:31:55.919628   68084 system_pods.go:86] 9 kube-system pods found
	I0829 20:31:55.919666   68084 system_pods.go:89] "coredns-6f6b679f8f-l25kd" [86947930-0d47-407a-b876-b482596fbe8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.919677   68084 system_pods.go:89] "coredns-6f6b679f8f-lnm92" [a6caefe0-e883-4460-87de-25ee97191e1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.919686   68084 system_pods.go:89] "etcd-default-k8s-diff-port-145096" [caba3f17-6544-4fe0-8dd3-0dd95e8df8ce] Running
	I0829 20:31:55.919693   68084 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-145096" [9b1ca00a-613b-414f-81e9-601d53d43207] Running
	I0829 20:31:55.919699   68084 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-145096" [e7145779-85cf-458d-9870-6fda4853d29d] Running
	I0829 20:31:55.919704   68084 system_pods.go:89] "kube-proxy-ptswc" [96c01414-e8e8-4731-824b-11d636285fb3] Running
	I0829 20:31:55.919710   68084 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-145096" [0d2cc607-72ac-4417-8a7c-196bf3ec90d7] Running
	I0829 20:31:55.919718   68084 system_pods.go:89] "metrics-server-6867b74b74-6sdqg" [2c9efadb-89bb-4aa6-b0f0-ddcb3e931674] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:31:55.919725   68084 system_pods.go:89] "storage-provisioner" [81531989-d045-44fb-b1a1-0817af27c804] Running
	I0829 20:31:55.919734   68084 system_pods.go:126] duration metric: took 5.897752ms to wait for k8s-apps to be running ...
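
The system_pods checks above are a list-and-inspect over the kube-system namespace: wait for the expected pods to appear, then examine each one's phase and readiness (metrics-server is still Pending here, yet the wait completes, so a Pending addon pod evidently doesn't block it). A client-go sketch of the listing step; connection setup is the same as in the node-readiness sketch:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19530-11185/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List kube-system pods and print each phase, as system_pods.go does
	// before deciding whether the k8s-apps count as "running".
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%-55s %s\n", p.Name, p.Status.Phase)
	}
}
```
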
	I0829 20:31:55.919745   68084 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:31:55.919800   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:31:55.935429   68084 system_svc.go:56] duration metric: took 15.676316ms WaitForService to wait for kubelet
	I0829 20:31:55.935460   68084 kubeadm.go:582] duration metric: took 9.886551311s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:31:55.935483   68084 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:31:55.938444   68084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:31:55.938466   68084 node_conditions.go:123] node cpu capacity is 2
	I0829 20:31:55.938476   68084 node_conditions.go:105] duration metric: took 2.988434ms to run NodePressure ...
	I0829 20:31:55.938486   68084 start.go:241] waiting for startup goroutines ...
	I0829 20:31:55.938493   68084 start.go:246] waiting for cluster config update ...
	I0829 20:31:55.938503   68084 start.go:255] writing updated cluster config ...
	I0829 20:31:55.938834   68084 ssh_runner.go:195] Run: rm -f paused
	I0829 20:31:55.987879   68084 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:31:55.989766   68084 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-145096" cluster and "default" namespace by default
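
The closing "kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)" line is minikube's client/server version-skew check (start.go:600): kubectl is only supported within one minor version of the API server, so a non-zero skew would earn a warning. A sketch of the skew computation; minikube's real implementation likely uses a semver library, so this string-splitting version is an assumption:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two version strings, the figure reported as "(minor skew: N)".
func minorSkew(kubectlVer, clusterVer string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("malformed version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	a, err := minor(kubectlVer)
	if err != nil {
		return 0, err
	}
	b, err := minor(clusterVer)
	if err != nil {
		return 0, err
	}
	if a > b {
		return a - b, nil
	}
	return b - a, nil
}

func main() {
	skew, _ := minorSkew("1.31.0", "1.31.0")
	fmt.Printf("minor skew: %d\n", skew) // 0, as in the log
}
```
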
	I0829 20:32:03.506190   66841 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.161387814s)
	I0829 20:32:03.506268   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:03.530660   66841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:32:03.550784   66841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:32:03.565054   66841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:32:03.565085   66841 kubeadm.go:157] found existing configuration files:
	
	I0829 20:32:03.565131   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:32:03.586492   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:32:03.586577   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:32:03.605061   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:32:03.617990   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:32:03.618054   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:32:03.635587   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:32:03.645495   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:32:03.645559   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:32:03.655081   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:32:03.664640   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:32:03.664703   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:32:03.674097   66841 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:32:03.721087   66841 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 20:32:03.721155   66841 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:32:03.839829   66841 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:32:03.839985   66841 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:32:03.840079   66841 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:32:03.849047   66841 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:32:03.850883   66841 out.go:235]   - Generating certificates and keys ...
	I0829 20:32:03.850970   66841 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:32:03.851045   66841 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:32:03.851129   66841 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:32:03.851222   66841 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:32:03.851292   66841 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:32:03.851340   66841 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:32:03.851399   66841 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:32:03.851450   66841 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:32:03.851515   66841 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:32:03.851620   66841 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:32:03.851687   66841 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:32:03.851755   66841 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:32:03.968189   66841 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:32:04.253016   66841 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 20:32:04.341190   66841 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:32:04.491607   66841 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:32:04.616753   66841 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:32:04.617354   66841 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:32:04.619961   66841 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:32:04.621690   66841 out.go:235]   - Booting up control plane ...
	I0829 20:32:04.621799   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:32:04.621910   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:32:04.622021   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:32:04.643758   66841 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:32:04.650541   66841 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:32:04.650612   66841 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:32:04.786596   66841 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 20:32:04.786755   66841 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 20:32:05.788381   66841 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001614523s
	I0829 20:32:05.788512   66841 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 20:32:10.789752   66841 kubeadm.go:310] [api-check] The API server is healthy after 5.001571241s
	I0829 20:32:10.803237   66841 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 20:32:10.822640   66841 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 20:32:10.845744   66841 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 20:32:10.846050   66841 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-397724 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 20:32:10.856315   66841 kubeadm.go:310] [bootstrap-token] Using token: 3k2s43.7gy6mzkt91kkied7
	I0829 20:32:10.857834   66841 out.go:235]   - Configuring RBAC rules ...
	I0829 20:32:10.857947   66841 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 20:32:10.867339   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 20:32:10.876522   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 20:32:10.879786   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 20:32:10.885043   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 20:32:10.892077   66841 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 20:32:11.196796   66841 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 20:32:11.630072   66841 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 20:32:12.200197   66841 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 20:32:12.200232   66841 kubeadm.go:310] 
	I0829 20:32:12.200314   66841 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 20:32:12.200326   66841 kubeadm.go:310] 
	I0829 20:32:12.200406   66841 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 20:32:12.200416   66841 kubeadm.go:310] 
	I0829 20:32:12.200450   66841 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 20:32:12.200536   66841 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 20:32:12.200606   66841 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 20:32:12.200616   66841 kubeadm.go:310] 
	I0829 20:32:12.200687   66841 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 20:32:12.200700   66841 kubeadm.go:310] 
	I0829 20:32:12.200744   66841 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 20:32:12.200750   66841 kubeadm.go:310] 
	I0829 20:32:12.200793   66841 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 20:32:12.200861   66841 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 20:32:12.200918   66841 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 20:32:12.200924   66841 kubeadm.go:310] 
	I0829 20:32:12.201048   66841 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 20:32:12.201144   66841 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 20:32:12.201152   66841 kubeadm.go:310] 
	I0829 20:32:12.201255   66841 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3k2s43.7gy6mzkt91kkied7 \
	I0829 20:32:12.201373   66841 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 20:32:12.201400   66841 kubeadm.go:310] 	--control-plane 
	I0829 20:32:12.201411   66841 kubeadm.go:310] 
	I0829 20:32:12.201487   66841 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 20:32:12.201495   66841 kubeadm.go:310] 
	I0829 20:32:12.201574   66841 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3k2s43.7gy6mzkt91kkied7 \
	I0829 20:32:12.201710   66841 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
	I0829 20:32:12.202900   66841 kubeadm.go:310] W0829 20:32:03.691334    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:32:12.203223   66841 kubeadm.go:310] W0829 20:32:03.692151    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:32:12.203339   66841 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
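Note: kubeadm bootstrap tokens such as the one above expire after 24h by default. If the printed join command has gone stale, a fresh one can be generated on the control-plane node; a minimal sketch, assuming the kubeadm binary sits alongside kubectl under /var/lib/minikube/binaries as in the paths logged below:

# Sketch: reprint a valid join command on the control plane (creates a new token).
minikube ssh -p no-preload-397724 "sudo /var/lib/minikube/binaries/v1.31.0/kubeadm token create --print-join-command"
# List existing bootstrap tokens and their expiry.
minikube ssh -p no-preload-397724 "sudo /var/lib/minikube/binaries/v1.31.0/kubeadm token list"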
	I0829 20:32:12.203366   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:32:12.203381   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:32:12.205733   66841 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:32:12.206905   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:32:12.218121   66841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
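The 496-byte conflist copied above is minikube's bridge CNI config. A sketch of what such a bridge+portmap conflist typically resembles is below; the subnet and field values are illustrative, not the exact payload minikube wrote:

# Sketch only: approximate shape of /etc/cni/net.d/1-k8s.conflist (values illustrative).
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF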
	I0829 20:32:12.237885   66841 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:32:12.237989   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:12.238006   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-397724 minikube.k8s.io/updated_at=2024_08_29T20_32_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=no-preload-397724 minikube.k8s.io/primary=true
	I0829 20:32:12.282191   66841 ops.go:34] apiserver oom_adj: -16
	I0829 20:32:12.430006   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:12.930327   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:13.430210   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:13.930065   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:14.430163   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:14.930189   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:15.430677   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:15.930670   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:16.430943   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:16.549095   66841 kubeadm.go:1113] duration metric: took 4.311165714s to wait for elevateKubeSystemPrivileges
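The retry loop above waits for the default ServiceAccount to exist, after which the minikube-rbac binding grants cluster-admin to kube-system:default. A hedged way to confirm this from the host, assuming the kubectl context name matches the profile:

# Inspect the clusterrolebinding created by elevateKubeSystemPrivileges.
kubectl --context no-preload-397724 get clusterrolebinding minikube-rbac -o wide
# Confirm the bound service account now has full privileges.
kubectl --context no-preload-397724 auth can-i '*' '*' --as=system:serviceaccount:kube-system:default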
	I0829 20:32:16.549136   66841 kubeadm.go:394] duration metric: took 4m59.355577107s to StartCluster
	I0829 20:32:16.549156   66841 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:32:16.549229   66841 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:32:16.550926   66841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:32:16.551141   66841 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:32:16.551202   66841 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:32:16.551291   66841 addons.go:69] Setting storage-provisioner=true in profile "no-preload-397724"
	I0829 20:32:16.551315   66841 addons.go:69] Setting default-storageclass=true in profile "no-preload-397724"
	I0829 20:32:16.551329   66841 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:32:16.551340   66841 addons.go:69] Setting metrics-server=true in profile "no-preload-397724"
	I0829 20:32:16.551389   66841 addons.go:234] Setting addon metrics-server=true in "no-preload-397724"
	W0829 20:32:16.551404   66841 addons.go:243] addon metrics-server should already be in state true
	I0829 20:32:16.551442   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.551360   66841 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-397724"
	I0829 20:32:16.551324   66841 addons.go:234] Setting addon storage-provisioner=true in "no-preload-397724"
	W0829 20:32:16.551673   66841 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:32:16.551705   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.551872   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.551873   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.551908   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.551929   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.552036   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.552065   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.552634   66841 out.go:177] * Verifying Kubernetes components...
	I0829 20:32:16.553973   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:32:16.567797   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43335
	I0829 20:32:16.568321   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.568884   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.568910   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.569328   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.569941   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.569978   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.573055   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0829 20:32:16.573642   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0829 20:32:16.573770   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.574303   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.574321   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.574394   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.574913   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.574933   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.574935   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.575471   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.575511   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.575724   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.575950   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.579912   66841 addons.go:234] Setting addon default-storageclass=true in "no-preload-397724"
	W0829 20:32:16.579932   66841 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:32:16.579960   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.580281   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.580298   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.591264   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
	I0829 20:32:16.591442   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0829 20:32:16.591777   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.591827   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.592275   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.592289   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.592289   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.592307   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.592702   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.592726   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.592881   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.592882   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.594494   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.594956   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.596431   66841 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:32:16.596433   66841 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:32:16.597503   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:32:16.597524   66841 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:32:16.597547   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.597607   66841 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:32:16.597625   66841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:32:16.597641   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.598780   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32841
	I0829 20:32:16.599272   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.599915   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.599937   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.601210   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.601613   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.601965   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.602159   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.602190   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.602328   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.602867   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.602998   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.603188   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.603234   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.603287   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.603434   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.603487   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.603691   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.603708   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.603857   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.603977   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.619336   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0829 20:32:16.619806   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.620269   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.620286   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.620604   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.620818   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.622348   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.622563   66841 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:32:16.622580   66841 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:32:16.622597   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.625203   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.625542   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.625570   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.625746   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.625934   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.626094   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.626266   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.787525   66841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:32:16.817674   66841 node_ready.go:35] waiting up to 6m0s for node "no-preload-397724" to be "Ready" ...
	I0829 20:32:16.833992   66841 node_ready.go:49] node "no-preload-397724" has status "Ready":"True"
	I0829 20:32:16.834030   66841 node_ready.go:38] duration metric: took 16.322874ms for node "no-preload-397724" to be "Ready" ...
	I0829 20:32:16.834042   66841 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:32:16.843147   66841 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:16.902589   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:32:16.902613   66841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:32:16.902859   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:32:16.903193   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:32:16.922497   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:32:16.922518   66841 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:32:16.966207   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:32:16.966240   66841 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:32:17.004882   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
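One way to confirm the applied manifests registered, assuming the addon uses the standard v1beta1.metrics.k8s.io APIService and a metrics-server Deployment (note this test profile deliberately points the image at fake.domain, so the pod is expected to stay unready):

# Sketch: check what the metrics-server manifests created.
kubectl --context no-preload-397724 get apiservice v1beta1.metrics.k8s.io
kubectl --context no-preload-397724 -n kube-system get deploy metrics-server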
	I0829 20:32:17.204576   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.204613   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.204968   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.204987   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.204995   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.204994   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.205002   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.205261   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.205278   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.211789   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.211811   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.212074   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.212089   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.212119   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.902866   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.902897   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.903218   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.903266   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.903278   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.903286   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.903296   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.903556   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.903572   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.344211   66841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.33928059s)
	I0829 20:32:18.344259   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:18.344274   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:18.344571   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:18.344589   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.344611   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:18.344626   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:18.344948   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:18.344980   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:18.345010   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.345025   66841 addons.go:475] Verifying addon metrics-server=true in "no-preload-397724"
	I0829 20:32:18.346919   66841 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 20:32:18.348704   66841 addons.go:510] duration metric: took 1.797503952s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0829 20:32:18.850832   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:18.850853   66841 pod_ready.go:82] duration metric: took 2.007683093s for pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:18.850862   66841 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.357679   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.357702   66841 pod_ready.go:82] duration metric: took 1.506832539s for pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.357710   66841 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.361830   66841 pod_ready.go:93] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.361854   66841 pod_ready.go:82] duration metric: took 4.136801ms for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.361865   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.365719   66841 pod_ready.go:93] pod "kube-apiserver-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.365733   66841 pod_ready.go:82] duration metric: took 3.861894ms for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.365741   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.369596   66841 pod_ready.go:93] pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.369611   66841 pod_ready.go:82] duration metric: took 3.864669ms for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.369619   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f4x4j" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.447788   66841 pod_ready.go:93] pod "kube-proxy-f4x4j" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.447812   66841 pod_ready.go:82] duration metric: took 78.187574ms for pod "kube-proxy-f4x4j" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.447823   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:22.049084   66841 pod_ready.go:93] pod "kube-scheduler-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:22.049105   66841 pod_ready.go:82] duration metric: took 1.601276793s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:22.049113   66841 pod_ready.go:39] duration metric: took 5.215058301s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
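The extra readiness wait above can be reproduced with kubectl against the same labels; a sketch, assuming the context name matches the profile:

# Wait for each system-critical component label, mirroring pod_ready.go.
for l in k8s-app=kube-dns component=etcd component=kube-apiserver \
         component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
  kubectl --context no-preload-397724 -n kube-system \
    wait --for=condition=Ready pod -l "$l" --timeout=6m0s
done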
	I0829 20:32:22.049125   66841 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:32:22.049172   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:32:22.066060   66841 api_server.go:72] duration metric: took 5.514888299s to wait for apiserver process to appear ...
	I0829 20:32:22.066086   66841 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:32:22.066109   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:32:22.072343   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I0829 20:32:22.073798   66841 api_server.go:141] control plane version: v1.31.0
	I0829 20:32:22.073821   66841 api_server.go:131] duration metric: took 7.728095ms to wait for apiserver health ...
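The same probes can be run from the host: /healthz and /version are readable anonymously under default RBAC (the system:public-info-viewer role), so skipping TLS verification is enough.

# Replicate the healthz and version checks against the apiserver.
curl -sk https://192.168.50.214:8443/healthz   # expected body: ok
curl -sk https://192.168.50.214:8443/version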
	I0829 20:32:22.073828   66841 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:32:22.252273   66841 system_pods.go:59] 9 kube-system pods found
	I0829 20:32:22.252302   66841 system_pods.go:61] "coredns-6f6b679f8f-crgtj" [c48571a8-18ae-4737-a05b-4a77736aee35] Running
	I0829 20:32:22.252309   66841 system_pods.go:61] "coredns-6f6b679f8f-dw2r7" [6edda799-e2d6-402b-b4cd-7e54b2b89ca5] Running
	I0829 20:32:22.252315   66841 system_pods.go:61] "etcd-no-preload-397724" [15473208-a76c-4bc5-810f-e78d59538493] Running
	I0829 20:32:22.252320   66841 system_pods.go:61] "kube-apiserver-no-preload-397724" [521c6041-888f-4145-aabb-54da7382953d] Running
	I0829 20:32:22.252325   66841 system_pods.go:61] "kube-controller-manager-no-preload-397724" [fd5afaf8-898d-4985-8efc-5628709a52cd] Running
	I0829 20:32:22.252329   66841 system_pods.go:61] "kube-proxy-f4x4j" [eb76dc5a-016a-416c-8880-f76fc2d2a9bb] Running
	I0829 20:32:22.252333   66841 system_pods.go:61] "kube-scheduler-no-preload-397724" [77d9e2de-ee8e-4cb2-a7f0-5d9b96bd9691] Running
	I0829 20:32:22.252342   66841 system_pods.go:61] "metrics-server-6867b74b74-nxdc5" [6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:32:22.252348   66841 system_pods.go:61] "storage-provisioner" [8b6c02d6-7a39-4fea-80b4-4ba02904232c] Running
	I0829 20:32:22.252358   66841 system_pods.go:74] duration metric: took 178.523887ms to wait for pod list to return data ...
	I0829 20:32:22.252370   66841 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:32:22.448475   66841 default_sa.go:45] found service account: "default"
	I0829 20:32:22.448499   66841 default_sa.go:55] duration metric: took 196.123693ms for default service account to be created ...
	I0829 20:32:22.448508   66841 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:32:22.650996   66841 system_pods.go:86] 9 kube-system pods found
	I0829 20:32:22.651023   66841 system_pods.go:89] "coredns-6f6b679f8f-crgtj" [c48571a8-18ae-4737-a05b-4a77736aee35] Running
	I0829 20:32:22.651029   66841 system_pods.go:89] "coredns-6f6b679f8f-dw2r7" [6edda799-e2d6-402b-b4cd-7e54b2b89ca5] Running
	I0829 20:32:22.651033   66841 system_pods.go:89] "etcd-no-preload-397724" [15473208-a76c-4bc5-810f-e78d59538493] Running
	I0829 20:32:22.651037   66841 system_pods.go:89] "kube-apiserver-no-preload-397724" [521c6041-888f-4145-aabb-54da7382953d] Running
	I0829 20:32:22.651042   66841 system_pods.go:89] "kube-controller-manager-no-preload-397724" [fd5afaf8-898d-4985-8efc-5628709a52cd] Running
	I0829 20:32:22.651045   66841 system_pods.go:89] "kube-proxy-f4x4j" [eb76dc5a-016a-416c-8880-f76fc2d2a9bb] Running
	I0829 20:32:22.651048   66841 system_pods.go:89] "kube-scheduler-no-preload-397724" [77d9e2de-ee8e-4cb2-a7f0-5d9b96bd9691] Running
	I0829 20:32:22.651054   66841 system_pods.go:89] "metrics-server-6867b74b74-nxdc5" [6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:32:22.651058   66841 system_pods.go:89] "storage-provisioner" [8b6c02d6-7a39-4fea-80b4-4ba02904232c] Running
	I0829 20:32:22.651065   66841 system_pods.go:126] duration metric: took 202.552304ms to wait for k8s-apps to be running ...
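The one unready pod is metrics-server, consistent with the fake.domain image the addon was pointed at earlier. A hedged way to see the pull failure, assuming the addon's usual k8s-app=metrics-server label:

# Show recent events for the stuck metrics-server pod.
kubectl --context no-preload-397724 -n kube-system describe pod -l k8s-app=metrics-server | tail -n 20
kubectl --context no-preload-397724 -n kube-system get events --field-selector reason=Failed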
	I0829 20:32:22.651071   66841 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:32:22.651111   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:22.666831   66841 system_svc.go:56] duration metric: took 15.753046ms WaitForService to wait for kubelet
	I0829 20:32:22.666863   66841 kubeadm.go:582] duration metric: took 6.115692499s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:32:22.666888   66841 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:32:22.848742   66841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:32:22.848766   66841 node_conditions.go:123] node cpu capacity is 2
	I0829 20:32:22.848777   66841 node_conditions.go:105] duration metric: took 181.884368ms to run NodePressure ...
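The capacity figures logged above are read from the node status and can be checked directly:

# Inspect node capacity and pressure conditions.
kubectl --context no-preload-397724 get node no-preload-397724 -o jsonpath='{.status.capacity}'; echo
kubectl --context no-preload-397724 describe node no-preload-397724 | grep -A8 '^Conditions:'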
	I0829 20:32:22.848787   66841 start.go:241] waiting for startup goroutines ...
	I0829 20:32:22.848794   66841 start.go:246] waiting for cluster config update ...
	I0829 20:32:22.848803   66841 start.go:255] writing updated cluster config ...
	I0829 20:32:22.849030   66841 ssh_runner.go:195] Run: rm -f paused
	I0829 20:32:22.897503   66841 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:32:22.899404   66841 out.go:177] * Done! kubectl is now configured to use "no-preload-397724" cluster and "default" namespace by default
	I0829 20:32:29.924469   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:32:29.924707   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:32:29.924729   67607 kubeadm.go:310] 
	I0829 20:32:29.924801   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:32:29.924855   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:32:29.924865   67607 kubeadm.go:310] 
	I0829 20:32:29.924912   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:32:29.924960   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:32:29.925080   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:32:29.925090   67607 kubeadm.go:310] 
	I0829 20:32:29.925207   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:32:29.925256   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:32:29.925316   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:32:29.925342   67607 kubeadm.go:310] 
	I0829 20:32:29.925493   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:32:29.925616   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 20:32:29.925627   67607 kubeadm.go:310] 
	I0829 20:32:29.925776   67607 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:32:29.925909   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:32:29.926016   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:32:29.926134   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:32:29.926154   67607 kubeadm.go:310] 
	I0829 20:32:29.926605   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:32:29.926723   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:32:29.926812   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
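kubeadm's suggested triage steps above, collected into one sketch runnable inside the node (for example via minikube ssh on this profile):

# Triage a kubelet that never answered /healthz, per the hints above.
sudo systemctl status kubelet --no-pager
sudo journalctl -xeu kubelet --no-pager | tail -n 100
sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause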
	W0829 20:32:29.926935   67607 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0829 20:32:29.926979   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:32:30.389951   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:30.408455   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:32:30.418493   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:32:30.418513   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:32:30.418582   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:32:30.427909   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:32:30.427957   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:32:30.437122   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:32:30.446157   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:32:30.446203   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:32:30.455480   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.464781   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:32:30.464834   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.474607   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:32:30.484537   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:32:30.484601   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
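The four grep-then-remove steps above amount to this loop, a sketch of the same stale-config cleanup:

# Drop kubeconfigs that do not point at control-plane.minikube.internal:8443.
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
    || sudo rm -f "/etc/kubernetes/$f"
done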
	I0829 20:32:30.494170   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:32:30.717349   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:34:26.784436   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:34:26.784518   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 20:34:26.786158   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:34:26.786196   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:34:26.786276   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:34:26.786353   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:34:26.786437   67607 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:34:26.786486   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:34:26.788271   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:34:26.788380   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:34:26.788453   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:34:26.788523   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:34:26.788593   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:34:26.788665   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:34:26.788714   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:34:26.788769   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:34:26.788826   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:34:26.788894   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:34:26.788961   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:34:26.788993   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:34:26.789044   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:34:26.789084   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:34:26.789143   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:34:26.789228   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:34:26.789312   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:34:26.789441   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:34:26.789577   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:34:26.789647   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:34:26.789717   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:34:26.791166   67607 out.go:235]   - Booting up control plane ...
	I0829 20:34:26.791239   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:34:26.791305   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:34:26.791382   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:34:26.791465   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:34:26.791597   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:34:26.791658   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:34:26.791736   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.791926   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792008   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792182   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792254   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792435   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792492   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792725   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792798   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.793026   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.793043   67607 kubeadm.go:310] 
	I0829 20:34:26.793091   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:34:26.793148   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:34:26.793159   67607 kubeadm.go:310] 
	I0829 20:34:26.793188   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:34:26.793219   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:34:26.793305   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:34:26.793314   67607 kubeadm.go:310] 
	I0829 20:34:26.793438   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:34:26.793483   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:34:26.793515   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:34:26.793522   67607 kubeadm.go:310] 
	I0829 20:34:26.793618   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:34:26.793735   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 20:34:26.793748   67607 kubeadm.go:310] 
	I0829 20:34:26.793895   67607 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:34:26.794020   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:34:26.794125   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:34:26.794227   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:34:26.794285   67607 kubeadm.go:310] 
	I0829 20:34:26.794300   67607 kubeadm.go:394] duration metric: took 7m57.183485424s to StartCluster
	I0829 20:34:26.794357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:34:26.794410   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:34:26.837033   67607 cri.go:89] found id: ""
	I0829 20:34:26.837072   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.837083   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:34:26.837091   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:34:26.837153   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:34:26.871177   67607 cri.go:89] found id: ""
	I0829 20:34:26.871203   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.871213   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:34:26.871220   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:34:26.871280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:34:26.905409   67607 cri.go:89] found id: ""
	I0829 20:34:26.905432   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.905442   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:34:26.905450   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:34:26.905509   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:34:26.940119   67607 cri.go:89] found id: ""
	I0829 20:34:26.940150   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.940161   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:34:26.940169   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:34:26.940217   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:34:26.974555   67607 cri.go:89] found id: ""
	I0829 20:34:26.974589   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.974601   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:34:26.974608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:34:26.974674   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:34:27.010586   67607 cri.go:89] found id: ""
	I0829 20:34:27.010616   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.010631   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:34:27.010639   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:34:27.010704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:34:27.044867   67607 cri.go:89] found id: ""
	I0829 20:34:27.044900   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.044913   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:34:27.044921   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:34:27.044979   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:34:27.079282   67607 cri.go:89] found id: ""
	I0829 20:34:27.079308   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.079316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:34:27.079323   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:34:27.079335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:34:27.093455   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:34:27.093485   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:34:27.179256   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:34:27.179280   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:34:27.179292   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:34:27.305873   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:34:27.305906   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:34:27.349676   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:34:27.349702   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 20:34:27.399787   67607 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 20:34:27.399851   67607 out.go:270] * 
	W0829 20:34:27.399907   67607 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output quoted above]
	
	W0829 20:34:27.399919   67607 out.go:270] * 
	W0829 20:34:27.400631   67607 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 20:34:27.403773   67607 out.go:201] 
	W0829 20:34:27.404902   67607 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output quoted above]
	
	W0829 20:34:27.404953   67607 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 20:34:27.404981   67607 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 20:34:27.406310   67607 out.go:201] 
	
	
	==> CRI-O <==
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.226110618Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963669226086393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a48bb10-f55b-4aba-84a8-4647ad23bd4e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.226671294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00d17435-f167-4211-8ab1-3f297e7723fe name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.226733353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00d17435-f167-4211-8ab1-3f297e7723fe name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.226764878Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=00d17435-f167-4211-8ab1-3f297e7723fe name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.258507413Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36d6f155-14b1-4b86-8869-ab032bc386df name=/runtime.v1.RuntimeService/Version
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.258597769Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36d6f155-14b1-4b86-8869-ab032bc386df name=/runtime.v1.RuntimeService/Version
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.259655920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec2b1ac0-3b34-4f4e-9d5c-4c8c21282f80 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.260114833Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963669260093646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec2b1ac0-3b34-4f4e-9d5c-4c8c21282f80 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.261194564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a80811c-c64d-4928-9fe6-5db0d14e74c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.261268690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a80811c-c64d-4928-9fe6-5db0d14e74c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.261301500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7a80811c-c64d-4928-9fe6-5db0d14e74c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.294229523Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0bc3c669-9537-4352-9ab0-1004b793b84e name=/runtime.v1.RuntimeService/Version
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.294300061Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0bc3c669-9537-4352-9ab0-1004b793b84e name=/runtime.v1.RuntimeService/Version
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.295195973Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed42f543-bd00-4607-acdf-461205273abc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.295545846Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963669295523896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed42f543-bd00-4607-acdf-461205273abc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.296060463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26bb6933-8081-4103-939b-7abb64b35194 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.296129970Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26bb6933-8081-4103-939b-7abb64b35194 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.296163614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=26bb6933-8081-4103-939b-7abb64b35194 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.329143530Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39239b78-44fc-4401-9aea-eb64a1cf3bbe name=/runtime.v1.RuntimeService/Version
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.329242093Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39239b78-44fc-4401-9aea-eb64a1cf3bbe name=/runtime.v1.RuntimeService/Version
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.330346311Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b5eebf1-a950-4c93-8e2f-1f68261f8a89 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.330705859Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963669330684689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b5eebf1-a950-4c93-8e2f-1f68261f8a89 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.331398603Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1623c8c-186b-4c47-9614-47f25d86596c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.331467433Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1623c8c-186b-4c47-9614-47f25d86596c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:34:29 old-k8s-version-032002 crio[630]: time="2024-08-29 20:34:29.331503645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d1623c8c-186b-4c47-9614-47f25d86596c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug29 20:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053894] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042317] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.920296] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.442854] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.576675] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.694150] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.062526] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052165] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.177300] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.162237] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.253464] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +6.389299] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.063933] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.901932] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[ +13.592201] kauditd_printk_skb: 46 callbacks suppressed
	[Aug29 20:30] systemd-fstab-generator[5044]: Ignoring "noauto" option for root device
	[Aug29 20:32] systemd-fstab-generator[5320]: Ignoring "noauto" option for root device
	[  +0.064706] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:34:29 up 8 min,  0 users,  load average: 0.14, 0.15, 0.09
	Linux old-k8s-version-032002 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 29 20:34:26 old-k8s-version-032002 kubelet[5498]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Aug 29 20:34:26 old-k8s-version-032002 kubelet[5498]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000039da0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000c23ec0, 0x24, 0x0, ...)
	Aug 29 20:34:26 old-k8s-version-032002 kubelet[5498]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Aug 29 20:34:26 old-k8s-version-032002 kubelet[5498]: net.(*Dialer).DialContext(0xc000124a20, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c23ec0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 29 20:34:26 old-k8s-version-032002 kubelet[5498]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Aug 29 20:34:26 old-k8s-version-032002 kubelet[5498]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000449a60, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c23ec0, 0x24, 0x60, 0x7f7c84354988, 0x118, ...)
	Aug 29 20:34:26 old-k8s-version-032002 kubelet[5498]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 29 20:34:26 old-k8s-version-032002 kubelet[5498]: net/http.(*Transport).dial(0xc0008d6000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c23ec0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 29 20:34:26 old-k8s-version-032002 kubelet[5498]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 29 20:34:26 old-k8s-version-032002 kubelet[5498]: net/http.(*Transport).dialConn(0xc0008d6000, 0x4f7fe00, 0xc000120018, 0x0, 0xc000cf8d80, 0x5, 0xc000c23ec0, 0x24, 0x0, 0xc000c21d40, ...)
	Aug 29 20:34:26 old-k8s-version-032002 kubelet[5498]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 29 20:34:26 old-k8s-version-032002 kubelet[5498]: net/http.(*Transport).dialConnFor(0xc0008d6000, 0xc0008fc840)
	Aug 29 20:34:26 old-k8s-version-032002 kubelet[5498]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 29 20:34:26 old-k8s-version-032002 kubelet[5498]: created by net/http.(*Transport).queueForDial
	Aug 29 20:34:26 old-k8s-version-032002 kubelet[5498]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 29 20:34:26 old-k8s-version-032002 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 29 20:34:26 old-k8s-version-032002 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 29 20:34:27 old-k8s-version-032002 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 29 20:34:27 old-k8s-version-032002 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 29 20:34:27 old-k8s-version-032002 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 29 20:34:27 old-k8s-version-032002 kubelet[5553]: I0829 20:34:27.254883    5553 server.go:416] Version: v1.20.0
	Aug 29 20:34:27 old-k8s-version-032002 kubelet[5553]: I0829 20:34:27.255182    5553 server.go:837] Client rotation is on, will bootstrap in background
	Aug 29 20:34:27 old-k8s-version-032002 kubelet[5553]: I0829 20:34:27.257288    5553 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 29 20:34:27 old-k8s-version-032002 kubelet[5553]: W0829 20:34:27.258984    5553 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 29 20:34:27 old-k8s-version-032002 kubelet[5553]: I0829 20:34:27.259196    5553 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-032002 -n old-k8s-version-032002
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-032002 -n old-k8s-version-032002: exit status 2 (224.175752ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-032002" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (712.06s)
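The SecondStart failure above reduces to the kubelet crash-looping under Kubernetes v1.20 on this guest: systemd restarts it twenty times ("restart counter is at 20"), cAdvisor logs "Cannot detect current cgroup on cgroup v2", and the apiserver on localhost:8443 never comes up, so every kubectl and crictl probe returns empty. A minimal diagnostic sequence, assembled only from the commands the log itself suggests (run on the minikube guest; illustrative, not part of the test harness):

	# Is the kubelet crash-looping, and why?
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100
	# Did CRI-O start any control-plane containers at all?
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# The suggestion printed in the log above: retry with the systemd cgroup driver
	minikube start -p old-k8s-version-032002 --extra-config=kubelet.cgroup-driver=systemd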

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096: exit status 3 (3.167933617s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 20:24:07.362862   67957 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.140:22: connect: no route to host
	E0829 20:24:07.362879   67957 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.140:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-145096 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-145096 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152063229s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.140:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-145096 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096: exit status 3 (3.063665096s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 20:24:16.578931   68038 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.140:22: connect: no route to host
	E0829 20:24:16.578950   68038 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.140:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-145096" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
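Note that this failure is connectivity, not the addon itself: every step dies with `dial tcp 192.168.72.140:22: connect: no route to host`, so the guest is unreachable over SSH after the stop, and both `status` and `addons enable` fail before ever touching Kubernetes. One way to confirm the guest state under the kvm2 driver these tests use (a sketch; `virsh` comes from libvirt and does not appear in the log, so treat it as an assumption):

	# Assumption: the kvm2 driver backs each profile with a libvirt domain
	virsh list --all
	# minikube's own view of the profile
	out/minikube-linux-amd64 status -p default-k8s-diff-port-145096
	# The address SSH could not reach
	ping -c 3 192.168.72.140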

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0829 20:31:37.943348   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-388383 -n embed-certs-388383
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-29 20:39:44.544137796 +0000 UTC m=+6248.714599043
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
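What the harness waits for here is any Ready pod labeled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, with a 9m0s deadline; the "client rate limiter Wait returned an error: context deadline exceeded" warning means the poll loop ran out of time, not that listing pods failed. Roughly the same check expressed with kubectl (illustrative; the context name assumes minikube's convention of naming kubeconfig contexts after the profile):

	kubectl --context embed-certs-388383 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-388383 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m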
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-388383 -n embed-certs-388383
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-388383 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-388383 logs -n 25: (2.003974722s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-397724                                   | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-388383            | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC | 29 Aug 24 20:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-695305             | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-695305                  | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-695305 --memory=2200 --alsologtostderr   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:20 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-695305 image list                           | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:21 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-032002        | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-397724                  | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-397724                                   | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-388383                 | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-145096  | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-032002             | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-145096       | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC | 29 Aug 24 20:31 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
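	For reference, each wrapped `start` entry in the table above is a single CLI invocation. Reconstructed from the flags exactly as listed (nothing added), the final start command — the one whose log follows below — reads:
	
	  minikube start -p default-k8s-diff-port-145096 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0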
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 20:24:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 20:24:16.618808   68084 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:24:16.619043   68084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:24:16.619051   68084 out.go:358] Setting ErrFile to fd 2...
	I0829 20:24:16.619055   68084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:24:16.619206   68084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:24:16.619741   68084 out.go:352] Setting JSON to false
	I0829 20:24:16.620649   68084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7604,"bootTime":1724955453,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 20:24:16.620702   68084 start.go:139] virtualization: kvm guest
	I0829 20:24:16.622891   68084 out.go:177] * [default-k8s-diff-port-145096] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 20:24:16.624228   68084 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 20:24:16.624256   68084 notify.go:220] Checking for updates...
	I0829 20:24:16.627123   68084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 20:24:16.628611   68084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:24:16.629858   68084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:24:16.631013   68084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 20:24:16.632116   68084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 20:24:16.633630   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:24:16.634042   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:24:16.634080   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:24:16.648879   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0829 20:24:16.649315   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:24:16.649875   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:24:16.649893   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:24:16.650274   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:24:16.650504   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:24:16.650776   68084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 20:24:16.651053   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:24:16.651111   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:24:16.665964   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0829 20:24:16.666402   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:24:16.666918   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:24:16.666937   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:24:16.667250   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:24:16.667435   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:24:16.698712   68084 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 20:24:16.700010   68084 start.go:297] selected driver: kvm2
	I0829 20:24:16.700023   68084 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:24:16.700131   68084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 20:24:16.700915   68084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:24:16.700998   68084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 20:24:16.715940   68084 install.go:137] /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0829 20:24:16.716321   68084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:24:16.716388   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:24:16.716405   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:24:16.716452   68084 start.go:340] cluster config:
	{Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:24:16.716563   68084 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:24:16.718175   68084 out.go:177] * Starting "default-k8s-diff-port-145096" primary control-plane node in "default-k8s-diff-port-145096" cluster
	I0829 20:24:16.258820   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:16.719204   68084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:24:16.719231   68084 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 20:24:16.719237   68084 cache.go:56] Caching tarball of preloaded images
	I0829 20:24:16.719296   68084 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 20:24:16.719305   68084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 20:24:16.719385   68084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/config.json ...
	I0829 20:24:16.719549   68084 start.go:360] acquireMachinesLock for default-k8s-diff-port-145096: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:24:22.338805   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:25.410778   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:31.490844   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:34.562885   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:40.642793   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:43.714939   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:49.794765   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:52.866858   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:58.946771   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:02.018832   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:08.098829   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:11.170833   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:17.250794   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:20.322926   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:26.402827   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:29.474844   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:35.554771   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:38.626850   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:41.630257   66989 start.go:364] duration metric: took 4m26.950412835s to acquireMachinesLock for "embed-certs-388383"
	I0829 20:25:41.630308   66989 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:25:41.630316   66989 fix.go:54] fixHost starting: 
	I0829 20:25:41.630791   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:25:41.630828   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:25:41.646005   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32873
	I0829 20:25:41.646405   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:25:41.646932   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:25:41.646959   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:25:41.647308   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:25:41.647525   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:25:41.647686   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:25:41.649457   66989 fix.go:112] recreateIfNeeded on embed-certs-388383: state=Stopped err=<nil>
	I0829 20:25:41.649491   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	W0829 20:25:41.649639   66989 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:25:41.651109   66989 out.go:177] * Restarting existing kvm2 VM for "embed-certs-388383" ...
	I0829 20:25:41.627651   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:25:41.627705   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:25:41.628067   66841 buildroot.go:166] provisioning hostname "no-preload-397724"
	I0829 20:25:41.628089   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:25:41.628259   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:25:41.630106   66841 machine.go:96] duration metric: took 4m35.46951337s to provisionDockerMachine
	I0829 20:25:41.630148   66841 fix.go:56] duration metric: took 4m35.494271139s for fixHost
	I0829 20:25:41.630159   66841 start.go:83] releasing machines lock for "no-preload-397724", held for 4m35.494325078s
	W0829 20:25:41.630182   66841 start.go:714] error starting host: provision: host is not running
	W0829 20:25:41.630284   66841 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0829 20:25:41.630295   66841 start.go:729] Will try again in 5 seconds ...
	I0829 20:25:41.652159   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Start
	I0829 20:25:41.652318   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring networks are active...
	I0829 20:25:41.653011   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring network default is active
	I0829 20:25:41.653426   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring network mk-embed-certs-388383 is active
	I0829 20:25:41.653824   66989 main.go:141] libmachine: (embed-certs-388383) Getting domain xml...
	I0829 20:25:41.654765   66989 main.go:141] libmachine: (embed-certs-388383) Creating domain...
	I0829 20:25:42.860512   66989 main.go:141] libmachine: (embed-certs-388383) Waiting to get IP...
	I0829 20:25:42.861297   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:42.861661   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:42.861739   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:42.861649   68412 retry.go:31] will retry after 207.172422ms: waiting for machine to come up
	I0829 20:25:43.070026   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.070414   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.070445   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.070368   68412 retry.go:31] will retry after 336.815982ms: waiting for machine to come up
	I0829 20:25:43.408817   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.409144   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.409182   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.409117   68412 retry.go:31] will retry after 330.159156ms: waiting for machine to come up
	I0829 20:25:43.740518   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.741039   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.741065   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.741002   68412 retry.go:31] will retry after 528.906592ms: waiting for machine to come up
	I0829 20:25:44.271695   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:44.272286   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:44.272344   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:44.272280   68412 retry.go:31] will retry after 616.92568ms: waiting for machine to come up
	I0829 20:25:46.631383   66841 start.go:360] acquireMachinesLock for no-preload-397724: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:25:44.891133   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:44.891535   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:44.891566   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:44.891499   68412 retry.go:31] will retry after 907.330558ms: waiting for machine to come up
	I0829 20:25:45.800480   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:45.800858   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:45.800885   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:45.800840   68412 retry.go:31] will retry after 1.189775318s: waiting for machine to come up
	I0829 20:25:46.992687   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:46.993155   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:46.993189   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:46.993142   68412 retry.go:31] will retry after 1.467244635s: waiting for machine to come up
	I0829 20:25:48.462770   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:48.463201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:48.463226   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:48.463173   68412 retry.go:31] will retry after 1.602764839s: waiting for machine to come up
	I0829 20:25:50.067082   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:50.067608   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:50.067638   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:50.067543   68412 retry.go:31] will retry after 1.562244323s: waiting for machine to come up
	I0829 20:25:51.632201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:51.632705   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:51.632731   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:51.632650   68412 retry.go:31] will retry after 1.747220365s: waiting for machine to come up
	I0829 20:25:53.382010   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:53.382463   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:53.382527   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:53.382454   68412 retry.go:31] will retry after 3.446054845s: waiting for machine to come up
	I0829 20:25:56.830511   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:56.830954   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:56.830988   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:56.830908   68412 retry.go:31] will retry after 4.53995219s: waiting for machine to come up
	I0829 20:26:02.603329   67607 start.go:364] duration metric: took 3m23.680319578s to acquireMachinesLock for "old-k8s-version-032002"
	I0829 20:26:02.603393   67607 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:02.603404   67607 fix.go:54] fixHost starting: 
	I0829 20:26:02.603837   67607 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:02.603884   67607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:02.621398   67607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35977
	I0829 20:26:02.621840   67607 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:02.622425   67607 main.go:141] libmachine: Using API Version  1
	I0829 20:26:02.622460   67607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:02.622810   67607 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:02.623040   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:02.623201   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetState
	I0829 20:26:02.624854   67607 fix.go:112] recreateIfNeeded on old-k8s-version-032002: state=Stopped err=<nil>
	I0829 20:26:02.624880   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	W0829 20:26:02.625020   67607 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:02.627161   67607 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-032002" ...
	I0829 20:26:02.628419   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .Start
	I0829 20:26:02.628578   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring networks are active...
	I0829 20:26:02.629339   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network default is active
	I0829 20:26:02.629732   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network mk-old-k8s-version-032002 is active
	I0829 20:26:02.630188   67607 main.go:141] libmachine: (old-k8s-version-032002) Getting domain xml...
	I0829 20:26:02.630924   67607 main.go:141] libmachine: (old-k8s-version-032002) Creating domain...
	I0829 20:26:01.375542   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.375928   66989 main.go:141] libmachine: (embed-certs-388383) Found IP for machine: 192.168.61.202
	I0829 20:26:01.375951   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has current primary IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.375974   66989 main.go:141] libmachine: (embed-certs-388383) Reserving static IP address...
	I0829 20:26:01.376364   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "embed-certs-388383", mac: "52:54:00:6c:5a:0c", ip: "192.168.61.202"} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.376398   66989 main.go:141] libmachine: (embed-certs-388383) DBG | skip adding static IP to network mk-embed-certs-388383 - found existing host DHCP lease matching {name: "embed-certs-388383", mac: "52:54:00:6c:5a:0c", ip: "192.168.61.202"}
	I0829 20:26:01.376411   66989 main.go:141] libmachine: (embed-certs-388383) Reserved static IP address: 192.168.61.202
	I0829 20:26:01.376428   66989 main.go:141] libmachine: (embed-certs-388383) Waiting for SSH to be available...
	I0829 20:26:01.376445   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Getting to WaitForSSH function...
	I0829 20:26:01.378600   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.378899   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.378937   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.379065   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Using SSH client type: external
	I0829 20:26:01.379088   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa (-rw-------)
	I0829 20:26:01.379118   66989 main.go:141] libmachine: (embed-certs-388383) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:01.379132   66989 main.go:141] libmachine: (embed-certs-388383) DBG | About to run SSH command:
	I0829 20:26:01.379141   66989 main.go:141] libmachine: (embed-certs-388383) DBG | exit 0
	I0829 20:26:01.498736   66989 main.go:141] libmachine: (embed-certs-388383) DBG | SSH cmd err, output: <nil>: 
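	Spelled out, the external SSH probe above is a single command, assembled verbatim from the argument vector and key path in the log (the remote command is the `exit 0` shown):
	
	  /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa -p 22 "exit 0"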
	I0829 20:26:01.499103   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetConfigRaw
	I0829 20:26:01.499700   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:01.502022   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.502332   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.502362   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.502586   66989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/config.json ...
	I0829 20:26:01.502778   66989 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:01.502795   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:01.502980   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.505156   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.505452   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.505473   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.505590   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.505739   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.505902   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.506038   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.506183   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.506366   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.506376   66989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:01.602691   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:01.602721   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.603002   66989 buildroot.go:166] provisioning hostname "embed-certs-388383"
	I0829 20:26:01.603033   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.603232   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.605841   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.606170   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.606201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.606333   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.606505   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.606672   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.606786   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.606950   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.607121   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.607144   66989 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-388383 && echo "embed-certs-388383" | sudo tee /etc/hostname
	I0829 20:26:01.717669   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-388383
	
	I0829 20:26:01.717709   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.720400   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.720705   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.720733   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.720863   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.721097   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.721280   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.721446   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.721585   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.721811   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.721842   66989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-388383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-388383/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-388383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:01.827800   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:01.827835   66989 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:01.827869   66989 buildroot.go:174] setting up certificates
	I0829 20:26:01.827882   66989 provision.go:84] configureAuth start
	I0829 20:26:01.827894   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.828214   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:01.830619   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.831150   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.831184   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.831339   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.833642   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.833961   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.833987   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.834161   66989 provision.go:143] copyHostCerts
	I0829 20:26:01.834217   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:01.834241   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:01.834322   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:01.834445   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:01.834457   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:01.834491   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:01.834608   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:01.834621   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:01.834660   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:01.834726   66989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.embed-certs-388383 san=[127.0.0.1 192.168.61.202 embed-certs-388383 localhost minikube]
	I0829 20:26:01.992735   66989 provision.go:177] copyRemoteCerts
	I0829 20:26:01.992794   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:01.992819   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.995463   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.995835   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.995862   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.996006   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.996179   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.996333   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.996460   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.077017   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:02.105498   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 20:26:02.133974   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 20:26:02.161330   66989 provision.go:87] duration metric: took 333.435119ms to configureAuth
	I0829 20:26:02.161362   66989 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:02.161579   66989 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:02.161707   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.164373   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.164696   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.164724   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.164909   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.165111   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.165276   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.165402   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.165535   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:02.165697   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:02.165711   66989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:02.377994   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:02.378022   66989 machine.go:96] duration metric: took 875.231112ms to provisionDockerMachine
	I0829 20:26:02.378037   66989 start.go:293] postStartSetup for "embed-certs-388383" (driver="kvm2")
	I0829 20:26:02.378053   66989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:02.378078   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.378404   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:02.378432   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.380920   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.381329   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.381358   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.381564   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.381797   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.381975   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.382124   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.461053   66989 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:02.465391   66989 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:02.465417   66989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:02.465479   66989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:02.465550   66989 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:02.465635   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:02.474909   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:02.500025   66989 start.go:296] duration metric: took 121.973853ms for postStartSetup
	I0829 20:26:02.500064   66989 fix.go:56] duration metric: took 20.86974885s for fixHost
	I0829 20:26:02.500082   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.502976   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.503380   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.503411   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.503599   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.503808   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.503976   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.504126   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.504283   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:02.504459   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:02.504469   66989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:02.603161   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963162.568310162
	
	I0829 20:26:02.603181   66989 fix.go:216] guest clock: 1724963162.568310162
	I0829 20:26:02.603187   66989 fix.go:229] Guest: 2024-08-29 20:26:02.568310162 +0000 UTC Remote: 2024-08-29 20:26:02.500067292 +0000 UTC m=+288.185978445 (delta=68.24287ms)
	I0829 20:26:02.603210   66989 fix.go:200] guest clock delta is within tolerance: 68.24287ms
	I0829 20:26:02.603216   66989 start.go:83] releasing machines lock for "embed-certs-388383", held for 20.972921408s
	I0829 20:26:02.603248   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.603532   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:02.606426   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.606804   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.606834   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.607021   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607527   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607694   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607770   66989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:02.607809   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.607878   66989 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:02.607896   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.610239   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610264   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610657   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.610685   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610723   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.610742   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610844   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.611014   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.611014   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.611145   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.611208   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.611268   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.611341   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.611399   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.712435   66989 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:02.718614   66989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:02.865138   66989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:02.871510   66989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:02.871593   66989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:02.887316   66989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
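
The two log lines above show minikube's CNI cleanup: any bridge/podman config under /etc/cni/net.d is moved aside with a ".mk_disabled" suffix (loopback is left alone), so the config minikube writes later is the only one CRI-O picks up. A minimal Go sketch of that rename pass; the function name and error handling here are illustrative, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableConflictingCNI renames bridge/podman CNI configs so the container
    // runtime ignores them, mirroring the `find ... -exec mv` in the log.
    func disableConflictingCNI(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var disabled []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				return disabled, err
    			}
    			disabled = append(disabled, src)
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	disabled, err := disableConflictingCNI("/etc/cni/net.d")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
    }
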
	I0829 20:26:02.887340   66989 start.go:495] detecting cgroup driver to use...
	I0829 20:26:02.887394   66989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:02.905024   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:02.918922   66989 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:02.918986   66989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:02.932660   66989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:02.946679   66989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:03.056273   66989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:03.216885   66989 docker.go:233] disabling docker service ...
	I0829 20:26:03.216959   66989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:03.231363   66989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:03.245609   66989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:03.368087   66989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:03.493947   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:03.508803   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:03.527542   66989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:26:03.527607   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.538301   66989 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:03.538370   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.549672   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.562203   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.573572   66989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:03.585031   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.596778   66989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.619405   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
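
The sed invocations above rewrite CRI-O's drop-in config in place: pin the pause image, force the cgroupfs cgroup manager, and open unprivileged ports via default_sysctls. A hedged Go sketch of the same in-place rewrite for the first two settings (regex-based, like the sed lines; not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // configureCrio replaces the pause_image and cgroup_manager lines of the
    // given CRI-O drop-in config, equivalent to the two `sed -i` runs above.
    func configureCrio(path, pauseImage, cgroupMgr string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupMgr)))
    	return os.WriteFile(path, out, 0644)
    }

    func main() {
    	err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf",
    		"registry.k8s.io/pause:3.10", "cgroupfs")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
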
	I0829 20:26:03.630337   66989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:03.640492   66989 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:03.640568   66989 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:03.657931   66989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
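
The status-255 sysctl above is expected on a fresh VM: /proc/sys/net/bridge/ does not exist until the br_netfilter module is loaded, so minikube loads it and then enables IPv4 forwarding. A small Go sketch of that recovery sequence, under the assumption it runs as root on the guest:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureBridgeNetfilter loads br_netfilter when the bridge sysctl path is
    // missing, then turns on ip_forward, mirroring the log above.
    func ensureBridgeNetfilter() error {
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
    		// A missing sysctl usually just means the module is not loaded yet.
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
    		}
    	}
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
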
	I0829 20:26:03.673756   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:03.792856   66989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:03.880493   66989 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:03.880551   66989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:03.885793   66989 start.go:563] Will wait 60s for crictl version
	I0829 20:26:03.885850   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:26:03.889835   66989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:03.928633   66989 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:03.928702   66989 ssh_runner.go:195] Run: crio --version
	I0829 20:26:03.958861   66989 ssh_runner.go:195] Run: crio --version
	I0829 20:26:03.987724   66989 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:26:03.989009   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:03.991889   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:03.992308   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:03.992334   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:03.992567   66989 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:03.996945   66989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
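
The bash one-liner above is an idempotent /etc/hosts update: strip any existing host.minikube.internal entry, append the current gateway IP, and copy the result back, so repeated starts never accumulate duplicates. A Go sketch of the same filter-then-append pattern (helper name is mine):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pinHost removes any line ending in "\t<host>" and appends "ip\thost",
    // the same effect as the `{ grep -v ...; echo ...; }` pipeline in the log.
    func pinHost(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := pinHost("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
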
	I0829 20:26:04.009353   66989 kubeadm.go:883] updating cluster {Name:embed-certs-388383 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:04.009462   66989 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:26:04.009501   66989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:04.051583   66989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:26:04.051643   66989 ssh_runner.go:195] Run: which lz4
	I0829 20:26:04.055929   66989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:04.060214   66989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:04.060240   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 20:26:03.867691   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting to get IP...
	I0829 20:26:03.868798   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:03.869246   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:03.869318   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:03.869235   68552 retry.go:31] will retry after 220.928648ms: waiting for machine to come up
	I0829 20:26:04.091675   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.092057   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.092084   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.092020   68552 retry.go:31] will retry after 352.781755ms: waiting for machine to come up
	I0829 20:26:04.446766   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.447277   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.447301   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.447224   68552 retry.go:31] will retry after 480.96031ms: waiting for machine to come up
	I0829 20:26:04.929561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.930149   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.930181   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.930051   68552 retry.go:31] will retry after 415.057247ms: waiting for machine to come up
	I0829 20:26:05.346757   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.347224   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.347258   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.347196   68552 retry.go:31] will retry after 609.958508ms: waiting for machine to come up
	I0829 20:26:05.959227   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.959774   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.959825   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.959702   68552 retry.go:31] will retry after 680.801337ms: waiting for machine to come up
	I0829 20:26:06.642811   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:06.643312   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:06.643343   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:06.643269   68552 retry.go:31] will retry after 995.561322ms: waiting for machine to come up
	I0829 20:26:07.640147   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:07.640617   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:07.640652   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:07.640588   68552 retry.go:31] will retry after 1.22436043s: waiting for machine to come up
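
The retry.go:31 lines above show how minikube waits for the VM's DHCP lease: repeated lookups with a randomized, roughly growing delay (220ms, 352ms, 480ms, ...) rather than a tight poll. A hedged sketch of that pattern; lookupIP is a stand-in for the libvirt lease query, not a real API:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for the DHCP-lease query against libvirt; assumed here.
    func lookupIP() (string, error) { return "", errors.New("no lease yet") }

    // waitForIP polls lookupIP with jittered, doubling delays until it succeeds
    // or the deadline passes, echoing the "will retry after ..." log lines.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() { fmt.Println(waitForIP(2 * time.Second)) }
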
	I0829 20:26:05.472272   66989 crio.go:462] duration metric: took 1.416373513s to copy over tarball
	I0829 20:26:05.472355   66989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:07.583560   66989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.111164398s)
	I0829 20:26:07.583595   66989 crio.go:469] duration metric: took 2.111297179s to extract the tarball
	I0829 20:26:07.583605   66989 ssh_runner.go:146] rm: /preloaded.tar.lz4
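
Taken together, the preload steps above are: detect that the guest has no /preloaded.tar.lz4, scp the cached image tarball over, unpack it into /var with xattrs/capabilities preserved, then delete the tarball. A rough Go sketch of the guest-side half, assuming the tarball has already been copied in:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // restorePreload unpacks the preloaded image tarball into /var and removes
    // it afterwards, using the same tar flags as the log above.
    func restorePreload(tarball string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		return fmt.Errorf("preload missing, would scp it first: %w", err)
    	}
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("extract: %v: %s", err, out)
    	}
    	return os.Remove(tarball)
    }

    func main() {
    	if err := restorePreload("/preloaded.tar.lz4"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
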
	I0829 20:26:07.622447   66989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:07.671704   66989 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:26:07.671732   66989 cache_images.go:84] Images are preloaded, skipping loading
	I0829 20:26:07.671742   66989 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.31.0 crio true true} ...
	I0829 20:26:07.671869   66989 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-388383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:07.671958   66989 ssh_runner.go:195] Run: crio config
	I0829 20:26:07.717217   66989 cni.go:84] Creating CNI manager for ""
	I0829 20:26:07.717242   66989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:07.717263   66989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:07.717290   66989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-388383 NodeName:embed-certs-388383 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:26:07.717465   66989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-388383"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
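
The kubeadm config dumped above is generated from the cluster's node parameters (advertise address, node name, CRI socket). A hedged sketch of how such a v1beta3 InitConfiguration fragment can be rendered with text/template; the template and struct here are illustrative, not minikube's actual bootstrapper code:

    package main

    import (
    	"os"
    	"text/template"
    )

    // initCfg mirrors the InitConfiguration section of the kubeadm config above.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.IP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.Name}}"
      kubeletExtraArgs:
        node-ip: {{.IP}}
    `

    func main() {
    	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
    	tmpl.Execute(os.Stdout, struct {
    		Name, IP string
    		Port     int
    	}{Name: "embed-certs-388383", IP: "192.168.61.202", Port: 8443})
    }
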
	I0829 20:26:07.717549   66989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:26:07.727174   66989 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:07.727258   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:07.736512   66989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0829 20:26:07.752727   66989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:07.772430   66989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0829 20:26:07.793343   66989 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:07.798214   66989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:07.811285   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:07.927025   66989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:07.943741   66989 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383 for IP: 192.168.61.202
	I0829 20:26:07.943765   66989 certs.go:194] generating shared ca certs ...
	I0829 20:26:07.943784   66989 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:07.943984   66989 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:07.944047   66989 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:07.944061   66989 certs.go:256] generating profile certs ...
	I0829 20:26:07.944177   66989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/client.key
	I0829 20:26:07.944254   66989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.key.03b29390
	I0829 20:26:07.944317   66989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.key
	I0829 20:26:07.944494   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:07.944538   66989 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:07.944551   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:07.944581   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:07.944605   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:07.944628   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:07.944670   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:07.945252   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:07.971277   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:08.012892   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:08.042038   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:08.067708   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0829 20:26:08.095930   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:26:08.127171   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:08.151287   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:26:08.175525   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:08.199076   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:08.222783   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:08.245783   66989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:08.261839   66989 ssh_runner.go:195] Run: openssl version
	I0829 20:26:08.267545   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:08.278347   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.284232   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.284283   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.292024   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:08.306831   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:08.320607   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.325027   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.325070   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.330808   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:08.341457   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:08.352323   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.356822   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.356891   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.362617   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
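
The hash-and-symlink dance above is the OpenSSL CA-trust convention: OpenSSL looks up CAs in /etc/ssl/certs by subject-hash filename, so each installed PEM gets a <hash>.0 symlink whose name comes from "openssl x509 -hash" (51391683.0, 3ec20f2e.0, b5213941.0 in the log). A small Go sketch of installing one such link; the function name is mine:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // trustCert computes the OpenSSL subject hash for a PEM cert and links
    // /etc/ssl/certs/<hash>.0 at it, mirroring the `ln -fs` in the log.
    func trustCert(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // replace a stale link, like `ln -fs`
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
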
	I0829 20:26:08.373755   66989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:08.378153   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:08.384225   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:08.390136   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:08.396002   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:08.401713   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:08.407437   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
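
Each "openssl x509 ... -checkend 86400" run above asks the same question: does this certificate survive the next 24 hours? A Go equivalent using crypto/x509, offered as a sketch rather than minikube's code (the cert path in main is just one of the files checked above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkEnd fails if the PEM certificate at path expires within the window,
    // matching openssl's -checkend semantics.
    func checkEnd(path string, within time.Duration) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return err
    	}
    	if time.Now().Add(within).After(cert.NotAfter) {
    		return fmt.Errorf("%s expires at %s", path, cert.NotAfter)
    	}
    	return nil
    }

    func main() {
    	err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
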
	I0829 20:26:08.413033   66989 kubeadm.go:392] StartCluster: {Name:embed-certs-388383 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:08.413119   66989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:08.413173   66989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:08.450685   66989 cri.go:89] found id: ""
	I0829 20:26:08.450757   66989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:08.460787   66989 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:08.460809   66989 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:08.460853   66989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:08.470179   66989 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:08.471673   66989 kubeconfig.go:125] found "embed-certs-388383" server: "https://192.168.61.202:8443"
	I0829 20:26:08.474839   66989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:08.483951   66989 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0829 20:26:08.483992   66989 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:08.484007   66989 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:08.484085   66989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:08.525947   66989 cri.go:89] found id: ""
	I0829 20:26:08.526013   66989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:08.541862   66989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:08.551179   66989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:08.551200   66989 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:08.551249   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:26:08.559897   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:08.559970   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:08.569317   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:26:08.577858   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:08.577905   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:08.587113   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:26:08.595645   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:08.595705   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:08.604803   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:26:08.613070   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:08.613125   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
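
The grep/rm sequence above prunes stale kubeconfigs: each /etc/kubernetes/*.conf is kept only if it already points at control-plane.minikube.internal:8443, and otherwise removed so "kubeadm init phase kubeconfig" regenerates it. A Go sketch of that check (function name mine; here the files simply don't exist, so nothing is removed):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // pruneStale deletes a kubeconfig that does not reference the expected
    // control-plane endpoint; a missing file is fine, kubeadm will create it.
    func pruneStale(path string, endpoint []byte) error {
    	data, err := os.ReadFile(path)
    	if os.IsNotExist(err) {
    		return nil
    	}
    	if err != nil {
    		return err
    	}
    	if !bytes.Contains(data, endpoint) {
    		return os.Remove(path)
    	}
    	return nil
    }

    func main() {
    	endpoint := []byte("https://control-plane.minikube.internal:8443")
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := pruneStale(f, endpoint); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    		}
    	}
    }
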
	I0829 20:26:08.622037   66989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:08.631330   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:08.742682   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:08.866518   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:08.866954   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:08.866985   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:08.866896   68552 retry.go:31] will retry after 1.707701085s: waiting for machine to come up
	I0829 20:26:10.576676   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:10.577094   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:10.577124   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:10.577047   68552 retry.go:31] will retry after 1.496799212s: waiting for machine to come up
	I0829 20:26:12.075964   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:12.076412   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:12.076451   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:12.076377   68552 retry.go:31] will retry after 2.246779697s: waiting for machine to come up
	I0829 20:26:09.809078   66989 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.066360218s)
	I0829 20:26:09.809118   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.027517   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.095959   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.199656   66989 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:10.199745   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:10.700569   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:11.200798   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:11.700664   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.200052   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.700839   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.715319   66989 api_server.go:72] duration metric: took 2.515661322s to wait for apiserver process to appear ...
	I0829 20:26:12.715351   66989 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:26:12.715374   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.687527   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:15.687558   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:15.687572   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.716339   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:15.716365   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:15.716378   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.750700   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:15.750732   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:16.216255   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:16.224376   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:16.224401   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:16.715457   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:16.723983   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:16.724004   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:17.215562   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:17.219605   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0829 20:26:17.225473   66989 api_server.go:141] control plane version: v1.31.0
	I0829 20:26:17.225496   66989 api_server.go:131] duration metric: took 4.510137186s to wait for apiserver health ...
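
The healthz wait above tolerates the apiserver's normal startup progression: 403 while RBAC is not yet bootstrapped, then 500 while post-start hooks are still failing, then 200. A hedged Go sketch of that poll loop; TLS verification is skipped only because this is a sketch, not how minikube authenticates:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver's /healthz until it returns 200 or the
    // deadline passes, logging intermediate 403/500 responses like the log above.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitHealthz("https://192.168.61.202:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
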
	I0829 20:26:17.225504   66989 cni.go:84] Creating CNI manager for ""
	I0829 20:26:17.225509   66989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:17.227379   66989 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:26:14.324452   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:14.324770   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:14.324808   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:14.324748   68552 retry.go:31] will retry after 3.172592587s: waiting for machine to come up
	I0829 20:26:17.500203   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:17.500540   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:17.500573   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:17.500485   68552 retry.go:31] will retry after 2.81386002s: waiting for machine to come up
	I0829 20:26:17.228505   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:26:17.238762   66989 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 20:26:17.264380   66989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:26:17.274981   66989 system_pods.go:59] 8 kube-system pods found
	I0829 20:26:17.275009   66989 system_pods.go:61] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:26:17.275016   66989 system_pods.go:61] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:26:17.275023   66989 system_pods.go:61] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:26:17.275028   66989 system_pods.go:61] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:26:17.275033   66989 system_pods.go:61] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:26:17.275038   66989 system_pods.go:61] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:26:17.275043   66989 system_pods.go:61] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:26:17.275048   66989 system_pods.go:61] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:26:17.275056   66989 system_pods.go:74] duration metric: took 10.656426ms to wait for pod list to return data ...
	I0829 20:26:17.275074   66989 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:26:17.279480   66989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:26:17.279504   66989 node_conditions.go:123] node cpu capacity is 2
	I0829 20:26:17.279519   66989 node_conditions.go:105] duration metric: took 4.439469ms to run NodePressure ...
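
The NodePressure check above reads capacity straight off the node object (17734596Ki ephemeral storage, 2 CPUs). Assuming the kubeconfig context written for this profile, the same fields can be read back by hand:

    # Read the node capacity fields the check above inspects:
    kubectl --context embed-certs-388383 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'
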
	I0829 20:26:17.279537   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:17.561282   66989 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:26:17.565287   66989 kubeadm.go:739] kubelet initialised
	I0829 20:26:17.565307   66989 kubeadm.go:740] duration metric: took 4.002605ms waiting for restarted kubelet to initialise ...
	I0829 20:26:17.565314   66989 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:17.570104   66989 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.576425   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.576454   66989 pod_ready.go:82] duration metric: took 6.324083ms for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.576464   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.576474   66989 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.582501   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "etcd-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.582523   66989 pod_ready.go:82] duration metric: took 6.040325ms for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.582547   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "etcd-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.582556   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.588534   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.588554   66989 pod_ready.go:82] duration metric: took 5.988678ms for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.588562   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.588568   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.668334   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.668365   66989 pod_ready.go:82] duration metric: took 79.787211ms for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.668378   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.668386   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.068248   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-proxy-fcxs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.068286   66989 pod_ready.go:82] duration metric: took 399.880238ms for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.068299   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-proxy-fcxs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.068308   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.468096   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.468126   66989 pod_ready.go:82] duration metric: took 399.810823ms for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.468134   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.468141   66989 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.868444   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.868478   66989 pod_ready.go:82] duration metric: took 400.329102ms for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.868490   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.868499   66989 pod_ready.go:39] duration metric: took 1.303176044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
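
The loop above polls each system-critical pod for the Ready condition and skips ahead per pod while the node itself still reports Ready "False". A rough kubectl equivalent against two of the same labels (not minikube's code path, just the same wait expressed by hand, with the timeout matching the 4m budget in the log):

    # Roughly what the pod_ready loop automates:
    kubectl --context embed-certs-388383 -n kube-system wait \
      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
    kubectl --context embed-certs-388383 -n kube-system wait \
      --for=condition=Ready pod -l component=kube-apiserver --timeout=4m
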
	I0829 20:26:18.868519   66989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:26:18.880892   66989 ops.go:34] apiserver oom_adj: -16
	I0829 20:26:18.880916   66989 kubeadm.go:597] duration metric: took 10.42010114s to restartPrimaryControlPlane
	I0829 20:26:18.880925   66989 kubeadm.go:394] duration metric: took 10.467899141s to StartCluster
	I0829 20:26:18.880946   66989 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:18.881032   66989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:26:18.884130   66989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:18.884619   66989 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:26:18.884674   66989 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:26:18.884749   66989 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-388383"
	I0829 20:26:18.884765   66989 addons.go:69] Setting default-storageclass=true in profile "embed-certs-388383"
	I0829 20:26:18.884783   66989 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-388383"
	W0829 20:26:18.884792   66989 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:26:18.884804   66989 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-388383"
	I0829 20:26:18.884816   66989 addons.go:69] Setting metrics-server=true in profile "embed-certs-388383"
	I0829 20:26:18.884828   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.884856   66989 addons.go:234] Setting addon metrics-server=true in "embed-certs-388383"
	W0829 20:26:18.884877   66989 addons.go:243] addon metrics-server should already be in state true
	I0829 20:26:18.884884   66989 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:18.884912   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.885134   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885176   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.885216   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885249   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.885291   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885338   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.886484   66989 out.go:177] * Verifying Kubernetes components...
	I0829 20:26:18.887938   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:18.900910   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I0829 20:26:18.901377   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.901917   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.901938   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.902300   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.903062   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.903110   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.903810   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0829 20:26:18.903824   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0829 20:26:18.904282   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.904303   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.904673   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.904691   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.904829   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.904845   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.905017   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.905428   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.905462   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.905664   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.905860   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.909388   66989 addons.go:234] Setting addon default-storageclass=true in "embed-certs-388383"
	W0829 20:26:18.909408   66989 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:26:18.909437   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.909793   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.909839   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.921180   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
	I0829 20:26:18.921597   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.922074   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.922087   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.922470   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.922697   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.922725   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I0829 20:26:18.923052   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.923592   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.923610   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.923919   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.924057   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.924063   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45681
	I0829 20:26:18.924461   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.924519   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.924984   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.925002   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.925632   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.925682   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.926152   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.926194   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.926494   66989 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:18.927266   66989 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:26:18.928130   66989 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:26:18.928141   66989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:26:18.928155   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.928843   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:26:18.928863   66989 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:26:18.928888   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.931716   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932273   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.932296   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932424   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932456   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.932644   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.932810   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.932869   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.932891   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.933050   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.933100   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:18.933271   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.933426   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.933598   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:18.942718   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38109
	I0829 20:26:18.943150   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.943532   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.943553   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.943908   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.944027   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.945304   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.945498   66989 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:26:18.945510   66989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:26:18.945522   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.948108   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.948469   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.948494   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.948730   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.948889   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.949085   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.949222   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:19.111953   66989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:19.131195   66989 node_ready.go:35] waiting up to 6m0s for node "embed-certs-388383" to be "Ready" ...
	I0829 20:26:19.246857   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:26:19.269511   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:26:19.269670   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:26:19.269691   66989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:26:19.346200   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:26:19.346234   66989 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:26:19.374530   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:26:19.374566   66989 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:26:19.418474   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:26:20.495022   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.225476769s)
	I0829 20:26:20.495077   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495090   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495185   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.248286753s)
	I0829 20:26:20.495232   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495249   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495572   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.495600   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.495611   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495619   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495634   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.495663   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.495664   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.495678   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495688   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.496014   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.496029   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.496061   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.496097   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.496111   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.504149   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.504182   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.504419   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.504436   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.519341   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.100829284s)
	I0829 20:26:20.519396   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.519422   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.519670   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.519716   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.519734   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.519746   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.519755   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.520040   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.520055   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.520072   66989 addons.go:475] Verifying addon metrics-server=true in "embed-certs-388383"
	I0829 20:26:20.523102   66989 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
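
With all three addons applied and verified, two independent read-backs are possible: the addon list for this profile, and the metrics API that metrics-server (deployment name inferred from the pod name seen earlier in this log) should now serve. kubectl top only succeeds once the metrics API actually answers:

    # Read back addon state for the profile used in this run:
    minikube -p embed-certs-388383 addons list
    # Confirm metrics-server is rolled out and its API answers:
    kubectl --context embed-certs-388383 -n kube-system rollout status deployment/metrics-server --timeout=2m
    kubectl --context embed-certs-388383 top nodes
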
	I0829 20:26:21.515365   68084 start.go:364] duration metric: took 2m4.795762476s to acquireMachinesLock for "default-k8s-diff-port-145096"
	I0829 20:26:21.515428   68084 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:21.515439   68084 fix.go:54] fixHost starting: 
	I0829 20:26:21.515864   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:21.515904   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:21.535441   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0829 20:26:21.535886   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:21.536390   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:26:21.536414   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:21.536819   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:21.537035   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:21.537203   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:26:21.538735   68084 fix.go:112] recreateIfNeeded on default-k8s-diff-port-145096: state=Stopped err=<nil>
	I0829 20:26:21.538762   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	W0829 20:26:21.538901   68084 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:21.540852   68084 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-145096" ...
	I0829 20:26:21.542258   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Start
	I0829 20:26:21.542429   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring networks are active...
	I0829 20:26:21.543181   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring network default is active
	I0829 20:26:21.543522   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring network mk-default-k8s-diff-port-145096 is active
	I0829 20:26:21.543872   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Getting domain xml...
	I0829 20:26:21.544627   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Creating domain...
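
The Start sequence above (ensure the networks are active, fetch the domain XML, create the domain) is libmachine driving libvirt. An approximate hand-driven equivalent with virsh, using the network and domain names from this log and assuming the default libvirt URI:

    # Approximate virsh equivalent of the libmachine Start call above:
    virsh net-start default || true                          # no-op if already active
    virsh net-start mk-default-k8s-diff-port-145096 || true
    virsh start default-k8s-diff-port-145096
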
	I0829 20:26:20.317138   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317672   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has current primary IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317700   67607 main.go:141] libmachine: (old-k8s-version-032002) Found IP for machine: 192.168.39.116
	I0829 20:26:20.317716   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserving static IP address...
	I0829 20:26:20.318143   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.318169   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserved static IP address: 192.168.39.116
	I0829 20:26:20.318189   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | skip adding static IP to network mk-old-k8s-version-032002 - found existing host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"}
	I0829 20:26:20.318208   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Getting to WaitForSSH function...
	I0829 20:26:20.318217   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting for SSH to be available...
	I0829 20:26:20.320598   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.320961   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.320989   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.321082   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH client type: external
	I0829 20:26:20.321121   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa (-rw-------)
	I0829 20:26:20.321156   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:20.321171   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | About to run SSH command:
	I0829 20:26:20.321185   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | exit 0
	I0829 20:26:20.446805   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | SSH cmd err, output: <nil>: 
	I0829 20:26:20.447204   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetConfigRaw
	I0829 20:26:20.447944   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.450726   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.451160   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451464   67607 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/config.json ...
	I0829 20:26:20.451670   67607 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:20.451690   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:20.451886   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.454120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454496   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.454566   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454648   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.454808   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.454975   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.455123   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.455282   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.455520   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.455533   67607 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:20.555074   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:20.555100   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555331   67607 buildroot.go:166] provisioning hostname "old-k8s-version-032002"
	I0829 20:26:20.555353   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555540   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.558576   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559058   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.559086   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559273   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.559490   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559661   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559834   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.560026   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.560189   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.560201   67607 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-032002 && echo "old-k8s-version-032002" | sudo tee /etc/hostname
	I0829 20:26:20.675352   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-032002
	
	I0829 20:26:20.675400   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.678472   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.678908   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.678944   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.679139   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.679341   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679533   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679710   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.679884   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.680090   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.680108   67607 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-032002' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-032002/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-032002' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:20.789673   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
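
The two-step hostname provisioning above (tee to /etc/hostname, then patch or append the 127.0.1.1 line in /etc/hosts) can be spot-checked on the guest, e.g. over the same SSH session:

    # Spot-check the hostname provisioning that just ran:
    hostname                                  # expected: old-k8s-version-032002
    grep 'old-k8s-version-032002' /etc/hosts  # expected: a 127.0.1.1 mapping
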
	I0829 20:26:20.789713   67607 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:20.789744   67607 buildroot.go:174] setting up certificates
	I0829 20:26:20.789753   67607 provision.go:84] configureAuth start
	I0829 20:26:20.789761   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.790067   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.792822   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793152   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.793173   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793338   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.795624   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.795948   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.795974   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.796080   67607 provision.go:143] copyHostCerts
	I0829 20:26:20.796148   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:20.796168   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:20.796236   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:20.796344   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:20.796355   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:20.796387   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:20.796467   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:20.796476   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:20.796503   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:20.796573   67607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-032002 san=[127.0.0.1 192.168.39.116 localhost minikube old-k8s-version-032002]
	I0829 20:26:20.906382   67607 provision.go:177] copyRemoteCerts
	I0829 20:26:20.906436   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:20.906466   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.909180   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909488   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.909519   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909666   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.909831   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.909963   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.910062   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:20.989017   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:21.018571   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 20:26:21.043015   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:21.067288   67607 provision.go:87] duration metric: took 277.522292ms to configureAuth
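
configureAuth generated the server certificate with the SAN list shown at provision.go:117 and copied it to /etc/docker on the guest. The baked-in SANs can be verified there (e.g. via minikube ssh), using the path the scp lines above show:

    # Verify the SANs in the server certificate that was just copied:
    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'
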
	I0829 20:26:21.067322   67607 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:21.067527   67607 config.go:182] Loaded profile config "old-k8s-version-032002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 20:26:21.067607   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.070264   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070642   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.070679   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070881   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.071088   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071288   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071465   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.071661   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.071886   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.071923   67607 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:21.290979   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:21.291003   67607 machine.go:96] duration metric: took 839.319831ms to provisionDockerMachine
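
The CRIO_MINIKUBE_OPTIONS drop-in written just above only takes effect because the same command restarts crio. Two quick checks on the guest that it landed and the runtime came back:

    # Confirm the drop-in contents and that crio restarted cleanly:
    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio
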
	I0829 20:26:21.291014   67607 start.go:293] postStartSetup for "old-k8s-version-032002" (driver="kvm2")
	I0829 20:26:21.291026   67607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:21.291046   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.291342   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:21.291366   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.293946   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294245   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.294273   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294464   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.294686   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.294840   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.294964   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.373592   67607 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:21.377797   67607 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:21.377826   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:21.377892   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:21.377966   67607 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:21.378054   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:21.387886   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:21.413456   67607 start.go:296] duration metric: took 122.429334ms for postStartSetup
	I0829 20:26:21.413497   67607 fix.go:56] duration metric: took 18.810093949s for fixHost
	I0829 20:26:21.413522   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.416095   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416391   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.416418   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416594   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.416803   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.416970   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.417115   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.417272   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.417474   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.417489   67607 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:21.515167   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963181.486447470
	
	I0829 20:26:21.515190   67607 fix.go:216] guest clock: 1724963181.486447470
	I0829 20:26:21.515200   67607 fix.go:229] Guest: 2024-08-29 20:26:21.48644747 +0000 UTC Remote: 2024-08-29 20:26:21.413502498 +0000 UTC m=+222.629982255 (delta=72.944972ms)
	I0829 20:26:21.515225   67607 fix.go:200] guest clock delta is within tolerance: 72.944972ms
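
fix.go accepts the run because the guest/host skew is inside tolerance; the printed delta is simply the guest clock minus the host-observed remote clock. The same arithmetic for the two epoch timestamps logged above (floating point may round the last digits):

    awk 'BEGIN { printf "delta = %.6f ms\n", (1724963181.486447470 - 1724963181.413502498) * 1000 }'
    # expected: approximately 72.944972 ms, the value logged above
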
	I0829 20:26:21.515232   67607 start.go:83] releasing machines lock for "old-k8s-version-032002", held for 18.911866017s
	I0829 20:26:21.515278   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.515596   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:21.518247   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518682   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.518710   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518835   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519589   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519680   67607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:21.519736   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.519843   67607 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:21.519869   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.522261   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522614   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.522643   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522763   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.522919   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523044   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.523071   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523073   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.523241   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.523240   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.523413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523560   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523712   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.599524   67607 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:21.629122   67607 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:21.778437   67607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:21.784642   67607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:21.784714   67607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:21.802019   67607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:21.802043   67607 start.go:495] detecting cgroup driver to use...
	I0829 20:26:21.802100   67607 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:21.817407   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:21.831514   67607 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:21.831578   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:21.845224   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:21.858522   67607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:21.972769   67607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:22.115154   67607 docker.go:233] disabling docker service ...
	I0829 20:26:22.115240   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:22.130015   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:22.143186   67607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:22.294113   67607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:22.432373   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:22.446427   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:22.465151   67607 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 20:26:22.465218   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.476104   67607 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:22.476177   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.486627   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.497782   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
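	For reference, the three sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying the pause image, cgroup manager, and conmon cgroup settings. A quick check (a sketch, assuming the default drop-in layout the log is editing):

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # expected after the edits:
	    #   pause_image = "registry.k8s.io/pause:3.2"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"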
	I0829 20:26:22.509869   67607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:22.521347   67607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:22.531406   67607 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:22.531455   67607 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:22.544949   67607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
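	The sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is why the runner falls back to modprobe before re-enabling IP forwarding. The recovery amounts to:

	    sudo modprobe br_netfilter                          # creates /proc/sys/net/bridge/*
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" # required for pod-to-pod routing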
	I0829 20:26:22.554918   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:22.687909   67607 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:22.808522   67607 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:22.808595   67607 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:22.814348   67607 start.go:563] Will wait 60s for crictl version
	I0829 20:26:22.814411   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:22.818348   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:22.863797   67607 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:22.863883   67607 ssh_runner.go:195] Run: crio --version
	I0829 20:26:22.893173   67607 ssh_runner.go:195] Run: crio --version
	I0829 20:26:22.923146   67607 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 20:26:22.924299   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:22.927222   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927564   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:22.927589   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927772   67607 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:22.932100   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
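	The one-liner above is minikube's usual /etc/hosts rewrite: strip any stale host.minikube.internal entry, append a fresh one for the current gateway IP, and copy the result back with root privileges (the redirect itself runs unprivileged). Spelled out as a sketch:

	    {
	      grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry
	      echo $'192.168.39.1\thost.minikube.internal'      # append the current gateway IP
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts                        # only the copy back needs root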
	I0829 20:26:22.945139   67607 kubeadm.go:883] updating cluster {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:22.945274   67607 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 20:26:22.945334   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:22.990592   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:22.990668   67607 ssh_runner.go:195] Run: which lz4
	I0829 20:26:22.995104   67607 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:22.999667   67607 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:22.999703   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
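	Since stat finds no /preloaded.tar.lz4 on the guest, the runner ships the 473 MB preload tarball over the SSH session. The check-then-copy logic is roughly the following (a sketch; "node" stands in for the guest, root@192.168.39.116 here, and minikube actually streams the file over its own SSH client rather than the scp binary):

	    ssh node 'stat -c "%s %y" /preloaded.tar.lz4' \
	      || scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 node:/preloaded.tar.lz4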
	I0829 20:26:20.524280   66989 addons.go:510] duration metric: took 1.639608208s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0829 20:26:21.135090   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:23.136839   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:22.825998   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting to get IP...
	I0829 20:26:22.827278   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:22.827766   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:22.827883   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:22.827750   68757 retry.go:31] will retry after 212.207753ms: waiting for machine to come up
	I0829 20:26:23.041113   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.041553   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.041588   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.041508   68757 retry.go:31] will retry after 291.9464ms: waiting for machine to come up
	I0829 20:26:23.335081   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.336072   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.336121   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.336041   68757 retry.go:31] will retry after 478.578755ms: waiting for machine to come up
	I0829 20:26:23.816669   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.817178   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.817233   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.817087   68757 retry.go:31] will retry after 501.093836ms: waiting for machine to come up
	I0829 20:26:24.319836   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.320392   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.320418   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:24.320343   68757 retry.go:31] will retry after 524.430407ms: waiting for machine to come up
	I0829 20:26:24.846908   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.847388   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.847418   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:24.847361   68757 retry.go:31] will retry after 701.573237ms: waiting for machine to come up
	I0829 20:26:25.550328   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:25.550786   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:25.550811   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:25.550727   68757 retry.go:31] will retry after 916.084079ms: waiting for machine to come up
	I0829 20:26:26.468529   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:26.468981   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:26.469012   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:26.468921   68757 retry.go:31] will retry after 1.216322833s: waiting for machine to come up
	I0829 20:26:24.727216   67607 crio.go:462] duration metric: took 1.732148589s to copy over tarball
	I0829 20:26:24.727294   67607 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:27.715640   67607 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.988318238s)
	I0829 20:26:27.715664   67607 crio.go:469] duration metric: took 2.988419957s to extract the tarball
	I0829 20:26:27.715672   67607 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 20:26:27.764192   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:27.797388   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:27.797422   67607 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:26:27.797501   67607 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.797536   67607 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.797549   67607 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.797557   67607 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 20:26:27.797511   67607 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:27.797629   67607 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.797637   67607 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.797519   67607 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799128   67607 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799208   67607 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.799251   67607 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 20:26:27.799361   67607 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.799386   67607 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.799463   67607 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.799697   67607 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.799830   67607 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:27.978022   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.978296   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.981616   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.998987   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.001078   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.004185   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.004672   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 20:26:28.103885   67607 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 20:26:28.103953   67607 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.104013   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.122203   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:28.129983   67607 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 20:26:28.130028   67607 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.130076   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.165427   67607 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 20:26:28.165470   67607 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.165521   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.199971   67607 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 20:26:28.199990   67607 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 20:26:28.200015   67607 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.200021   67607 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200105   67607 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 20:26:28.200155   67607 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.200199   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200204   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200113   67607 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 20:26:28.200325   67607 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 20:26:28.200356   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.329091   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.329139   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.329187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.329260   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.329362   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.484805   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.484857   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.484888   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.484943   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.484963   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.485009   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.487351   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.615121   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.615187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.645371   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.645433   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.645524   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.645573   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.645638   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 20:26:28.729141   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 20:26:28.762530   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 20:26:28.762592   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 20:26:28.782117   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 20:26:28.782155   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 20:26:28.782195   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 20:26:28.782229   67607 cache_images.go:92] duration metric: took 984.791099ms to LoadCachedImages
	W0829 20:26:28.782293   67607 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0829 20:26:28.782310   67607 kubeadm.go:934] updating node { 192.168.39.116 8443 v1.20.0 crio true true} ...
	I0829 20:26:28.782452   67607 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-032002 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
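	The empty ExecStart= line in the generated drop-in above is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet.service before redefining it with the versioned binary and flags. The merged result can be inspected on the guest with:

	    systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in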
	I0829 20:26:28.782518   67607 ssh_runner.go:195] Run: crio config
	I0829 20:26:25.635616   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:26.635463   66989 node_ready.go:49] node "embed-certs-388383" has status "Ready":"True"
	I0829 20:26:26.635488   66989 node_ready.go:38] duration metric: took 7.504259002s for node "embed-certs-388383" to be "Ready" ...
	I0829 20:26:26.635497   66989 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:26.641316   66989 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:26.649602   66989 pod_ready.go:93] pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:26.649634   66989 pod_ready.go:82] duration metric: took 8.284428ms for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:26.649656   66989 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:28.658281   66989 pod_ready.go:103] pod "etcd-embed-certs-388383" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:27.686642   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:27.687071   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:27.687097   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:27.687030   68757 retry.go:31] will retry after 1.410599528s: waiting for machine to come up
	I0829 20:26:29.099622   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:29.100175   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:29.100207   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:29.100083   68757 retry.go:31] will retry after 1.929618787s: waiting for machine to come up
	I0829 20:26:31.031864   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:31.032434   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:31.032467   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:31.032367   68757 retry.go:31] will retry after 1.926271655s: waiting for machine to come up
	I0829 20:26:28.832785   67607 cni.go:84] Creating CNI manager for ""
	I0829 20:26:28.832807   67607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:28.832824   67607 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:28.832843   67607 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-032002 NodeName:old-k8s-version-032002 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 20:26:28.832982   67607 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-032002"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:28.833059   67607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 20:26:28.843483   67607 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:28.843566   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:28.853276   67607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 20:26:28.870579   67607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:28.888053   67607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
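	The 2123-byte file just copied is the kubeadm manifest generated above, staged as /var/tmp/minikube/kubeadm.yaml.new. On a restart, minikube diffs it against the previous kubeadm.yaml to decide whether the control plane needs reconfiguring; a simplified sketch of that compare-then-replace flow (the actual diff and cp appear further down in this log):

	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	      || sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml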
	I0829 20:26:28.905988   67607 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:28.910048   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:28.924996   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:29.075015   67607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:29.095381   67607 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002 for IP: 192.168.39.116
	I0829 20:26:29.095411   67607 certs.go:194] generating shared ca certs ...
	I0829 20:26:29.095430   67607 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.095605   67607 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:29.095686   67607 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:29.095706   67607 certs.go:256] generating profile certs ...
	I0829 20:26:29.095847   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.key
	I0829 20:26:29.095928   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key.a1a2aebb
	I0829 20:26:29.095984   67607 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key
	I0829 20:26:29.096135   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:29.096184   67607 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:29.096198   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:29.096227   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:29.096259   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:29.096299   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:29.096378   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:29.097276   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:29.144259   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:29.171420   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:29.198554   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:29.230750   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 20:26:29.269978   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:26:29.299839   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:29.333742   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:26:29.358352   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:29.382648   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:29.406773   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:29.434106   67607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:29.451913   67607 ssh_runner.go:195] Run: openssl version
	I0829 20:26:29.457722   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:29.469147   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474048   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474094   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.480082   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:29.491083   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:29.501994   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508594   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508643   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.516331   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:29.531067   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:29.543998   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548781   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548845   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.555052   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
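	The hash-named symlinks being created above (3ec20f2e.0, b5213941.0, 51391683.0) follow the OpenSSL CA-directory convention: the link name is the certificate's subject hash plus a .0 suffix, which is how the TLS stack locates a CA cert in /etc/ssl/certs during verification. Equivalent by hand:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem)
	    sudo ln -fs /usr/share/ca-certificates/18361.pem "/etc/ssl/certs/${h}.0"   # here h = 51391683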
	I0829 20:26:29.567902   67607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:29.572879   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:29.579506   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:29.585887   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:29.592262   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:29.598566   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:29.604672   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
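	The block of -checkend probes above is a plain expiry check: openssl exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and 1 if it will have expired, so each control-plane cert is verified to outlive the next day before being reused. For example:

	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "still valid 24h from now"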
	I0829 20:26:29.610830   67607 kubeadm.go:392] StartCluster: {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:29.612915   67607 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:29.613015   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.655224   67607 cri.go:89] found id: ""
	I0829 20:26:29.655314   67607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:29.666216   67607 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:29.666241   67607 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:29.666292   67607 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:29.676908   67607 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:29.678276   67607 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-032002" does not appear in /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:26:29.679313   67607 kubeconfig.go:62] /home/jenkins/minikube-integration/19530-11185/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-032002" cluster setting kubeconfig missing "old-k8s-version-032002" context setting]
	I0829 20:26:29.680756   67607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.764872   67607 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:29.776873   67607 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.116
	I0829 20:26:29.776914   67607 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:29.776926   67607 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:29.776987   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.819268   67607 cri.go:89] found id: ""
	I0829 20:26:29.819347   67607 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:29.840386   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:29.851624   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:29.851650   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:29.851710   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:26:29.861439   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:29.861504   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:29.871594   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:26:29.881126   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:29.881199   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:29.890984   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.900838   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:29.900913   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.910677   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:26:29.920008   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:29.920073   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
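	The grep-then-rm sequence above runs once per kubeconfig; since none of the four files exist on this freshly restarted guest, every grep exits with status 2 and every rm is a no-op. Condensed, the stale-config cleanup is:

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	        || sudo rm -f "/etc/kubernetes/$f.conf"   # drop configs not pointing at the expected endpoint
	    done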
	I0829 20:26:29.929631   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:29.939864   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.096029   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.816696   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.043310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.139291   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.248095   67607 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:31.248190   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:31.749101   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.248718   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.748783   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.248254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.748557   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
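	The 500 ms cadence of the pgrep runs above (…31.248, 31.749, 32.248, …) is the apiserver wait loop from api_server.go; in shell terms it is doing the equivalent of:

	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 0.5   # retry until the apiserver process appears (bounded by the caller's timeout)
	    done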
	I0829 20:26:30.180025   66989 pod_ready.go:93] pod "etcd-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:30.180056   66989 pod_ready.go:82] duration metric: took 3.530390258s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:30.180069   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.187272   66989 pod_ready.go:93] pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.187300   66989 pod_ready.go:82] duration metric: took 2.007222016s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.187313   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.192038   66989 pod_ready.go:93] pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.192062   66989 pod_ready.go:82] duration metric: took 4.740656ms for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.192075   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.196712   66989 pod_ready.go:93] pod "kube-proxy-fcxs4" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.196736   66989 pod_ready.go:82] duration metric: took 4.653538ms for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.196748   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.200491   66989 pod_ready.go:93] pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.200517   66989 pod_ready.go:82] duration metric: took 3.758002ms for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.200528   66989 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:34.207857   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
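Each pod_ready.go wait above reduces to polling the pod until its PodReady condition reports True, which is why metrics-server, whose container never becomes ready in these failed runs, keeps logging "Ready":"False" until the timeout. A sketch of the condition check itself, written against the standard k8s.io/api types:

	package podready

	import corev1 "k8s.io/api/core/v1"

	// isPodReady reports whether the pod's PodReady condition is True,
	// the same signal the pod_ready.go waits above are polling for.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}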
	I0829 20:26:32.960872   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:32.961256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:32.961284   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:32.961208   68757 retry.go:31] will retry after 2.304628323s: waiting for machine to come up
	I0829 20:26:35.267593   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:35.268009   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:35.268041   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:35.267970   68757 retry.go:31] will retry after 3.753063387s: waiting for machine to come up
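The retry.go lines above show libmachine's IP lookup being rescheduled with progressively longer, jittered delays (hundreds of milliseconds at first, then seconds) while the VM boots. A generic sketch of that retry shape, assuming exponential growth with random jitter (minikube's actual schedule differs in detail):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs attempt with a growing, jittered delay,
	// like the lengthening "will retry after ..." intervals above.
	func retryWithBackoff(attempt func() error, maxAttempts int) error {
		delay := 250 * time.Millisecond
		for i := 0; i < maxAttempts; i++ {
			if attempt() == nil {
				return nil
			}
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
			delay *= 2
		}
		return fmt.Errorf("no success after %d attempts", maxAttempts)
	}

	func main() {
		n := 0
		_ = retryWithBackoff(func() error {
			n++
			if n < 3 {
				return fmt.Errorf("not yet")
			}
			return nil
		}, 5)
		fmt.Println("attempts:", n)
	}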
	I0829 20:26:34.249231   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:34.748279   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.249171   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.748943   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.249181   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.748307   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.248484   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.748261   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.248332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.748423   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.705814   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:38.708205   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:40.175557   66841 start.go:364] duration metric: took 53.54411059s to acquireMachinesLock for "no-preload-397724"
	I0829 20:26:40.175617   66841 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:40.175626   66841 fix.go:54] fixHost starting: 
	I0829 20:26:40.176060   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:40.176098   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:40.193828   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45897
	I0829 20:26:40.194231   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:40.194840   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:26:40.194867   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:40.195175   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:40.195364   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:40.195528   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:26:40.197109   66841 fix.go:112] recreateIfNeeded on no-preload-397724: state=Stopped err=<nil>
	I0829 20:26:40.197128   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	W0829 20:26:40.197278   66841 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:40.199263   66841 out.go:177] * Restarting existing kvm2 VM for "no-preload-397724" ...
	I0829 20:26:39.023902   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.024374   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Found IP for machine: 192.168.72.140
	I0829 20:26:39.024399   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has current primary IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.024413   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Reserving static IP address...
	I0829 20:26:39.024832   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Reserved static IP address: 192.168.72.140
	I0829 20:26:39.024856   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for SSH to be available...
	I0829 20:26:39.024894   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-145096", mac: "52:54:00:36:fe:e0", ip: "192.168.72.140"} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.024925   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | skip adding static IP to network mk-default-k8s-diff-port-145096 - found existing host DHCP lease matching {name: "default-k8s-diff-port-145096", mac: "52:54:00:36:fe:e0", ip: "192.168.72.140"}
	I0829 20:26:39.024947   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Getting to WaitForSSH function...
	I0829 20:26:39.026796   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.027100   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.027129   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.027265   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Using SSH client type: external
	I0829 20:26:39.027288   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa (-rw-------)
	I0829 20:26:39.027318   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:39.027333   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | About to run SSH command:
	I0829 20:26:39.027346   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | exit 0
	I0829 20:26:39.146830   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | SSH cmd err, output: <nil>: 
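WaitForSSH above amounts to repeatedly running `exit 0` through an external ssh client until the command succeeds, at which point sshd in the guest is known to be up. A compact sketch of a single probe, reusing the key options from the logged invocation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshReady runs "exit 0" over ssh; a nil error means the daemon
	// accepted the connection and executed the command.
	func sshReady(ip, keyPath string) bool {
		return exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+ip, "exit 0").Run() == nil
	}

	func main() {
		fmt.Println(sshReady("192.168.72.140",
			"/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa"))
	}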
	I0829 20:26:39.147242   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetConfigRaw
	I0829 20:26:39.147931   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:39.150652   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.151055   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.151084   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.151395   68084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/config.json ...
	I0829 20:26:39.151581   68084 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:39.151601   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:39.151814   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.153861   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.154189   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.154222   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.154351   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.154575   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.154746   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.154875   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.155010   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.155219   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.155235   68084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:39.258973   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:39.259006   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.259261   68084 buildroot.go:166] provisioning hostname "default-k8s-diff-port-145096"
	I0829 20:26:39.259292   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.259467   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.262018   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.262472   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.262501   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.262707   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.262886   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.263034   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.263185   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.263344   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.263530   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.263547   68084 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-145096 && echo "default-k8s-diff-port-145096" | sudo tee /etc/hostname
	I0829 20:26:39.379437   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-145096
	
	I0829 20:26:39.379479   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.382263   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.382682   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.382704   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.382913   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.383128   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.383280   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.383389   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.383520   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.383675   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.383692   68084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-145096' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-145096/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-145096' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:39.491756   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:39.491790   68084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:39.491855   68084 buildroot.go:174] setting up certificates
	I0829 20:26:39.491869   68084 provision.go:84] configureAuth start
	I0829 20:26:39.491883   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.492150   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:39.494882   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.495241   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.495269   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.495452   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.497708   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.497980   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.498013   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.498097   68084 provision.go:143] copyHostCerts
	I0829 20:26:39.498157   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:39.498179   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:39.498249   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:39.498347   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:39.498356   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:39.498377   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:39.498430   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:39.498437   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:39.498455   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:39.498507   68084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-145096 san=[127.0.0.1 192.168.72.140 default-k8s-diff-port-145096 localhost minikube]
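provision.go:117 above generates a server certificate whose SANs cover loopback, the VM IP, and the machine's hostnames, signed by the profile CA. A self-signed standard-library sketch of the same SAN set (minikube signs with its CA key rather than self-signing, so this only illustrates the template; the 26280h expiry matches the CertExpiration in the profile config later in this log):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048) // errors elided for brevity
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-145096"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			// The SAN list from the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.140")},
			DNSNames:    []string{"default-k8s-diff-port-145096", "localhost", "minikube"},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}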
	I0829 20:26:39.584313   68084 provision.go:177] copyRemoteCerts
	I0829 20:26:39.584372   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:39.584398   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.587054   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.587377   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.587400   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.587630   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.587823   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.587952   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.588087   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:39.664394   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:39.688852   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0829 20:26:39.714653   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:39.737662   68084 provision.go:87] duration metric: took 245.781265ms to configureAuth
	I0829 20:26:39.737687   68084 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:39.737844   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:39.737911   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.740391   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.740659   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.740688   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.740911   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.741107   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.741256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.741434   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.741612   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.741777   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.741794   68084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:39.954811   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:39.954846   68084 machine.go:96] duration metric: took 803.251945ms to provisionDockerMachine
	I0829 20:26:39.954862   68084 start.go:293] postStartSetup for "default-k8s-diff-port-145096" (driver="kvm2")
	I0829 20:26:39.954877   68084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:39.954898   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:39.955237   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:39.955267   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.958071   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.958575   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.958605   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.958772   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.958969   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.959126   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.959287   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.037153   68084 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:40.041150   68084 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:40.041176   68084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:40.041235   68084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:40.041325   68084 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:40.041415   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:40.050654   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:40.073789   68084 start.go:296] duration metric: took 118.907407ms for postStartSetup
	I0829 20:26:40.073826   68084 fix.go:56] duration metric: took 18.558388385s for fixHost
	I0829 20:26:40.073846   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.076397   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.076749   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.076789   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.076999   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.077200   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.077374   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.077480   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.077598   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:40.077754   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:40.077765   68084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:40.175410   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963200.123461148
	
	I0829 20:26:40.175431   68084 fix.go:216] guest clock: 1724963200.123461148
	I0829 20:26:40.175437   68084 fix.go:229] Guest: 2024-08-29 20:26:40.123461148 +0000 UTC Remote: 2024-08-29 20:26:40.073830105 +0000 UTC m=+143.488576066 (delta=49.631043ms)
	I0829 20:26:40.175456   68084 fix.go:200] guest clock delta is within tolerance: 49.631043ms
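The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the skew is within tolerance. A sketch of the delta computation (float64 parsing costs a little nanosecond precision, which is harmless at millisecond-scale tolerances):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the guest's "seconds.nanoseconds" output and
	// returns guest-minus-host, the quantity reported as "delta" above.
	func guestClockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(hostNow), nil
	}

	func main() {
		d, _ := guestClockDelta("1724963200.123461148", time.Unix(1724963200, 73830105))
		fmt.Println("delta:", d) // ~49.631ms, matching the logged delta
	}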
	I0829 20:26:40.175463   68084 start.go:83] releasing machines lock for "default-k8s-diff-port-145096", held for 18.660059953s
	I0829 20:26:40.175497   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.175781   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:40.179031   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.179457   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.179495   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.179695   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180444   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180528   68084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:40.180581   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.180706   68084 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:40.180729   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.183580   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.183819   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.183963   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.183989   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.184172   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.184174   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.184213   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.184345   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.184416   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.184511   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.184624   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.184626   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.184794   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.184896   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.259854   68084 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:40.290102   68084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:40.439112   68084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:40.449465   68084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:40.449546   68084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:40.471182   68084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:40.471209   68084 start.go:495] detecting cgroup driver to use...
	I0829 20:26:40.471276   68084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:40.492605   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:40.508500   68084 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:40.508561   68084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:40.527534   68084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:40.542013   68084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:40.663843   68084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:40.837228   68084 docker.go:233] disabling docker service ...
	I0829 20:26:40.837293   68084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:40.854285   68084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:40.870148   68084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:41.017156   68084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:41.150436   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:41.165239   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:41.184783   68084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:26:41.184847   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.197358   68084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:41.197417   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.211222   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.225297   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.237205   68084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:41.249875   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.261928   68084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.286145   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.299119   68084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:41.313001   68084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:41.313062   68084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:41.335390   68084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
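The netfilter sequence above is a guarded bootstrap: probing the sysctl fails with status 255 because br_netfilter is not loaded yet, so the module is loaded before IPv4 forwarding is switched on. A direct sketch of the same three commands run locally:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureNetfilter mirrors the logged sequence: probe the bridge
	// sysctl, load br_netfilter if the probe fails, then enable
	// IPv4 forwarding.
	func ensureNetfilter() error {
		if exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run() != nil {
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() {
		if err := ensureNetfilter(); err != nil {
			fmt.Println(err)
		}
	}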
	I0829 20:26:41.348803   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:41.464387   68084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:41.564675   68084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:41.564746   68084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:41.569620   68084 start.go:563] Will wait 60s for crictl version
	I0829 20:26:41.569680   68084 ssh_runner.go:195] Run: which crictl
	I0829 20:26:41.573519   68084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:41.615105   68084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:41.615190   68084 ssh_runner.go:195] Run: crio --version
	I0829 20:26:41.644597   68084 ssh_runner.go:195] Run: crio --version
	I0829 20:26:41.678211   68084 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:26:39.248306   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:39.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.248975   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.748948   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.249144   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.749013   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.248363   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.748624   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.248833   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.748535   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.200748   66841 main.go:141] libmachine: (no-preload-397724) Calling .Start
	I0829 20:26:40.200955   66841 main.go:141] libmachine: (no-preload-397724) Ensuring networks are active...
	I0829 20:26:40.201793   66841 main.go:141] libmachine: (no-preload-397724) Ensuring network default is active
	I0829 20:26:40.202128   66841 main.go:141] libmachine: (no-preload-397724) Ensuring network mk-no-preload-397724 is active
	I0829 20:26:40.202729   66841 main.go:141] libmachine: (no-preload-397724) Getting domain xml...
	I0829 20:26:40.203538   66841 main.go:141] libmachine: (no-preload-397724) Creating domain...
	I0829 20:26:41.516739   66841 main.go:141] libmachine: (no-preload-397724) Waiting to get IP...
	I0829 20:26:41.517840   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:41.518273   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:41.518353   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:41.518262   68926 retry.go:31] will retry after 295.070588ms: waiting for machine to come up
	I0829 20:26:41.814782   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:41.815346   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:41.815369   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:41.815291   68926 retry.go:31] will retry after 239.48527ms: waiting for machine to come up
	I0829 20:26:42.056957   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:42.057459   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:42.057509   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:42.057436   68926 retry.go:31] will retry after 452.012872ms: waiting for machine to come up
	I0829 20:26:42.511068   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:42.511551   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:42.511590   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:42.511520   68926 retry.go:31] will retry after 552.227159ms: waiting for machine to come up
	I0829 20:26:43.066096   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:43.066642   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:43.066673   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:43.066605   68926 retry.go:31] will retry after 666.699647ms: waiting for machine to come up
	I0829 20:26:43.734695   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:43.735402   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:43.735430   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:43.735309   68926 retry.go:31] will retry after 770.756485ms: waiting for machine to come up
	I0829 20:26:40.709553   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:42.712799   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:41.679441   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:41.682807   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:41.683205   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:41.683236   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:41.683489   68084 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:41.688766   68084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:41.705764   68084 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:41.705918   68084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:26:41.705977   68084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:41.752884   68084 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:26:41.752955   68084 ssh_runner.go:195] Run: which lz4
	I0829 20:26:41.757600   68084 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:41.762158   68084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:41.762188   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 20:26:43.201094   68084 crio.go:462] duration metric: took 1.443534343s to copy over tarball
	I0829 20:26:43.201176   68084 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:45.400911   68084 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199703125s)
	I0829 20:26:45.400942   68084 crio.go:469] duration metric: took 2.199820098s to extract the tarball
	I0829 20:26:45.400948   68084 ssh_runner.go:146] rm: /preloaded.tar.lz4
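The preload path above: `crictl images` shows the expected kube images are absent, so the lz4 tarball is scp'd to the guest and unpacked into /var, after which a second `crictl images` confirms everything is preloaded. The extraction step, mirrored as a local sketch of the logged tar invocation:

	package main

	import (
		"os"
		"os/exec"
	)

	// extractPreload unpacks the preloaded image tarball into /var,
	// matching the logged command:
	//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	func extractPreload(tarball string) error {
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		_ = extractPreload("/preloaded.tar.lz4")
	}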
	I0829 20:26:45.439120   68084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:45.482658   68084 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:26:45.482679   68084 cache_images.go:84] Images are preloaded, skipping loading
	I0829 20:26:45.482687   68084 kubeadm.go:934] updating node { 192.168.72.140 8444 v1.31.0 crio true true} ...
	I0829 20:26:45.482801   68084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-145096 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:45.482873   68084 ssh_runner.go:195] Run: crio config
	I0829 20:26:45.532108   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:26:45.532132   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:45.532146   68084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:45.532169   68084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.140 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-145096 NodeName:default-k8s-diff-port-145096 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:26:45.532310   68084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.140
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-145096"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:45.532367   68084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:26:45.542670   68084 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:45.542744   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:45.552622   68084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0829 20:26:45.569765   68084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:45.590972   68084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
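
Note: "scp memory" in these lines means the rendered bytes are written straight from minikube's memory to the remote path rather than copied from a local file. The kubeadm.yaml.new just uploaded is the four kubeadm API documents shown above, separated by "---" (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); on a restart it is diffed against the active /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguration (that diff appears later in this log). The same check can be run by hand on the node ("kubeadm config validate" is available on kubeadm v1.26+):

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
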
	I0829 20:26:45.611421   68084 ssh_runner.go:195] Run: grep 192.168.72.140	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:45.615585   68084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
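
Note: the one-liner above updates /etc/hosts idempotently: it copies every line except any existing control-plane.minikube.internal entry, appends the fresh mapping, and swaps the temp file in with sudo (an unprivileged shell cannot redirect into /etc/hosts directly). The same command, unrolled for readability:

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo "192.168.72.140	control-plane.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
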
	I0829 20:26:45.627911   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:45.757504   68084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:45.776103   68084 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096 for IP: 192.168.72.140
	I0829 20:26:45.776128   68084 certs.go:194] generating shared ca certs ...
	I0829 20:26:45.776159   68084 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:45.776337   68084 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:45.776388   68084 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:45.776400   68084 certs.go:256] generating profile certs ...
	I0829 20:26:45.776511   68084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/client.key
	I0829 20:26:45.776600   68084 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.key.5a49b6b2
	I0829 20:26:45.776650   68084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.key
	I0829 20:26:45.776788   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:45.776827   68084 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:45.776840   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:45.776869   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:45.776940   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:45.776977   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:45.777035   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:45.777916   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:45.823419   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:45.868291   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:45.905178   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:45.934956   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 20:26:45.967570   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 20:26:45.994332   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:46.019268   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 20:26:46.044075   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:46.067906   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:46.092513   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:46.117686   68084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:46.137048   68084 ssh_runner.go:195] Run: openssl version
	I0829 20:26:46.143203   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:46.156407   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.161397   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.161461   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.167587   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:46.179034   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:46.190204   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.194953   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.195010   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.203121   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:46.218606   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:46.233586   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.240100   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.240155   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.247473   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
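
Note: the pattern above is how OpenSSL trust directories work. "openssl x509 -hash -noout -in CERT" prints the hash of the certificate's subject name, and verification only finds a CA in /etc/ssl/certs if a link named <hash>.0 points at it; that is why minikubeCA.pem ends up behind b5213941.0, 18361.pem behind 51391683.0, and 183612.pem behind 3ec20f2e.0. Reproducing one link by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
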
	I0829 20:26:46.259417   68084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:46.264875   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:46.270914   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:46.277211   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:46.283138   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:46.289137   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:46.295044   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
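
Note: "openssl x509 -checkend 86400" exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and non-zero otherwise, so each of the Run lines above is a cheap "does this cert need regenerating within a day?" probe. For example:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "valid for at least another 24h"
    else
      echo "expires within 24h - regenerate"
    fi
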
	I0829 20:26:46.301027   68084 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:46.301120   68084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:46.301177   68084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:46.342913   68084 cri.go:89] found id: ""
	I0829 20:26:46.342988   68084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:46.354198   68084 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:46.354221   68084 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:46.354269   68084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:46.364173   68084 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:46.365182   68084 kubeconfig.go:125] found "default-k8s-diff-port-145096" server: "https://192.168.72.140:8444"
	I0829 20:26:46.367560   68084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:46.377550   68084 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.140
	I0829 20:26:46.377584   68084 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:46.377596   68084 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:46.377647   68084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:46.419141   68084 cri.go:89] found id: ""
	I0829 20:26:46.419215   68084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:46.438037   68084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:46.449021   68084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:46.449041   68084 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:46.449093   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 20:26:46.459396   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:46.459445   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:46.469964   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 20:26:46.479604   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:46.479655   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:46.492672   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 20:26:46.504656   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:46.504714   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:46.520206   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 20:26:46.532067   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:46.532137   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:26:46.541931   68084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:46.551973   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:44.248615   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:44.748528   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.748453   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.248927   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.748628   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.248556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.748332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.248373   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.749111   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
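
Note: the repeated Run lines above are a readiness poll: with pgrep, -f matches against the full command line, -x requires the pattern to match that whole line, and -n returns only the newest match, so the command exits 0 as soon as a kube-apiserver process for this minikube profile exists. Roughly equivalent shell (pattern quoted here, unlike the raw log line):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 0.5; done
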
	I0829 20:26:44.507808   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:44.508340   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:44.508375   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:44.508288   68926 retry.go:31] will retry after 754.614285ms: waiting for machine to come up
	I0829 20:26:45.264587   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:45.265039   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:45.265065   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:45.265003   68926 retry.go:31] will retry after 1.3758308s: waiting for machine to come up
	I0829 20:26:46.642139   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:46.642666   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:46.642690   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:46.642612   68926 retry.go:31] will retry after 1.255043608s: waiting for machine to come up
	I0829 20:26:47.899849   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:47.900330   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:47.900360   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:47.900291   68926 retry.go:31] will retry after 1.517293529s: waiting for machine to come up
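
Note: the retry.go lines above are the driver's backoff loop waiting for the freshly booted VM to pick up a DHCP lease on the libvirt network so it can learn the machine's IP. minikube polls through the libvirt API; a hand-run equivalent (network name and MAC taken from this log) would be:

    until sudo virsh net-dhcp-leases mk-no-preload-397724 | grep -q '52:54:00:e9:bf:ac'; do sleep 1; done
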
	I0829 20:26:45.208067   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:48.177040   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:46.668397   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.497182   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.725573   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.785427   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
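
Note: the restart path does not run a full "kubeadm init"; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd, and later addon) against the same rendered config, which rewrites the static pod manifests under /etc/kubernetes/manifests without recreating the cluster. Any single phase can be re-run by hand the same way, e.g.:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
      kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
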
	I0829 20:26:47.850878   68084 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:47.850972   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.351404   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.852023   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.351402   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.367249   68084 api_server.go:72] duration metric: took 1.516370766s to wait for apiserver process to appear ...
	I0829 20:26:49.367283   68084 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:26:49.367312   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.595653   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:51.595683   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:51.595698   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.609883   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:51.609989   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:51.867454   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.872297   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:51.872328   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:52.367462   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:52.375300   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:52.375333   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
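
Note: by this point only two post-start hooks are still failing (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes), both of which complete within seconds of startup, and the next probe below returns 200. A shell loop that waits for the same transition (curl -f turns an HTTP status of 400 or above into a non-zero exit):

    until curl -ksf https://192.168.72.140:8444/healthz >/dev/null; do sleep 0.5; done && echo "apiserver healthy"
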
	I0829 20:26:52.867827   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:52.872814   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 200:
	ok
	I0829 20:26:52.881061   68084 api_server.go:141] control plane version: v1.31.0
	I0829 20:26:52.881092   68084 api_server.go:131] duration metric: took 3.513801329s to wait for apiserver health ...
	I0829 20:26:52.881102   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:26:52.881111   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:52.882993   68084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:26:49.248291   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.748360   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.248427   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.749087   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.248381   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.748488   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.249250   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.748715   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.748915   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.419781   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:49.420286   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:49.420314   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:49.420244   68926 retry.go:31] will retry after 2.638145598s: waiting for machine to come up
	I0829 20:26:52.059935   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:52.060367   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:52.060411   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:52.060341   68926 retry.go:31] will retry after 2.696474949s: waiting for machine to come up
	I0829 20:26:50.207945   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:52.709407   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:52.884310   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:26:52.901134   68084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
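
Note: the file written above, /etc/cni/net.d/1-k8s.conflist, is the bridge CNI configuration announced at "Configuring bridge CNI"; its exact contents are not printed in this log. For illustration only, a typical bridge + host-local conflist for the 10.244.0.0/16 pod CIDR used here looks roughly like:

    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
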
	I0829 20:26:52.931390   68084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:26:52.952109   68084 system_pods.go:59] 8 kube-system pods found
	I0829 20:26:52.952154   68084 system_pods.go:61] "coredns-6f6b679f8f-5mkxp" [1d3c3a01-1fa6-4d1d-8750-deef4475ba96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:26:52.952166   68084 system_pods.go:61] "etcd-default-k8s-diff-port-145096" [03096d69-48af-4372-9fa0-5a45dcb9603c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:26:52.952177   68084 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-145096" [4be8793a-7934-4c89-a840-49e769673f5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:26:52.952188   68084 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-145096" [a3bec7f8-8163-4afa-af53-282ad755b788] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:26:52.952202   68084 system_pods.go:61] "kube-proxy-b4ffx" [d97e74d5-21d4-4c96-9d94-77767fc4e609] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:26:52.952210   68084 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-145096" [c416b52b-ebf4-4714-bed6-3d25bfaa373c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:26:52.952217   68084 system_pods.go:61] "metrics-server-6867b74b74-5kk6q" [e74224b1-8242-4f7f-b8d6-7d9d4839be53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:26:52.952224   68084 system_pods.go:61] "storage-provisioner" [4e97da7c-af4b-40b3-83fb-82b6c2a2adef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:26:52.952236   68084 system_pods.go:74] duration metric: took 20.81979ms to wait for pod list to return data ...
	I0829 20:26:52.952245   68084 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:26:52.961169   68084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:26:52.961202   68084 node_conditions.go:123] node cpu capacity is 2
	I0829 20:26:52.961214   68084 node_conditions.go:105] duration metric: took 8.963546ms to run NodePressure ...
	I0829 20:26:52.961234   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:53.425201   68084 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:26:53.429605   68084 kubeadm.go:739] kubelet initialised
	I0829 20:26:53.429625   68084 kubeadm.go:740] duration metric: took 4.401784ms waiting for restarted kubelet to initialise ...
	I0829 20:26:53.429632   68084 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:53.434501   68084 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:55.442290   68084 pod_ready.go:103] pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:54.248998   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:54.748438   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.249066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.749293   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.248457   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.748509   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.248949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.748228   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.248717   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.748412   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:54.760175   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:54.760689   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:54.760736   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:54.760667   68926 retry.go:31] will retry after 3.651969786s: waiting for machine to come up
	I0829 20:26:58.415601   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.416019   66841 main.go:141] libmachine: (no-preload-397724) Found IP for machine: 192.168.50.214
	I0829 20:26:58.416045   66841 main.go:141] libmachine: (no-preload-397724) Reserving static IP address...
	I0829 20:26:58.416063   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has current primary IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.416507   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "no-preload-397724", mac: "52:54:00:e9:bf:ac", ip: "192.168.50.214"} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.416533   66841 main.go:141] libmachine: (no-preload-397724) DBG | skip adding static IP to network mk-no-preload-397724 - found existing host DHCP lease matching {name: "no-preload-397724", mac: "52:54:00:e9:bf:ac", ip: "192.168.50.214"}
	I0829 20:26:58.416543   66841 main.go:141] libmachine: (no-preload-397724) Reserved static IP address: 192.168.50.214
	I0829 20:26:58.416552   66841 main.go:141] libmachine: (no-preload-397724) Waiting for SSH to be available...
	I0829 20:26:58.416562   66841 main.go:141] libmachine: (no-preload-397724) DBG | Getting to WaitForSSH function...
	I0829 20:26:58.418849   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.419170   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.419199   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.419312   66841 main.go:141] libmachine: (no-preload-397724) DBG | Using SSH client type: external
	I0829 20:26:58.419351   66841 main.go:141] libmachine: (no-preload-397724) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa (-rw-------)
	I0829 20:26:58.419397   66841 main.go:141] libmachine: (no-preload-397724) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:58.419414   66841 main.go:141] libmachine: (no-preload-397724) DBG | About to run SSH command:
	I0829 20:26:58.419444   66841 main.go:141] libmachine: (no-preload-397724) DBG | exit 0
	I0829 20:26:58.542594   66841 main.go:141] libmachine: (no-preload-397724) DBG | SSH cmd err, output: <nil>: 
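
Note: the "exit 0" above is the driver's WaitForSSH probe: it keeps invoking the external ssh binary with the flags logged a few lines up until a trivial command succeeds, which proves sshd is up and accepting the machine key. The equivalent manual check (options and paths copied from the log):

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
        -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa \
        docker@192.168.50.214 'exit 0' && echo "ssh reachable"
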
	I0829 20:26:58.542925   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetConfigRaw
	I0829 20:26:58.543582   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:58.546057   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.546384   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.546422   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.546691   66841 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/config.json ...
	I0829 20:26:58.546871   66841 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:58.546890   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:58.547113   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.549493   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.549816   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.549854   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.549972   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.550140   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.550260   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.550388   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.550581   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.550805   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.550822   66841 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:58.658784   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:58.658827   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.659063   66841 buildroot.go:166] provisioning hostname "no-preload-397724"
	I0829 20:26:58.659083   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.659220   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.661932   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.662294   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.662320   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.662485   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.662695   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.662880   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.663011   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.663168   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.663343   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.663356   66841 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-397724 && echo "no-preload-397724" | sudo tee /etc/hostname
	I0829 20:26:58.790591   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-397724
	
	I0829 20:26:58.790618   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.793294   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.793612   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.793639   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.793849   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.794035   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.794192   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.794289   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.794430   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.794656   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.794678   66841 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-397724' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-397724/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-397724' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:58.915925   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
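
Note: the script above applies the Debian-style hostname mapping: if /etc/hosts already has a 127.0.1.1 line it is rewritten to point at the new hostname, otherwise one is appended, so that "no-preload-397724" always resolves locally even without DNS. A quick check after provisioning:

    grep -n 'no-preload-397724' /etc/hosts   # expect a 127.0.1.1 entry
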
	I0829 20:26:58.915958   66841 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:58.915981   66841 buildroot.go:174] setting up certificates
	I0829 20:26:58.915991   66841 provision.go:84] configureAuth start
	I0829 20:26:58.916000   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.916279   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:58.919034   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.919385   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.919415   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.919523   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.921483   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.921805   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.921831   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.922015   66841 provision.go:143] copyHostCerts
	I0829 20:26:58.922062   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:58.922079   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:58.922135   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:58.922242   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:58.922256   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:58.922288   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:58.922365   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:58.922375   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:58.922400   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:58.922491   66841 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.no-preload-397724 san=[127.0.0.1 192.168.50.214 localhost minikube no-preload-397724]
	I0829 20:26:55.206462   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:57.207175   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.207454   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.264390   66841 provision.go:177] copyRemoteCerts
	I0829 20:26:59.264446   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:59.264467   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.267259   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.267603   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.267626   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.267794   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.268014   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.268190   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.268367   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.353746   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:59.378289   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0829 20:26:59.402330   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:59.425412   66841 provision.go:87] duration metric: took 509.408381ms to configureAuth
	I0829 20:26:59.425442   66841 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:59.425616   66841 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:59.425679   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.428148   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.428503   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.428545   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.428698   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.428906   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.429077   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.429227   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.429365   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:59.429511   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:59.429524   66841 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:59.666382   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:59.666408   66841 machine.go:96] duration metric: took 1.11952301s to provisionDockerMachine
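The CRIO_MINIKUBE_OPTIONS drop-in above is written over SSH and CRI-O is restarted in the same command. A rough sketch of running such a remote command with golang.org/x/crypto/ssh, reusing the key path and user shown in the sshutil.go:53 lines; minikube's own ssh_runner differs in detail:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; no known_hosts check
	}
	client, err := ssh.Dial("tcp", "192.168.50.214:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// The same command the log shows: write the drop-in, bounce the service.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := sess.CombinedOutput(cmd)
	fmt.Println(string(out), err)
}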
	I0829 20:26:59.666422   66841 start.go:293] postStartSetup for "no-preload-397724" (driver="kvm2")
	I0829 20:26:59.666436   66841 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:59.666458   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.666833   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:59.666881   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.669407   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.669725   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.669751   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.669888   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.670073   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.670214   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.670316   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.753440   66841 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:59.758408   66841 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:59.758431   66841 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:59.758509   66841 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:59.758632   66841 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:59.758753   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:59.768355   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:59.792742   66841 start.go:296] duration metric: took 126.308201ms for postStartSetup
	I0829 20:26:59.792782   66841 fix.go:56] duration metric: took 19.617155195s for fixHost
	I0829 20:26:59.792806   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.795380   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.795744   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.795781   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.795917   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.796124   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.796237   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.796376   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.796488   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:59.796668   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:59.796680   66841 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:59.903539   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963219.868600963
	
	I0829 20:26:59.903564   66841 fix.go:216] guest clock: 1724963219.868600963
	I0829 20:26:59.903574   66841 fix.go:229] Guest: 2024-08-29 20:26:59.868600963 +0000 UTC Remote: 2024-08-29 20:26:59.792787483 +0000 UTC m=+355.719318860 (delta=75.81348ms)
	I0829 20:26:59.903623   66841 fix.go:200] guest clock delta is within tolerance: 75.81348ms
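fix.go reads the guest clock with `date +%s.%N`, compares it to the host-side timestamp, and accepts the machine when the delta is small. A sketch of that comparison using the two values from the log; the 2s tolerance is an assumed threshold, not necessarily minikube's:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(s string) time.Time {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	return time.Unix(sec, nsec)
}

func main() {
	guest := parseGuestClock("1724963219.868600963") // SSH output from the log
	remote := time.Date(2024, 8, 29, 20, 26, 59, 792787483, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for illustration
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}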
	I0829 20:26:59.903632   66841 start.go:83] releasing machines lock for "no-preload-397724", held for 19.728042303s
	I0829 20:26:59.903676   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.903967   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:59.906798   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.907183   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.907212   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.907378   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.907804   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.907970   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.908038   66841 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:59.908072   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.908324   66841 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:59.908346   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.910843   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911025   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911187   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.911215   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911325   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.911415   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.911437   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911485   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.911640   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.911649   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.911847   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.911848   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.911978   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.912119   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:27:00.023116   66841 ssh_runner.go:195] Run: systemctl --version
	I0829 20:27:00.029346   66841 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:27:00.169122   66841 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:27:00.176823   66841 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:27:00.176913   66841 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:27:00.194795   66841 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:27:00.194836   66841 start.go:495] detecting cgroup driver to use...
	I0829 20:27:00.194906   66841 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:27:00.212145   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:27:00.226584   66841 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:27:00.226656   66841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:27:00.240525   66841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:27:00.256847   66841 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:27:00.371938   66841 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:27:00.516891   66841 docker.go:233] disabling docker service ...
	I0829 20:27:00.516964   66841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:27:00.531127   66841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:27:00.543483   66841 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:27:00.672033   66841 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:27:00.794828   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:27:00.809204   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:27:00.828484   66841 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:27:00.828547   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.839273   66841 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:27:00.839344   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.850336   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.860980   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.871661   66841 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:27:00.884343   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.895190   66841 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.912700   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.923383   66841 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:27:00.934168   66841 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:27:00.934231   66841 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:27:00.948181   66841 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
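The three runs above form a probe-then-fallback: when the bridge sysctl cannot be read because /proc/sys/net/bridge is absent, br_netfilter is loaded, and IPv4 forwarding is enabled afterwards. The same sequence as a small Go sketch (run locally here; the real code issues these over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func prepareNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// "couldn't verify netfilter ... which might be okay": load the module.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// Enable forwarding so pod traffic can be routed.
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := prepareNetfilter(); err != nil {
		fmt.Println(err)
	}
}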
	I0829 20:27:00.959121   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:27:01.072055   66841 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:27:01.163024   66841 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:27:01.163104   66841 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:27:01.167949   66841 start.go:563] Will wait 60s for crictl version
	I0829 20:27:01.168011   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.171707   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:27:01.212950   66841 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:27:01.213031   66841 ssh_runner.go:195] Run: crio --version
	I0829 20:27:01.242181   66841 ssh_runner.go:195] Run: crio --version
	I0829 20:27:01.276389   66841 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:26:57.441729   68084 pod_ready.go:93] pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:57.441753   68084 pod_ready.go:82] duration metric: took 4.007206558s for pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:57.441762   68084 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:59.448210   68084 pod_ready.go:103] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.248692   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:59.748815   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.748264   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.249241   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.748894   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.249045   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.748765   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.248902   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.748333   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
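Process 67607 above is polling for a running kube-apiserver with the same pgrep every 500ms. A sketch of that wait loop; the 4-minute budget is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer retries the pgrep from the log every 500ms until the
// kube-apiserver process appears or the deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not start within %v", timeout)
}

func main() { fmt.Println(waitForAPIServer(4 * time.Minute)) }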
	I0829 20:27:01.277829   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:27:01.280762   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:27:01.281144   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:27:01.281171   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:27:01.281367   66841 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 20:27:01.285714   66841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
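The /etc/hosts one-liner above is an idempotent upsert: drop any existing line for the name, append the fresh mapping, then copy the temp file back into place. The same logic as a Go sketch, writing to a scratch path since /etc/hosts itself needs root:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any line ending in "\t<name>" (mirroring the
// grep -v $'\t<name>$' in the log) and appends "<ip>\t<name>".
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // the log uses sudo cp for the same effect
}

func main() {
	fmt.Println(upsertHost("/tmp/hosts", "192.168.50.1", "host.minikube.internal"))
}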
	I0829 20:27:01.297903   66841 kubeadm.go:883] updating cluster {Name:no-preload-397724 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:27:01.298010   66841 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:27:01.298041   66841 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:27:01.331474   66841 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:27:01.331498   66841 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:27:01.331566   66841 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.331572   66841 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.331609   66841 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.331632   66841 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.331643   66841 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.331615   66841 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0829 20:27:01.331737   66841 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.331758   66841 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.333182   66841 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.333233   66841 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.333206   66841 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.333195   66841 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.333191   66841 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.333278   66841 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.333191   66841 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.333333   66841 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0829 20:27:01.507028   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.514096   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.526653   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.530292   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.531828   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.534432   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.550465   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0829 20:27:01.613161   66841 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0829 20:27:01.613209   66841 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.613287   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.631193   66841 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0829 20:27:01.631236   66841 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.631285   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.687868   66841 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0829 20:27:01.687911   66841 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.687967   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.700369   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.713036   66841 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0829 20:27:01.713102   66841 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.713159   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.722934   66841 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0829 20:27:01.722991   66841 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.723042   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.722941   66841 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0829 20:27:01.723130   66841 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.723159   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.785242   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.785246   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.785342   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.785391   66841 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0829 20:27:01.785438   66841 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.785450   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.785474   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.785479   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.785534   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.925322   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.925371   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.925374   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.925474   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.925518   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.925569   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.925593   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.072628   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:02.072690   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:02.072744   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:02.072822   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:02.072867   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.176999   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0829 20:27:02.177031   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:02.177503   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:02.177507   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.177572   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0829 20:27:02.177581   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0829 20:27:02.177678   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:02.177682   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:02.185515   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0829 20:27:02.185585   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.185624   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:02.259015   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0829 20:27:02.259076   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0829 20:27:02.259087   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0829 20:27:02.259106   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.259113   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0829 20:27:02.259138   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0829 20:27:02.259147   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.259155   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:02.259152   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 20:27:02.259139   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0829 20:27:02.259157   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:02.259240   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:01.208076   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:03.208339   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:01.954153   68084 pod_ready.go:103] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:03.454991   68084 pod_ready.go:93] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:03.455023   68084 pod_ready.go:82] duration metric: took 6.013253793s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:03.455036   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:05.461938   68084 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:04.249082   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:04.748738   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.248398   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.749056   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.248693   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.748904   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.249145   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.749131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.248774   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.748444   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:04.630344   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.371149915s)
	I0829 20:27:04.630373   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.371188324s)
	I0829 20:27:04.630410   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.371191825s)
	I0829 20:27:04.630432   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0829 20:27:04.630413   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0829 20:27:04.630379   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0829 20:27:04.630465   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.371187188s)
	I0829 20:27:04.630478   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:04.630481   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0829 20:27:04.630561   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:06.684986   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.054398317s)
	I0829 20:27:06.685019   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0829 20:27:06.685047   66841 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:06.685098   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:05.707657   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:07.708034   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:06.965873   68084 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.965904   68084 pod_ready.go:82] duration metric: took 3.51085868s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.965918   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.976464   68084 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.976489   68084 pod_ready.go:82] duration metric: took 10.562771ms for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.976502   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b4ffx" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.982178   68084 pod_ready.go:93] pod "kube-proxy-b4ffx" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.982197   68084 pod_ready.go:82] duration metric: took 5.687889ms for pod "kube-proxy-b4ffx" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.982205   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.987316   68084 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.987333   68084 pod_ready.go:82] duration metric: took 5.122275ms for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.987342   68084 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:08.994794   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:11.493940   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:09.248746   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:09.748722   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.249074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.748647   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.248236   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.749057   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.249227   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.748688   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.749298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.365120   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.679993065s)
	I0829 20:27:10.365150   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0829 20:27:10.365182   66841 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:10.365256   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:12.122371   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.757087653s)
	I0829 20:27:12.122409   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0829 20:27:12.122434   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:12.122564   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:13.575108   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.45251018s)
	I0829 20:27:13.575137   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0829 20:27:13.575165   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:13.575210   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:09.708364   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:11.708491   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:14.207383   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:13.494124   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:15.993564   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:14.249254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:14.748957   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.249229   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.749137   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.248967   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.748254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.248929   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.748339   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.248666   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.748712   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.742286   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.16705417s)
	I0829 20:27:15.742320   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0829 20:27:15.742348   66841 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:15.742398   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:16.391977   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0829 20:27:16.392017   66841 cache_images.go:123] Successfully loaded all cached images
	I0829 20:27:16.392022   66841 cache_images.go:92] duration metric: took 15.060512795s to LoadCachedImages
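The cache_images flow that just completed boils down to a per-image pipeline: podman inspect to see whether the image is already present, crictl rmi to drop a stale tag, a stat to decide whether the cached tarball still has to be copied to the VM, and podman load to import it. A simplified sketch; the real code compares the inspected ID against a pinned hash and runs everything through ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// run stands in for minikube's ssh_runner; here commands execute locally.
func run(args ...string) error { return exec.Command(args[0], args[1:]...).Run() }

func loadCachedImage(image, tarball string) error {
	// "needs transfer" check, simplified: inspect failing means the image
	// is missing (minikube also verifies the ID matches the expected hash).
	if run("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image) == nil {
		return nil // already present
	}
	_ = run("sudo", "/usr/bin/crictl", "rmi", image) // drop any stale copy

	dest := filepath.Join("/var/lib/minikube/images", filepath.Base(tarball))
	if run("stat", dest) != nil {
		// An scp from the host cache would happen here; the log skips it
		// because the tarball already exists on the VM.
		return fmt.Errorf("tarball %s not on VM; transfer required", dest)
	}
	return run("sudo", "podman", "load", "-i", dest)
}

func main() {
	fmt.Println(loadCachedImage("registry.k8s.io/etcd:3.5.15-0",
		"/home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0"))
}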
	I0829 20:27:16.392034   66841 kubeadm.go:934] updating node { 192.168.50.214 8443 v1.31.0 crio true true} ...
	I0829 20:27:16.392139   66841 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-397724 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:27:16.392203   66841 ssh_runner.go:195] Run: crio config
	I0829 20:27:16.445382   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:27:16.445406   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:27:16.445420   66841 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:27:16.445448   66841 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.214 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-397724 NodeName:no-preload-397724 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:27:16.445612   66841 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-397724"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:27:16.445671   66841 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:27:16.456505   66841 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:27:16.456560   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:27:16.467361   66841 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0829 20:27:16.484700   66841 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:27:16.503026   66841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0829 20:27:16.519867   66841 ssh_runner.go:195] Run: grep 192.168.50.214	control-plane.minikube.internal$ /etc/hosts
	I0829 20:27:16.523648   66841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:27:16.535642   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:27:16.671027   66841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:27:16.688692   66841 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724 for IP: 192.168.50.214
	I0829 20:27:16.688712   66841 certs.go:194] generating shared ca certs ...
	I0829 20:27:16.688727   66841 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:27:16.688883   66841 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:27:16.688944   66841 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:27:16.688957   66841 certs.go:256] generating profile certs ...
	I0829 20:27:16.689053   66841 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.key
	I0829 20:27:16.689132   66841 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.key.1f535ae9
	I0829 20:27:16.689182   66841 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.key
	I0829 20:27:16.689360   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:27:16.689400   66841 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:27:16.689415   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:27:16.689450   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:27:16.689504   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:27:16.689540   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:27:16.689596   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:27:16.690277   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:27:16.747582   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:27:16.782064   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:27:16.816382   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:27:16.851548   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 20:27:16.882919   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:27:16.907439   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:27:16.932392   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:27:16.957451   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:27:16.982482   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:27:17.006032   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:27:17.030052   66841 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:27:17.047792   66841 ssh_runner.go:195] Run: openssl version
	I0829 20:27:17.053922   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:27:17.065219   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.069592   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.069647   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.075853   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:27:17.086727   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:27:17.097935   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.102198   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.102252   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.108031   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:27:17.119868   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:27:17.131513   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.136434   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.136497   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.142219   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:27:17.153448   66841 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:27:17.158375   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:27:17.165156   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:27:17.170927   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:27:17.176669   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:27:17.182293   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:27:17.187936   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 20:27:17.193572   66841 kubeadm.go:392] StartCluster: {Name:no-preload-397724 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:27:17.193682   66841 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:27:17.193754   66841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:27:17.238327   66841 cri.go:89] found id: ""
	I0829 20:27:17.238392   66841 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:27:17.248923   66841 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:27:17.248943   66841 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:27:17.248984   66841 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:27:17.263143   66841 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:27:17.264260   66841 kubeconfig.go:125] found "no-preload-397724" server: "https://192.168.50.214:8443"
	I0829 20:27:17.266448   66841 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:27:17.276347   66841 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.214
	I0829 20:27:17.276378   66841 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:27:17.276389   66841 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:27:17.276440   66841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:27:17.311409   66841 cri.go:89] found id: ""
	I0829 20:27:17.311476   66841 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:27:17.329204   66841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:27:17.339063   66841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:27:17.339079   66841 kubeadm.go:157] found existing configuration files:
	
	I0829 20:27:17.339118   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:27:17.348268   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:27:17.348324   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:27:17.357596   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:27:17.366504   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:27:17.366575   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:27:17.376068   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:27:17.385156   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:27:17.385220   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:27:17.394890   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:27:17.404213   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:27:17.404283   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:27:17.413669   66841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:27:17.423307   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:17.536003   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:17.990605   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.217809   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.297100   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.421185   66841 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:27:18.421283   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.922043   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.209618   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:18.707544   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:17.993609   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:19.994469   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:19.248924   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.248851   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.748547   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.248298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.748802   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.248680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.748271   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.248491   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.748803   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.422030   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.442023   66841 api_server.go:72] duration metric: took 1.020839747s to wait for apiserver process to appear ...
	I0829 20:27:19.442047   66841 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:27:19.442070   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.444156   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:27:22.444192   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:27:22.444211   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.466228   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:27:22.466258   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:27:22.942835   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.949338   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:27:22.949360   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:27:23.443069   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:23.447845   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:27:23.447876   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:27:23.942372   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:23.946517   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I0829 20:27:23.953497   66841 api_server.go:141] control plane version: v1.31.0
	I0829 20:27:23.953522   66841 api_server.go:131] duration metric: took 4.511467637s to wait for apiserver health ...
	I0829 20:27:23.953530   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:27:23.953536   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:27:23.955180   66841 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:27:23.956396   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:27:23.969429   66841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 20:27:24.000989   66841 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:27:24.014200   66841 system_pods.go:59] 8 kube-system pods found
	I0829 20:27:24.014233   66841 system_pods.go:61] "coredns-6f6b679f8f-g7xxs" [f0148527-2146-4153-aa20-5ac97b664027] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:27:24.014240   66841 system_pods.go:61] "etcd-no-preload-397724" [f04b5ee4-f439-470a-b298-1a9ed569db70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:27:24.014248   66841 system_pods.go:61] "kube-apiserver-no-preload-397724" [2328f327-1744-4785-9266-3f992b977ef8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:27:24.014254   66841 system_pods.go:61] "kube-controller-manager-no-preload-397724" [0e63f04d-8627-45e9-ac80-70a0fe63f5db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:27:24.014260   66841 system_pods.go:61] "kube-proxy-57kbt" [9f85ce17-85a0-4a52-bdaf-4e3aee4d1a98] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:27:24.014267   66841 system_pods.go:61] "kube-scheduler-no-preload-397724" [106821c6-2444-470a-bac1-78838c0b1982] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:27:24.014273   66841 system_pods.go:61] "metrics-server-6867b74b74-668dg" [e3f3ab24-7777-40b0-a54c-00a294e7e68e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:27:24.014280   66841 system_pods.go:61] "storage-provisioner" [146bd02a-8f50-4d19-a188-4adc2bcc0a43] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:27:24.014288   66841 system_pods.go:74] duration metric: took 13.275941ms to wait for pod list to return data ...
	I0829 20:27:24.014298   66841 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:27:24.018932   66841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:27:24.018956   66841 node_conditions.go:123] node cpu capacity is 2
	I0829 20:27:24.018966   66841 node_conditions.go:105] duration metric: took 4.661993ms to run NodePressure ...
	I0829 20:27:24.018981   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:21.207144   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:23.208728   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:22.493988   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:24.494152   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:24.248456   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:24.748347   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.248337   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.748905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.248912   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.749302   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.249058   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.749105   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.248548   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.748298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:24.305237   66841 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:27:24.310640   66841 kubeadm.go:739] kubelet initialised
	I0829 20:27:24.310666   66841 kubeadm.go:740] duration metric: took 5.402212ms waiting for restarted kubelet to initialise ...
	I0829 20:27:24.310679   66841 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:27:24.316568   66841 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:26.325035   66841 pod_ready.go:103] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:28.336627   66841 pod_ready.go:103] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:25.706496   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:27.708228   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:26.992949   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:28.993682   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:30.993877   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:29.248994   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:29.749020   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.248983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.748247   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:31.249052   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:31.249133   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:31.293442   67607 cri.go:89] found id: ""
	I0829 20:27:31.293466   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.293473   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:31.293479   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:31.293527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:31.333976   67607 cri.go:89] found id: ""
	I0829 20:27:31.333999   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.334006   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:31.334011   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:31.334055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:31.373680   67607 cri.go:89] found id: ""
	I0829 20:27:31.373707   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.373715   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:31.373720   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:31.373766   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:31.407798   67607 cri.go:89] found id: ""
	I0829 20:27:31.407824   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.407832   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:31.407837   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:31.407893   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:31.444409   67607 cri.go:89] found id: ""
	I0829 20:27:31.444437   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.444445   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:31.444451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:31.444512   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:31.479313   67607 cri.go:89] found id: ""
	I0829 20:27:31.479333   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.479341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:31.479347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:31.479403   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:31.516056   67607 cri.go:89] found id: ""
	I0829 20:27:31.516089   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.516100   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:31.516108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:31.516168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:31.555324   67607 cri.go:89] found id: ""
	I0829 20:27:31.555349   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.555357   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:31.555365   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:31.555375   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:31.626397   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:31.626434   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:31.672006   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:31.672038   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:31.724691   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:31.724727   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:31.740283   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:31.740324   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:31.874007   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:29.824509   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:29.824530   66841 pod_ready.go:82] duration metric: took 5.507939145s for pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:29.824547   66841 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:31.833646   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:30.207213   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:32.706352   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:32.993932   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:35.494511   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:34.374203   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:34.387817   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:34.387888   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:34.423254   67607 cri.go:89] found id: ""
	I0829 20:27:34.423279   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.423286   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:34.423296   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:34.423343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:34.457741   67607 cri.go:89] found id: ""
	I0829 20:27:34.457768   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.457775   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:34.457781   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:34.457827   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:34.498432   67607 cri.go:89] found id: ""
	I0829 20:27:34.498457   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.498464   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:34.498469   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:34.498523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:34.534290   67607 cri.go:89] found id: ""
	I0829 20:27:34.534317   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.534324   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:34.534330   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:34.534380   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:34.570878   67607 cri.go:89] found id: ""
	I0829 20:27:34.570909   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.570919   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:34.570928   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:34.570986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:34.615735   67607 cri.go:89] found id: ""
	I0829 20:27:34.615762   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.615769   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:34.615775   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:34.615824   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:34.656667   67607 cri.go:89] found id: ""
	I0829 20:27:34.656706   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.656721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:34.656730   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:34.656779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:34.708906   67607 cri.go:89] found id: ""
	I0829 20:27:34.708928   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.708937   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:34.708947   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:34.708962   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:34.767382   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:34.767417   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:34.786523   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:34.786574   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:34.872832   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:34.872857   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:34.872871   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:34.954581   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:34.954620   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:37.497810   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:37.511479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:37.511539   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:37.547930   67607 cri.go:89] found id: ""
	I0829 20:27:37.547962   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.547972   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:37.547980   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:37.548035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:37.585281   67607 cri.go:89] found id: ""
	I0829 20:27:37.585304   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.585312   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:37.585318   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:37.585365   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:37.622201   67607 cri.go:89] found id: ""
	I0829 20:27:37.622229   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.622241   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:37.622246   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:37.622295   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:37.657248   67607 cri.go:89] found id: ""
	I0829 20:27:37.657274   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.657281   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:37.657289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:37.657335   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:37.691674   67607 cri.go:89] found id: ""
	I0829 20:27:37.691703   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.691711   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:37.691716   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:37.691764   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:37.729523   67607 cri.go:89] found id: ""
	I0829 20:27:37.729548   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.729557   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:37.729562   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:37.729609   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:37.764601   67607 cri.go:89] found id: ""
	I0829 20:27:37.764629   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.764637   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:37.764643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:37.764705   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:37.799228   67607 cri.go:89] found id: ""
	I0829 20:27:37.799259   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.799270   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:37.799281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:37.799301   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:37.848128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:37.848158   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:37.862610   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:37.862640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:37.936859   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:37.936888   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:37.936903   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:38.013647   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:38.013681   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:34.331889   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:36.332334   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.329545   66841 pod_ready.go:93] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.329566   66841 pod_ready.go:82] duration metric: took 7.50501178s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.329576   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.333442   66841 pod_ready.go:93] pod "kube-apiserver-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.333458   66841 pod_ready.go:82] duration metric: took 3.876755ms for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.333467   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.336952   66841 pod_ready.go:93] pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.336968   66841 pod_ready.go:82] duration metric: took 3.49531ms for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.336976   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-57kbt" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.340368   66841 pod_ready.go:93] pod "kube-proxy-57kbt" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.340383   66841 pod_ready.go:82] duration metric: took 3.401844ms for pod "kube-proxy-57kbt" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.340396   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.344111   66841 pod_ready.go:93] pod "kube-scheduler-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.344125   66841 pod_ready.go:82] duration metric: took 3.723924ms for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.344132   66841 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:34.708682   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.206876   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.997827   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:40.494840   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:40.551395   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:40.568100   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:40.568181   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:40.616582   67607 cri.go:89] found id: ""
	I0829 20:27:40.616611   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.616623   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:40.616631   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:40.616695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:40.690580   67607 cri.go:89] found id: ""
	I0829 20:27:40.690620   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.690631   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:40.690638   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:40.690695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:40.733624   67607 cri.go:89] found id: ""
	I0829 20:27:40.733653   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.733662   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:40.733670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:40.733733   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:40.767499   67607 cri.go:89] found id: ""
	I0829 20:27:40.767528   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.767538   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:40.767546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:40.767619   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:40.806973   67607 cri.go:89] found id: ""
	I0829 20:27:40.807002   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.807009   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:40.807015   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:40.807079   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:40.842311   67607 cri.go:89] found id: ""
	I0829 20:27:40.842334   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.842341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:40.842347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:40.842401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:40.880208   67607 cri.go:89] found id: ""
	I0829 20:27:40.880238   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.880248   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:40.880255   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:40.880309   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:40.918395   67607 cri.go:89] found id: ""
	I0829 20:27:40.918424   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.918435   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:40.918445   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:40.918459   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:40.972396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:40.972437   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:40.986136   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:40.986169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:41.064600   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
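
Every `describe nodes` attempt fails the same way: kubectl cannot reach an apiserver on localhost:8443, consistent with the empty kube-apiserver lookups above. A minimal probe that reproduces the symptom, offered as a hypothetical snippet rather than part of the suite:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // With no apiserver listening, this prints "connection refused".
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
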
	I0829 20:27:41.064623   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:41.064634   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:41.146653   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:41.146687   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
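
The container-status line is a shell fallback chain: use crictl if `which` finds it (otherwise still try the bare name), and if that whole invocation fails, fall back to `sudo docker ps -a`. A hypothetical Go equivalent of the same one-liner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
        }
        fmt.Print(string(out))
    }
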
	I0829 20:27:43.687773   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:43.701576   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:43.701645   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:43.737259   67607 cri.go:89] found id: ""
	I0829 20:27:43.737282   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.737289   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:43.737299   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:43.737346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:43.772678   67607 cri.go:89] found id: ""
	I0829 20:27:43.772702   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.772709   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:43.772714   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:43.772776   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:43.806788   67607 cri.go:89] found id: ""
	I0829 20:27:43.806821   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.806831   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:43.806839   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:43.806900   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:39.350484   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:41.352279   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:43.850564   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:39.707977   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:42.207630   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:42.993571   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:44.994696   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
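
The interleaved pod_ready.go lines come from the other clusters in this parallel run (process IDs 66841, 66989 and 68084), each polling a metrics-server pod that never reports Ready. A minimal sketch of one such readiness check with client-go; the kubeconfig path and the podReady helper are assumptions, and only the pod name is taken from the log:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ready, err := podReady(cs, "kube-system", "metrics-server-6867b74b74-668dg")
        fmt.Println("Ready:", ready, "err:", err)
    }
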
	I0829 20:27:43.841738   67607 cri.go:89] found id: ""
	I0829 20:27:43.841759   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.841767   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:43.841772   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:43.841829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:43.878420   67607 cri.go:89] found id: ""
	I0829 20:27:43.878449   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.878459   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:43.878466   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:43.878527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:43.914307   67607 cri.go:89] found id: ""
	I0829 20:27:43.914335   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.914345   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:43.914352   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:43.914413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:43.958827   67607 cri.go:89] found id: ""
	I0829 20:27:43.958853   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.958865   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:43.958871   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:43.958935   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:43.997397   67607 cri.go:89] found id: ""
	I0829 20:27:43.997423   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.997432   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:43.997442   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:43.997455   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:44.049245   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:44.049280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:44.063473   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:44.063511   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:44.131628   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:44.131651   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:44.131666   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:44.210826   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:44.210854   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:46.754905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:46.769531   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:46.769588   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:46.805245   67607 cri.go:89] found id: ""
	I0829 20:27:46.805272   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.805280   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:46.805285   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:46.805338   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:46.843606   67607 cri.go:89] found id: ""
	I0829 20:27:46.843637   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.843646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:46.843654   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:46.843710   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:46.880300   67607 cri.go:89] found id: ""
	I0829 20:27:46.880326   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.880333   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:46.880338   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:46.880387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:46.923537   67607 cri.go:89] found id: ""
	I0829 20:27:46.923562   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.923569   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:46.923574   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:46.923620   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:46.957774   67607 cri.go:89] found id: ""
	I0829 20:27:46.957806   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.957817   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:46.957826   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:46.957887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:46.996972   67607 cri.go:89] found id: ""
	I0829 20:27:46.996995   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.997005   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:46.997013   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:46.997056   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:47.030560   67607 cri.go:89] found id: ""
	I0829 20:27:47.030588   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.030606   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:47.030612   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:47.030665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:47.068654   67607 cri.go:89] found id: ""
	I0829 20:27:47.068678   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.068686   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:47.068694   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:47.068706   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:47.082335   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:47.082367   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:47.162792   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:47.162817   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:47.162829   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:47.241456   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:47.241491   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:47.282249   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:47.282274   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:45.850673   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:47.850836   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:44.707198   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:46.707222   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.207556   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:46.995302   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.498812   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.836268   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:49.850415   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:49.850491   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:49.887816   67607 cri.go:89] found id: ""
	I0829 20:27:49.887843   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.887851   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:49.887856   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:49.887916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:49.923701   67607 cri.go:89] found id: ""
	I0829 20:27:49.923735   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.923745   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:49.923755   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:49.923818   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:49.958197   67607 cri.go:89] found id: ""
	I0829 20:27:49.958225   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.958236   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:49.958244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:49.958313   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:49.995333   67607 cri.go:89] found id: ""
	I0829 20:27:49.995361   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.995373   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:49.995380   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:49.995439   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:50.034345   67607 cri.go:89] found id: ""
	I0829 20:27:50.034375   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.034382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:50.034387   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:50.034438   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:50.070324   67607 cri.go:89] found id: ""
	I0829 20:27:50.070355   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.070365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:50.070374   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:50.070434   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:50.107301   67607 cri.go:89] found id: ""
	I0829 20:27:50.107326   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.107334   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:50.107340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:50.107400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:50.144748   67607 cri.go:89] found id: ""
	I0829 20:27:50.144778   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.144788   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:50.144800   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:50.144816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:50.183576   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:50.183606   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:50.236716   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:50.236750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:50.251589   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:50.251612   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:50.317816   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:50.317840   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:50.317855   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
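
Each failed cycle ends here, and a few seconds later the loop re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` (-f matches against the full command line, -x requires the whole line to match the pattern, -n picks the newest match). A minimal sketch of that kind of wait loop; the function name, deadline, and 3-second cadence are assumptions inferred from the timestamps:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning is true when pgrep finds a match; pgrep exits
    // non-zero when nothing matches.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver is up")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }
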
	I0829 20:27:52.894572   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:52.908081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:52.908149   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:52.945272   67607 cri.go:89] found id: ""
	I0829 20:27:52.945299   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.945309   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:52.945317   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:52.945377   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:52.980237   67607 cri.go:89] found id: ""
	I0829 20:27:52.980262   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.980270   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:52.980275   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:52.980325   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:53.017894   67607 cri.go:89] found id: ""
	I0829 20:27:53.017922   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.017929   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:53.017935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:53.017991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:53.052577   67607 cri.go:89] found id: ""
	I0829 20:27:53.052603   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.052611   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:53.052616   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:53.052667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:53.093414   67607 cri.go:89] found id: ""
	I0829 20:27:53.093444   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.093455   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:53.093462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:53.093523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:53.130794   67607 cri.go:89] found id: ""
	I0829 20:27:53.130825   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.130837   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:53.130845   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:53.130902   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:53.163793   67607 cri.go:89] found id: ""
	I0829 20:27:53.163819   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.163827   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:53.163832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:53.163882   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:53.204824   67607 cri.go:89] found id: ""
	I0829 20:27:53.204852   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.204862   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:53.204872   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:53.204885   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:53.243411   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:53.243440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:53.296611   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:53.296642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:53.310909   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:53.310943   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:53.385768   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:53.385790   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:53.385801   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:49.851712   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:52.350295   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:51.711115   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:54.207340   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:51.993943   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:53.996334   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.494226   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:55.966801   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:55.980852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:55.980933   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:56.017682   67607 cri.go:89] found id: ""
	I0829 20:27:56.017707   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.017716   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:56.017722   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:56.017767   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:56.051556   67607 cri.go:89] found id: ""
	I0829 20:27:56.051584   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.051594   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:56.051600   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:56.051665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:56.095301   67607 cri.go:89] found id: ""
	I0829 20:27:56.095330   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.095340   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:56.095348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:56.095408   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:56.131161   67607 cri.go:89] found id: ""
	I0829 20:27:56.131195   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.131205   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:56.131213   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:56.131269   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:56.166611   67607 cri.go:89] found id: ""
	I0829 20:27:56.166637   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.166645   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:56.166651   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:56.166713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:56.202818   67607 cri.go:89] found id: ""
	I0829 20:27:56.202846   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.202856   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:56.202864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:56.202923   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:56.237855   67607 cri.go:89] found id: ""
	I0829 20:27:56.237883   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.237891   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:56.237897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:56.237955   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:56.272402   67607 cri.go:89] found id: ""
	I0829 20:27:56.272426   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.272433   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:56.272441   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:56.272452   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:56.351628   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:56.351653   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:56.389525   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:56.389559   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:56.444952   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:56.444989   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:56.459731   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:56.459759   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:56.536888   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:54.350358   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.350727   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.352884   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.208050   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.706897   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.993153   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:00.993544   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:59.037744   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:59.051868   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:59.051938   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:59.087436   67607 cri.go:89] found id: ""
	I0829 20:27:59.087461   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.087467   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:59.087474   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:59.087531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:59.123729   67607 cri.go:89] found id: ""
	I0829 20:27:59.123757   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.123765   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:59.123771   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:59.123825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:59.168649   67607 cri.go:89] found id: ""
	I0829 20:27:59.168682   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.168690   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:59.168696   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:59.168753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:59.209770   67607 cri.go:89] found id: ""
	I0829 20:27:59.209791   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.209803   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:59.209808   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:59.209854   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:59.248358   67607 cri.go:89] found id: ""
	I0829 20:27:59.248384   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.248392   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:59.248398   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:59.248445   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:59.281770   67607 cri.go:89] found id: ""
	I0829 20:27:59.281797   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.281805   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:59.281811   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:59.281870   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:59.317255   67607 cri.go:89] found id: ""
	I0829 20:27:59.317285   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.317295   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:59.317302   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:59.317363   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:59.354301   67607 cri.go:89] found id: ""
	I0829 20:27:59.354324   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.354332   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:59.354339   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:59.354352   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:59.438346   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:59.438382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:59.482482   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:59.482513   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:59.540926   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:59.540961   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:59.555221   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:59.555258   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:59.622114   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:02.123276   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:02.137435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:02.137502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:02.176310   67607 cri.go:89] found id: ""
	I0829 20:28:02.176340   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.176347   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:02.176355   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:02.176414   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:02.216511   67607 cri.go:89] found id: ""
	I0829 20:28:02.216555   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.216562   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:02.216574   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:02.216625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:02.260116   67607 cri.go:89] found id: ""
	I0829 20:28:02.260149   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.260158   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:02.260164   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:02.260225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:02.301550   67607 cri.go:89] found id: ""
	I0829 20:28:02.301584   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.301600   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:02.301608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:02.301692   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:02.335916   67607 cri.go:89] found id: ""
	I0829 20:28:02.335948   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.335959   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:02.335967   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:02.336033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:02.372479   67607 cri.go:89] found id: ""
	I0829 20:28:02.372507   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.372515   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:02.372522   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:02.372584   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:02.406683   67607 cri.go:89] found id: ""
	I0829 20:28:02.406713   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.406721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:02.406727   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:02.406774   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:02.443130   67607 cri.go:89] found id: ""
	I0829 20:28:02.443156   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.443164   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:02.443173   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:02.443185   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:02.485747   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:02.485777   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:02.540106   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:02.540143   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:02.556158   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:02.556188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:02.637870   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:02.637900   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:02.637915   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:00.851416   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:03.351248   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:00.707716   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:02.708204   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:02.994108   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:04.994988   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:05.220330   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:05.233932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:05.233994   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:05.269046   67607 cri.go:89] found id: ""
	I0829 20:28:05.269072   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.269081   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:05.269087   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:05.269134   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:05.303963   67607 cri.go:89] found id: ""
	I0829 20:28:05.303989   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.303999   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:05.304006   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:05.304065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:05.340943   67607 cri.go:89] found id: ""
	I0829 20:28:05.340975   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.340985   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:05.340992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:05.341061   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:05.379551   67607 cri.go:89] found id: ""
	I0829 20:28:05.379582   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.379593   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:05.379601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:05.379659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:05.414229   67607 cri.go:89] found id: ""
	I0829 20:28:05.414256   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.414267   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:05.414274   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:05.414339   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:05.450212   67607 cri.go:89] found id: ""
	I0829 20:28:05.450241   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.450251   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:05.450258   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:05.450318   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:05.487415   67607 cri.go:89] found id: ""
	I0829 20:28:05.487451   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.487463   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:05.487470   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:05.487529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:05.521347   67607 cri.go:89] found id: ""
	I0829 20:28:05.521370   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.521383   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:05.521390   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:05.521402   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:05.572317   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:05.572350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:05.585651   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:05.585680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:05.653929   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:05.653950   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:05.653969   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:05.732843   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:05.732873   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:08.281983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:08.295104   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:08.295166   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:08.328570   67607 cri.go:89] found id: ""
	I0829 20:28:08.328596   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.328605   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:08.328613   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:08.328684   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:08.363567   67607 cri.go:89] found id: ""
	I0829 20:28:08.363595   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.363605   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:08.363613   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:08.363672   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:08.399619   67607 cri.go:89] found id: ""
	I0829 20:28:08.399645   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.399653   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:08.399659   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:08.399707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:08.439252   67607 cri.go:89] found id: ""
	I0829 20:28:08.439283   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.439294   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:08.439301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:08.439357   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:08.477730   67607 cri.go:89] found id: ""
	I0829 20:28:08.477754   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.477762   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:08.477768   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:08.477834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:08.522045   67607 cri.go:89] found id: ""
	I0829 20:28:08.522066   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.522073   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:08.522079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:08.522137   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:08.560400   67607 cri.go:89] found id: ""
	I0829 20:28:08.560427   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.560434   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:08.560441   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:08.560504   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:08.599111   67607 cri.go:89] found id: ""
	I0829 20:28:08.599140   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.599150   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:08.599161   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:08.599175   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:08.681451   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:08.681487   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:08.722800   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:08.722835   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:08.779058   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:08.779089   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:08.796940   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:08.796963   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:28:05.852245   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:08.351402   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:04.708669   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:07.207124   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:07.493431   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:09.493794   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	W0829 20:28:08.868296   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:11.369316   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:11.384150   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:11.384225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:11.418452   67607 cri.go:89] found id: ""
	I0829 20:28:11.418480   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.418488   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:11.418494   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:11.418555   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:11.451359   67607 cri.go:89] found id: ""
	I0829 20:28:11.451389   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.451400   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:11.451408   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:11.451481   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:11.488408   67607 cri.go:89] found id: ""
	I0829 20:28:11.488436   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.488446   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:11.488453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:11.488510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:11.528311   67607 cri.go:89] found id: ""
	I0829 20:28:11.528340   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.528351   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:11.528359   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:11.528412   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:11.571345   67607 cri.go:89] found id: ""
	I0829 20:28:11.571372   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.571382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:11.571389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:11.571454   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:11.606812   67607 cri.go:89] found id: ""
	I0829 20:28:11.606839   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.606850   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:11.606857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:11.606918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:11.652687   67607 cri.go:89] found id: ""
	I0829 20:28:11.652710   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.652717   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:11.652722   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:11.652781   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:11.687583   67607 cri.go:89] found id: ""
	I0829 20:28:11.687628   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.687645   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:11.687655   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:11.687673   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:11.727052   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:11.727086   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:11.779116   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:11.779155   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:11.792911   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:11.792949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:11.868415   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:11.868443   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:11.868461   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
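The cycle above is minikube's log collector probing for each expected control-plane container by name; every crictl query returns an empty ID list, so a "No container was found matching ..." warning is logged per component before the collector falls back to host-level logs. A minimal bash sketch of the same probe, assuming crictl is installed and configured against the node's CRI-O socket:

    #!/usr/bin/env bash
    # Probe for each control-plane container the way the collector does:
    # `crictl ps -a --quiet --name=<name>` prints matching container IDs only.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done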
	I0829 20:28:10.850225   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:13.351638   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:09.707347   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:11.709556   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.206996   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:11.994187   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.494457   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
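Interleaved with the collector output, three other test processes (PIDs 66841, 66989 and 68084) are polling their metrics-server pods, and pod_ready.go keeps reporting "Ready":"False". A rough bash equivalent of that readiness poll, not minikube's actual Go implementation; the context name and the k8s-app=metrics-server label selector are illustrative assumptions:

    # Poll the Ready condition of a metrics-server pod every two seconds.
    # "my-context" and the label selector are placeholders.
    while true; do
      status=$(kubectl --context my-context -n kube-system get pods \
        -l k8s-app=metrics-server \
        -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}')
      echo "Ready: ${status:-unknown}"
      [ "$status" = "True" ] && break
      sleep 2
    done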
	I0829 20:28:14.447886   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:14.462144   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:14.462221   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:14.499160   67607 cri.go:89] found id: ""
	I0829 20:28:14.499185   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.499193   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:14.499200   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:14.499258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:14.545736   67607 cri.go:89] found id: ""
	I0829 20:28:14.545764   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.545774   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:14.545780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:14.545844   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:14.583626   67607 cri.go:89] found id: ""
	I0829 20:28:14.583664   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.583674   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:14.583682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:14.583744   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:14.619876   67607 cri.go:89] found id: ""
	I0829 20:28:14.619909   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.619917   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:14.619923   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:14.619975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:14.655750   67607 cri.go:89] found id: ""
	I0829 20:28:14.655778   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.655786   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:14.655791   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:14.655848   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:14.690759   67607 cri.go:89] found id: ""
	I0829 20:28:14.690785   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.690795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:14.690800   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:14.690850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:14.727238   67607 cri.go:89] found id: ""
	I0829 20:28:14.727269   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.727282   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:14.727289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:14.727344   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:14.765962   67607 cri.go:89] found id: ""
	I0829 20:28:14.765996   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.766006   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:14.766017   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:14.766033   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:14.835749   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:14.835779   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:14.835797   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:14.914075   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:14.914112   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:14.952684   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:14.952712   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:15.004598   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:15.004635   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
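The "container status" gather above uses a two-stage fallback: resolve crictl to its full path with `which` (keeping the bare name if `which` finds nothing), and if the crictl invocation itself fails, fall back to the Docker CLI. Written out step by step:

    # Same fallback chain as the collector's container-status command.
    CRICTL=$(which crictl || echo crictl)   # full path if found, bare name otherwise
    sudo "$CRICTL" ps -a || sudo docker ps -a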
	I0829 20:28:17.518949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:17.532175   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:17.532250   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:17.569943   67607 cri.go:89] found id: ""
	I0829 20:28:17.569971   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.569979   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:17.569985   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:17.570044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:17.605472   67607 cri.go:89] found id: ""
	I0829 20:28:17.605502   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.605510   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:17.605515   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:17.605566   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:17.641568   67607 cri.go:89] found id: ""
	I0829 20:28:17.641593   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.641603   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:17.641610   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:17.641669   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:17.680870   67607 cri.go:89] found id: ""
	I0829 20:28:17.680895   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.680905   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:17.680916   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:17.680981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:17.723546   67607 cri.go:89] found id: ""
	I0829 20:28:17.723576   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.723587   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:17.723594   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:17.723659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:17.757934   67607 cri.go:89] found id: ""
	I0829 20:28:17.757962   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.757973   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:17.757980   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:17.758028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:17.792641   67607 cri.go:89] found id: ""
	I0829 20:28:17.792670   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.792679   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:17.792685   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:17.792738   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:17.830776   67607 cri.go:89] found id: ""
	I0829 20:28:17.830800   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.830807   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:17.830815   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:17.830825   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:17.886331   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:17.886377   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:17.900111   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:17.900135   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:17.969538   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:17.969563   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:17.969577   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:18.050609   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:18.050649   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
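Every "describe nodes" attempt fails the same way: kubectl is pointed at localhost:8443, but the crictl probes show no kube-apiserver container exists, so nothing is listening there and the connection is refused. Each retry cycle also opens with a pgrep check for an apiserver process before re-listing containers. A quick manual check along the same lines (8443 is the port from the log; the curl probe is an illustrative addition, not part of the collector):

    # No apiserver process and nothing serving on 8443 explains the refusal.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    curl -ks https://localhost:8443/healthz || echo "connection to localhost:8443 refused"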
	I0829 20:28:15.850497   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:17.851663   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:16.707415   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:19.207313   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:16.994325   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:19.494247   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:20.590686   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:20.605066   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:20.605121   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:20.646028   67607 cri.go:89] found id: ""
	I0829 20:28:20.646058   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.646074   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:20.646082   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:20.646143   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:20.683433   67607 cri.go:89] found id: ""
	I0829 20:28:20.683469   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.683479   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:20.683487   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:20.683567   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:20.722737   67607 cri.go:89] found id: ""
	I0829 20:28:20.722765   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.722775   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:20.722782   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:20.722841   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:20.759777   67607 cri.go:89] found id: ""
	I0829 20:28:20.759800   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.759807   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:20.759812   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:20.759864   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:20.799142   67607 cri.go:89] found id: ""
	I0829 20:28:20.799164   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.799170   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:20.799176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:20.799223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:20.838331   67607 cri.go:89] found id: ""
	I0829 20:28:20.838357   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.838365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:20.838371   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:20.838427   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:20.878066   67607 cri.go:89] found id: ""
	I0829 20:28:20.878099   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.878110   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:20.878117   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:20.878175   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:20.928940   67607 cri.go:89] found id: ""
	I0829 20:28:20.928966   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.928975   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:20.928982   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:20.928993   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:20.984435   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:20.984471   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:21.005860   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:21.005900   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:21.084092   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:21.084123   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:21.084138   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:21.165971   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:21.166009   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:23.705033   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:23.718332   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:23.718390   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:23.753594   67607 cri.go:89] found id: ""
	I0829 20:28:23.753625   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.753635   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:23.753650   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:23.753715   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:23.791840   67607 cri.go:89] found id: ""
	I0829 20:28:23.791864   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.791872   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:23.791878   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:23.791930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:20.350028   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:22.350487   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:21.207839   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.707197   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:21.993965   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.994879   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.493735   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.837815   67607 cri.go:89] found id: ""
	I0829 20:28:23.837839   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.837846   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:23.837851   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:23.837908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:23.873155   67607 cri.go:89] found id: ""
	I0829 20:28:23.873184   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.873194   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:23.873201   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:23.873265   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:23.908728   67607 cri.go:89] found id: ""
	I0829 20:28:23.908757   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.908768   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:23.908774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:23.908834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:23.946286   67607 cri.go:89] found id: ""
	I0829 20:28:23.946310   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.946320   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:23.946328   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:23.946392   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:23.983078   67607 cri.go:89] found id: ""
	I0829 20:28:23.983105   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.983115   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:23.983129   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:23.983190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:24.020601   67607 cri.go:89] found id: ""
	I0829 20:28:24.020634   67607 logs.go:276] 0 containers: []
	W0829 20:28:24.020644   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:24.020654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:24.020669   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:24.034438   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:24.034463   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:24.103209   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:24.103230   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:24.103243   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:24.182977   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:24.183016   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:24.224743   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:24.224834   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:26.781507   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:26.794301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:26.794387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:26.827218   67607 cri.go:89] found id: ""
	I0829 20:28:26.827243   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.827250   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:26.827257   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:26.827303   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:26.862643   67607 cri.go:89] found id: ""
	I0829 20:28:26.862673   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.862685   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:26.862693   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:26.862743   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:26.898127   67607 cri.go:89] found id: ""
	I0829 20:28:26.898159   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.898169   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:26.898177   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:26.898237   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:26.932119   67607 cri.go:89] found id: ""
	I0829 20:28:26.932146   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.932167   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:26.932174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:26.932241   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:26.966380   67607 cri.go:89] found id: ""
	I0829 20:28:26.966413   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.966421   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:26.966427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:26.966478   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:27.004350   67607 cri.go:89] found id: ""
	I0829 20:28:27.004372   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.004379   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:27.004386   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:27.004436   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:27.041171   67607 cri.go:89] found id: ""
	I0829 20:28:27.041199   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.041206   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:27.041212   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:27.041257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:27.073993   67607 cri.go:89] found id: ""
	I0829 20:28:27.074031   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.074041   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:27.074053   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:27.074066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:27.148169   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:27.148199   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:27.148214   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:27.227174   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:27.227212   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:27.267180   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:27.267230   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:27.319034   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:27.319066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
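The kubelet and dmesg gathers are both capped at the last 400 lines; the dmesg flags keep only warning-and-above kernel messages, rendered human-readably without color or a pager. The same commands, annotated:

    # -P: no pager, -H: human-readable timestamps, -L=never: no color,
    # --level: keep only the listed severities.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Last 400 lines of the kubelet unit's journal.
    sudo journalctl -u kubelet -n 400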
	I0829 20:28:24.350754   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.850582   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.207974   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:28.707820   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:28.494090   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:30.994157   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:29.833497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:29.846883   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:29.846951   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:29.884133   67607 cri.go:89] found id: ""
	I0829 20:28:29.884163   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.884175   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:29.884182   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:29.884247   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:29.917594   67607 cri.go:89] found id: ""
	I0829 20:28:29.917618   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.917628   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:29.917636   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:29.917696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:29.952537   67607 cri.go:89] found id: ""
	I0829 20:28:29.952568   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.952576   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:29.952582   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:29.952630   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:29.988410   67607 cri.go:89] found id: ""
	I0829 20:28:29.988441   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.988448   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:29.988454   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:29.988511   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:30.026761   67607 cri.go:89] found id: ""
	I0829 20:28:30.026788   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.026796   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:30.026802   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:30.026861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:30.063010   67607 cri.go:89] found id: ""
	I0829 20:28:30.063037   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.063046   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:30.063054   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:30.063109   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:30.098067   67607 cri.go:89] found id: ""
	I0829 20:28:30.098093   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.098101   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:30.098107   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:30.098161   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:30.132887   67607 cri.go:89] found id: ""
	I0829 20:28:30.132914   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.132921   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:30.132928   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:30.132940   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:30.184955   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:30.184990   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:30.198966   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:30.199004   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:30.268950   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:30.268977   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:30.268991   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:30.354222   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:30.354260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:32.896554   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:32.911188   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:32.911271   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:32.945726   67607 cri.go:89] found id: ""
	I0829 20:28:32.945750   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.945758   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:32.945773   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:32.945829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:32.980234   67607 cri.go:89] found id: ""
	I0829 20:28:32.980267   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.980275   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:32.980281   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:32.980329   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:33.019031   67607 cri.go:89] found id: ""
	I0829 20:28:33.019063   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.019071   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:33.019076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:33.019126   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:33.056290   67607 cri.go:89] found id: ""
	I0829 20:28:33.056314   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.056322   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:33.056327   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:33.056391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:33.090038   67607 cri.go:89] found id: ""
	I0829 20:28:33.090068   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.090078   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:33.090086   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:33.090152   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:33.125742   67607 cri.go:89] found id: ""
	I0829 20:28:33.125774   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.125782   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:33.125787   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:33.125849   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:33.159019   67607 cri.go:89] found id: ""
	I0829 20:28:33.159047   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.159058   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:33.159065   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:33.159125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:33.197900   67607 cri.go:89] found id: ""
	I0829 20:28:33.197925   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.197933   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:33.197941   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:33.197955   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:33.250010   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:33.250040   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:33.263348   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:33.263374   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:33.342037   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:33.342065   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:33.342082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:33.423324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:33.423361   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:29.350275   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:31.350994   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:33.850866   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:30.713472   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:33.207271   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:32.995169   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.493980   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.963734   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:35.978648   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:35.978713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:36.015326   67607 cri.go:89] found id: ""
	I0829 20:28:36.015350   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.015358   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:36.015364   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:36.015411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:36.050840   67607 cri.go:89] found id: ""
	I0829 20:28:36.050869   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.050879   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:36.050886   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:36.050947   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:36.084048   67607 cri.go:89] found id: ""
	I0829 20:28:36.084076   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.084084   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:36.084090   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:36.084138   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:36.118655   67607 cri.go:89] found id: ""
	I0829 20:28:36.118682   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.118693   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:36.118702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:36.118762   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:36.153879   67607 cri.go:89] found id: ""
	I0829 20:28:36.153908   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.153918   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:36.153926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:36.153988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:36.199834   67607 cri.go:89] found id: ""
	I0829 20:28:36.199858   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.199866   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:36.199872   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:36.199927   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:36.238098   67607 cri.go:89] found id: ""
	I0829 20:28:36.238129   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.238139   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:36.238146   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:36.238208   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:36.272091   67607 cri.go:89] found id: ""
	I0829 20:28:36.272124   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.272135   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:36.272146   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:36.272162   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:36.338478   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:36.338498   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:36.338510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:36.418637   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:36.418671   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:36.458167   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:36.458194   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:36.508592   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:36.508630   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:36.351066   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:38.849684   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.706813   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:37.708058   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:38.003178   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:40.493065   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:39.022668   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:39.035897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:39.035971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:39.071155   67607 cri.go:89] found id: ""
	I0829 20:28:39.071185   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.071196   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:39.071203   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:39.071258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:39.104135   67607 cri.go:89] found id: ""
	I0829 20:28:39.104177   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.104188   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:39.104206   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:39.104266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:39.138301   67607 cri.go:89] found id: ""
	I0829 20:28:39.138329   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.138339   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:39.138346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:39.138404   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:39.172674   67607 cri.go:89] found id: ""
	I0829 20:28:39.172700   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.172708   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:39.172719   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:39.172779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:39.209810   67607 cri.go:89] found id: ""
	I0829 20:28:39.209836   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.209845   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:39.209852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:39.209915   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:39.248692   67607 cri.go:89] found id: ""
	I0829 20:28:39.248715   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.248722   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:39.248728   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:39.248798   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:39.284303   67607 cri.go:89] found id: ""
	I0829 20:28:39.284333   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.284343   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:39.284351   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:39.284401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:39.321346   67607 cri.go:89] found id: ""
	I0829 20:28:39.321375   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.321386   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:39.321396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:39.321410   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:39.334678   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:39.334710   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:39.421992   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:39.422014   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:39.422027   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:39.503250   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:39.503280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:39.540623   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:39.540654   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.092131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:42.105440   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:42.105498   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:42.140994   67607 cri.go:89] found id: ""
	I0829 20:28:42.141024   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.141034   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:42.141042   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:42.141102   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:42.175182   67607 cri.go:89] found id: ""
	I0829 20:28:42.175217   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.175228   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:42.175248   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:42.175319   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:42.209251   67607 cri.go:89] found id: ""
	I0829 20:28:42.209281   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.209291   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:42.209299   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:42.209362   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:42.247944   67607 cri.go:89] found id: ""
	I0829 20:28:42.247970   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.247977   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:42.247983   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:42.248028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:42.285613   67607 cri.go:89] found id: ""
	I0829 20:28:42.285644   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.285651   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:42.285657   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:42.285722   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:42.319826   67607 cri.go:89] found id: ""
	I0829 20:28:42.319851   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.319858   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:42.319864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:42.319928   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:42.357150   67607 cri.go:89] found id: ""
	I0829 20:28:42.357173   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.357182   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:42.357189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:42.357243   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:42.392150   67607 cri.go:89] found id: ""
	I0829 20:28:42.392170   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.392178   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:42.392185   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:42.392197   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:42.469240   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:42.469271   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:42.469286   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:42.549165   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:42.549198   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:42.591900   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:42.591930   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.642593   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:42.642625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
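
The cycle above repeats for the rest of this test: with the apiserver down, minikube probes CRI-O for each expected control-plane container by name, and every probe returns an empty ID list. A minimal shell sketch of that probe loop, built only from the crictl invocation shown in the Run: lines (the component list is copied from the log; it would be run on the minikube guest):

    #!/bin/bash
    # Probe CRI-O for each control-plane container the log checks for.
    # `crictl ps -a --quiet --name=X` prints matching container IDs;
    # empty output is what the log reports as `found id: ""`.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done
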
	I0829 20:28:40.851544   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:43.350420   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:39.708341   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:42.206888   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:44.207934   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:42.494791   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:44.992992   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:45.157092   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:45.170832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:45.170916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:45.207210   67607 cri.go:89] found id: ""
	I0829 20:28:45.207235   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.207244   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:45.207251   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:45.207308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:45.245321   67607 cri.go:89] found id: ""
	I0829 20:28:45.245352   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.245362   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:45.245379   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:45.245448   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:45.280326   67607 cri.go:89] found id: ""
	I0829 20:28:45.280369   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.280381   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:45.280389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:45.280451   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:45.318294   67607 cri.go:89] found id: ""
	I0829 20:28:45.318322   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.318333   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:45.318340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:45.318411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:45.352903   67607 cri.go:89] found id: ""
	I0829 20:28:45.352925   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.352932   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:45.352938   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:45.352990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:45.389251   67607 cri.go:89] found id: ""
	I0829 20:28:45.389273   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.389280   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:45.389286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:45.389340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:45.424348   67607 cri.go:89] found id: ""
	I0829 20:28:45.424385   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.424397   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:45.424404   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:45.424453   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:45.459058   67607 cri.go:89] found id: ""
	I0829 20:28:45.459087   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.459098   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:45.459109   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:45.459124   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:45.510386   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:45.510423   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:45.524896   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:45.524923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:45.593987   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:45.594064   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:45.594082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:45.668738   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:45.668771   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
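
Every "describe nodes" attempt in these cycles fails the same way: the bundled kubectl targets localhost:8443 and the connection is refused, meaning nothing is listening where the apiserver should be. A quick way to confirm that from the guest, independent of kubectl (the curl invocation is an assumption, not from the log; -k skips verification of the cluster's self-signed certificate):

    #!/bin/bash
    # Check the endpoint the failing kubectl call targets.
    curl -sk https://localhost:8443/healthz \
      || echo "nothing listening on localhost:8443"
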
	I0829 20:28:48.206497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:48.219625   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:48.219696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:48.254936   67607 cri.go:89] found id: ""
	I0829 20:28:48.254959   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.254966   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:48.254971   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:48.255018   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:48.290826   67607 cri.go:89] found id: ""
	I0829 20:28:48.290851   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.290859   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:48.290864   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:48.290910   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:48.327508   67607 cri.go:89] found id: ""
	I0829 20:28:48.327533   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.327540   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:48.327546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:48.327593   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:48.364492   67607 cri.go:89] found id: ""
	I0829 20:28:48.364517   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.364525   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:48.364530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:48.364580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:48.400035   67607 cri.go:89] found id: ""
	I0829 20:28:48.400062   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.400072   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:48.400079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:48.400144   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:48.433999   67607 cri.go:89] found id: ""
	I0829 20:28:48.434026   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.434035   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:48.434043   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:48.434104   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:48.468841   67607 cri.go:89] found id: ""
	I0829 20:28:48.468873   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.468889   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:48.468903   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:48.468971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:48.506557   67607 cri.go:89] found id: ""
	I0829 20:28:48.506589   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.506598   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:48.506609   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:48.506624   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:48.577023   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:48.577044   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:48.577056   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:48.654372   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:48.654407   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:48.691125   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:48.691152   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:48.746383   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:48.746414   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
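
Each gathering pass collects the same five sources, in varying order: the kubelet and CRI-O units from journald, the kernel ring buffer, node descriptions via the bundled kubectl (which keeps failing while :8443 is down), and container status from crictl with a docker fallback. Consolidated below, with the commands copied from the Run: lines above (the kubectl path exists only inside the minikube guest):

    #!/bin/bash
    # The five log sources minikube gathers on every cycle.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig   # refused while the apiserver is down
    sudo journalctl -u crio -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
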
	I0829 20:28:45.350581   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:47.351437   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:46.705575   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:48.707018   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:46.993532   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:48.994284   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.494177   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.260591   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:51.273911   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:51.273974   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:51.311517   67607 cri.go:89] found id: ""
	I0829 20:28:51.311545   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.311553   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:51.311567   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:51.311616   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:51.348220   67607 cri.go:89] found id: ""
	I0829 20:28:51.348247   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.348256   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:51.348264   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:51.348321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:51.383560   67607 cri.go:89] found id: ""
	I0829 20:28:51.383599   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.383611   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:51.383619   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:51.383680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:51.419241   67607 cri.go:89] found id: ""
	I0829 20:28:51.419268   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.419278   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:51.419286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:51.419343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:51.453954   67607 cri.go:89] found id: ""
	I0829 20:28:51.453979   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.453986   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:51.453992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:51.454047   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:51.489457   67607 cri.go:89] found id: ""
	I0829 20:28:51.489480   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.489488   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:51.489493   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:51.489544   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:51.524072   67607 cri.go:89] found id: ""
	I0829 20:28:51.524100   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.524107   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:51.524113   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:51.524160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:51.561238   67607 cri.go:89] found id: ""
	I0829 20:28:51.561263   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.561271   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:51.561279   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:51.561290   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:51.615422   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:51.615462   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:51.632180   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:51.632216   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:51.704335   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:51.704363   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:51.704378   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:51.794219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:51.794260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:49.852140   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:52.351142   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.205903   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:53.207651   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:53.495412   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:55.993489   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:54.342556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:54.356325   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:54.356400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:54.390928   67607 cri.go:89] found id: ""
	I0829 20:28:54.390952   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.390959   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:54.390965   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:54.391011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:54.426970   67607 cri.go:89] found id: ""
	I0829 20:28:54.427002   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.427013   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:54.427020   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:54.427074   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:54.464121   67607 cri.go:89] found id: ""
	I0829 20:28:54.464155   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.464166   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:54.464174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:54.464236   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:54.499790   67607 cri.go:89] found id: ""
	I0829 20:28:54.499816   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.499827   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:54.499840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:54.499889   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:54.537212   67607 cri.go:89] found id: ""
	I0829 20:28:54.537239   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.537249   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:54.537256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:54.537314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:54.575370   67607 cri.go:89] found id: ""
	I0829 20:28:54.575399   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.575410   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:54.575417   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:54.575469   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:54.608403   67607 cri.go:89] found id: ""
	I0829 20:28:54.608432   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.608443   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:54.608453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:54.608514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:54.645259   67607 cri.go:89] found id: ""
	I0829 20:28:54.645285   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.645292   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:54.645300   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:54.645311   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:54.697022   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:54.697063   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:54.712873   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:54.712914   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:54.814253   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:54.814278   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:54.814295   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:54.896473   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:54.896507   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:57.441648   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:57.455245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:57.455321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:57.495365   67607 cri.go:89] found id: ""
	I0829 20:28:57.495397   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.495405   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:57.495411   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:57.495472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:57.529555   67607 cri.go:89] found id: ""
	I0829 20:28:57.529582   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.529590   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:57.529597   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:57.529667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:57.564168   67607 cri.go:89] found id: ""
	I0829 20:28:57.564196   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.564208   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:57.564215   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:57.564277   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:57.602057   67607 cri.go:89] found id: ""
	I0829 20:28:57.602089   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.602100   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:57.602108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:57.602194   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:57.638195   67607 cri.go:89] found id: ""
	I0829 20:28:57.638226   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.638235   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:57.638244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:57.638307   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:57.674556   67607 cri.go:89] found id: ""
	I0829 20:28:57.674605   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.674615   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:57.674623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:57.674680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:57.709256   67607 cri.go:89] found id: ""
	I0829 20:28:57.709282   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.709291   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:57.709298   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:57.709358   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:57.743629   67607 cri.go:89] found id: ""
	I0829 20:28:57.743652   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.743659   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:57.743668   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:57.743679   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:57.789067   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:57.789098   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:57.843372   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:57.843403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:57.858630   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:57.858661   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:57.927776   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:57.927798   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:57.927814   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:54.850906   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:56.851300   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:55.208638   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:57.707756   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:57.994287   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.493343   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.508180   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:00.521451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:00.521529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:00.557912   67607 cri.go:89] found id: ""
	I0829 20:29:00.557938   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.557945   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:00.557951   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:00.557997   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:00.595186   67607 cri.go:89] found id: ""
	I0829 20:29:00.595215   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.595226   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:00.595237   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:00.595299   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:00.631553   67607 cri.go:89] found id: ""
	I0829 20:29:00.631581   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.631592   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:00.631600   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:00.631660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:00.666502   67607 cri.go:89] found id: ""
	I0829 20:29:00.666525   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.666551   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:00.666560   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:00.666621   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:00.700797   67607 cri.go:89] found id: ""
	I0829 20:29:00.700824   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.700835   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:00.700842   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:00.700908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:00.739957   67607 cri.go:89] found id: ""
	I0829 20:29:00.739976   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.739989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:00.739994   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:00.740035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:00.800704   67607 cri.go:89] found id: ""
	I0829 20:29:00.800740   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.800750   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:00.800757   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:00.800820   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:00.837678   67607 cri.go:89] found id: ""
	I0829 20:29:00.837704   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.837712   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:00.837720   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:00.837731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:00.888359   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:00.888391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:00.903074   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:00.903103   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:00.964865   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:00.964885   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:00.964898   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:01.049351   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:01.049387   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
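
Every cycle opens with the pgrep probe seen above: before querying CRI-O, minikube checks whether any kube-apiserver process exists at all (-x exact match, -n newest, -f match against the full command line). A sketch of the same probe turned into a bounded wait; the 60-second deadline is illustrative, the test itself retries for far longer:

    #!/bin/bash
    # Wait for an apiserver process using the log's pgrep pattern.
    deadline=$((SECONDS + 60))   # illustrative timeout, not from the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if (( SECONDS >= deadline )); then
        echo "kube-apiserver never appeared" >&2
        exit 1
      fi
      sleep 3
    done
    echo "apiserver pid: $(sudo pgrep -xnf 'kube-apiserver.*minikube.*')"
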
	I0829 20:29:03.589829   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:03.603120   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:03.603192   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:03.637647   67607 cri.go:89] found id: ""
	I0829 20:29:03.637672   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.637678   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:03.637684   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:03.637732   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:03.673807   67607 cri.go:89] found id: ""
	I0829 20:29:03.673842   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.673852   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:03.673860   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:03.673918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:03.709490   67607 cri.go:89] found id: ""
	I0829 20:29:03.709516   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.709527   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:03.709533   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:03.709595   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:03.751662   67607 cri.go:89] found id: ""
	I0829 20:29:03.751688   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.751696   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:03.751702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:03.751751   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:03.787861   67607 cri.go:89] found id: ""
	I0829 20:29:03.787896   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.787908   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:03.787917   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:03.787977   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:59.350888   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:01.850615   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:03.851438   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.207912   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:02.707309   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:02.493506   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:04.494305   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:03.824383   67607 cri.go:89] found id: ""
	I0829 20:29:03.824413   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.824431   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:03.824438   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:03.824499   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:03.863904   67607 cri.go:89] found id: ""
	I0829 20:29:03.863929   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.863937   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:03.863943   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:03.863990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:03.902336   67607 cri.go:89] found id: ""
	I0829 20:29:03.902360   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.902368   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:03.902375   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:03.902386   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:03.951468   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:03.951499   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:03.965789   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:03.965816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:04.035096   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:04.035119   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:04.035193   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:04.115842   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:04.115876   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:06.662652   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:06.676508   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:06.676583   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:06.713058   67607 cri.go:89] found id: ""
	I0829 20:29:06.713084   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.713093   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:06.713101   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:06.713171   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:06.747513   67607 cri.go:89] found id: ""
	I0829 20:29:06.747544   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.747552   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:06.747557   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:06.747617   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:06.782662   67607 cri.go:89] found id: ""
	I0829 20:29:06.782689   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.782695   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:06.782701   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:06.782758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:06.818472   67607 cri.go:89] found id: ""
	I0829 20:29:06.818500   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.818510   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:06.818516   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:06.818586   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:06.852928   67607 cri.go:89] found id: ""
	I0829 20:29:06.852954   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.852964   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:06.852974   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:06.853032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:06.893859   67607 cri.go:89] found id: ""
	I0829 20:29:06.893889   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.893899   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:06.893907   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:06.893969   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:06.931552   67607 cri.go:89] found id: ""
	I0829 20:29:06.931584   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.931594   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:06.931601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:06.931662   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:06.967210   67607 cri.go:89] found id: ""
	I0829 20:29:06.967243   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.967254   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:06.967266   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:06.967279   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:07.020595   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:07.020631   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:07.034738   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:07.034764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:07.103726   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:07.103747   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:07.103760   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:07.184727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:07.184764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:06.350610   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:08.351571   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:05.207055   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:07.207650   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:06.994653   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.493932   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.746639   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:09.761228   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:09.761308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:09.802071   67607 cri.go:89] found id: ""
	I0829 20:29:09.802102   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.802113   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:09.802122   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:09.802180   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:09.837352   67607 cri.go:89] found id: ""
	I0829 20:29:09.837385   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.837395   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:09.837402   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:09.837464   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:09.874951   67607 cri.go:89] found id: ""
	I0829 20:29:09.874980   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.874992   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:09.874999   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:09.875055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:09.909660   67607 cri.go:89] found id: ""
	I0829 20:29:09.909696   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.909706   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:09.909713   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:09.909777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:09.949727   67607 cri.go:89] found id: ""
	I0829 20:29:09.949751   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.949759   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:09.949765   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:09.949825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:09.984576   67607 cri.go:89] found id: ""
	I0829 20:29:09.984609   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.984617   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:09.984623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:09.984675   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:10.022499   67607 cri.go:89] found id: ""
	I0829 20:29:10.022523   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.022530   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:10.022553   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:10.022624   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:10.064308   67607 cri.go:89] found id: ""
	I0829 20:29:10.064346   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.064356   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:10.064367   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:10.064382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:10.113505   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:10.113537   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:10.127614   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:10.127640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:10.200558   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:10.200579   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:10.200592   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:10.292984   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:10.293020   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:12.833100   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:12.846645   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:12.846712   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:12.885396   67607 cri.go:89] found id: ""
	I0829 20:29:12.885423   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.885430   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:12.885436   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:12.885486   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:12.922556   67607 cri.go:89] found id: ""
	I0829 20:29:12.922584   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.922595   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:12.922602   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:12.922688   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:12.965294   67607 cri.go:89] found id: ""
	I0829 20:29:12.965324   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.965335   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:12.965342   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:12.965401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:13.022911   67607 cri.go:89] found id: ""
	I0829 20:29:13.022934   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.022942   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:13.022948   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:13.023009   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:13.077009   67607 cri.go:89] found id: ""
	I0829 20:29:13.077035   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.077043   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:13.077048   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:13.077095   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:13.114202   67607 cri.go:89] found id: ""
	I0829 20:29:13.114233   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.114243   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:13.114251   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:13.114315   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:13.147025   67607 cri.go:89] found id: ""
	I0829 20:29:13.147049   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.147057   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:13.147063   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:13.147110   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:13.183112   67607 cri.go:89] found id: ""
	I0829 20:29:13.183138   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.183148   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:13.183159   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:13.183173   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:13.240558   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:13.240595   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:13.255563   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:13.255589   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:13.322826   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:13.322846   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:13.322857   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:13.399330   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:13.399365   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
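
The cycle above is one iteration of minikube's log-collection fallback: with the API server unreachable, it asks the container runtime directly, via crictl, whether any of the expected control-plane containers exist, and then collects host-level logs instead (the kubelet and CRI-O units via journalctl, dmesg, and a crictl/docker container listing). A minimal bash sketch of the same per-component probe, assuming a host with crictl installed and CRI-O as the runtime (an illustration of the logged commands, not minikube's actual code):

    # probe each expected component the way the logged crictl calls do
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      fi
    done

On this node every probe returns an empty list, which is why each listing is immediately followed by a 'No container was found matching ...' warning.
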
	I0829 20:29:10.850650   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:12.852188   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.706791   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:11.707397   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:13.708663   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:11.993311   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:13.994310   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:16.494854   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
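
The interleaved pod_ready lines above come from three other clusters under test (processes 66841, 66989 and 68084), each polling the Ready condition of its metrics-server pod; the condition never flips to True. A hedged bash equivalent of that poll using kubectl's JSONPath output (the k8s-app=metrics-server label, the 5s interval and the 60-try budget are illustrative assumptions):

    # poll the metrics-server pod's Ready condition until it is True or we give up
    for i in $(seq 1 60); do
      ready=$(kubectl -n kube-system get pod -l k8s-app=metrics-server \
        -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}')
      [ "$ready" = "True" ] && echo "metrics-server is Ready" && exit 0
      sleep 5
    done
    echo "timed out waiting for metrics-server to become Ready" >&2
    exit 1
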
	I0829 20:29:15.938467   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:15.951742   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:15.951812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:15.987492   67607 cri.go:89] found id: ""
	I0829 20:29:15.987517   67607 logs.go:276] 0 containers: []
	W0829 20:29:15.987524   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:15.987530   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:15.987575   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:16.024187   67607 cri.go:89] found id: ""
	I0829 20:29:16.024214   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.024223   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:16.024231   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:16.024291   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:16.058141   67607 cri.go:89] found id: ""
	I0829 20:29:16.058164   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.058171   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:16.058176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:16.058225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:16.092390   67607 cri.go:89] found id: ""
	I0829 20:29:16.092414   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.092421   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:16.092427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:16.092472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:16.130178   67607 cri.go:89] found id: ""
	I0829 20:29:16.130209   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.130219   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:16.130227   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:16.130289   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:16.163867   67607 cri.go:89] found id: ""
	I0829 20:29:16.163900   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.163907   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:16.163913   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:16.163964   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:16.197764   67607 cri.go:89] found id: ""
	I0829 20:29:16.197792   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.197798   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:16.197804   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:16.197850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:16.233357   67607 cri.go:89] found id: ""
	I0829 20:29:16.233383   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.233393   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:16.233403   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:16.233418   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:16.285154   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:16.285188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:16.299057   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:16.299085   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:16.377021   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:16.377041   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:16.377062   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:16.457750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:16.457796   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:15.350415   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:17.850927   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:16.206841   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.207273   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.993478   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:21.493806   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.999133   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:19.016143   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:19.016223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:19.049225   67607 cri.go:89] found id: ""
	I0829 20:29:19.049252   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.049259   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:19.049265   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:19.049317   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:19.085237   67607 cri.go:89] found id: ""
	I0829 20:29:19.085297   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.085314   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:19.085325   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:19.085389   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:19.123476   67607 cri.go:89] found id: ""
	I0829 20:29:19.123501   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.123509   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:19.123514   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:19.123571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:19.159958   67607 cri.go:89] found id: ""
	I0829 20:29:19.159984   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.159993   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:19.160001   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:19.160055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:19.192385   67607 cri.go:89] found id: ""
	I0829 20:29:19.192410   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.192418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:19.192423   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:19.192483   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:19.230781   67607 cri.go:89] found id: ""
	I0829 20:29:19.230804   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.230811   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:19.230816   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:19.230868   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:19.264925   67607 cri.go:89] found id: ""
	I0829 20:29:19.264954   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.264964   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:19.264972   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:19.265032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:19.302461   67607 cri.go:89] found id: ""
	I0829 20:29:19.302484   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.302491   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:19.302499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:19.302510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:19.384799   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:19.384833   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:19.425281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:19.425313   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:19.477380   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:19.477412   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:19.492315   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:19.492350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:19.563428   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
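
Every 'describe nodes' attempt fails the same way: kubectl cannot reach the API server on localhost:8443. That is consistent with the empty crictl listings above; no kube-apiserver container is running, so nothing answers on the port. Two quick checks that would confirm this state on the node (a diagnostic sketch, not part of the test run):

    # confirm nothing is listening on the API server port
    ss -tlnp | grep -q ':8443' || echo "nothing listening on :8443 - kube-apiserver is down"
    # an unauthenticated health probe fails the same way kubectl does
    curl -ksS https://localhost:8443/healthz || true
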
	I0829 20:29:22.064407   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:22.078609   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:22.078670   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:22.112630   67607 cri.go:89] found id: ""
	I0829 20:29:22.112662   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.112672   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:22.112680   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:22.112741   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:22.149078   67607 cri.go:89] found id: ""
	I0829 20:29:22.149108   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.149117   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:22.149124   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:22.149186   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:22.184568   67607 cri.go:89] found id: ""
	I0829 20:29:22.184596   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.184605   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:22.184613   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:22.184682   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:22.220881   67607 cri.go:89] found id: ""
	I0829 20:29:22.220908   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.220919   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:22.220926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:22.220987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:22.256280   67607 cri.go:89] found id: ""
	I0829 20:29:22.256305   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.256314   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:22.256321   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:22.256386   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:22.294546   67607 cri.go:89] found id: ""
	I0829 20:29:22.294580   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.294590   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:22.294597   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:22.294660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:22.332178   67607 cri.go:89] found id: ""
	I0829 20:29:22.332207   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.332215   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:22.332220   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:22.332266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:22.368283   67607 cri.go:89] found id: ""
	I0829 20:29:22.368309   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.368317   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:22.368325   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:22.368336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:22.421800   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:22.421836   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:22.435539   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:22.435565   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:22.504402   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:22.504427   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:22.504441   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:22.588293   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:22.588326   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:19.851801   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:22.351929   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:20.207342   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:22.707546   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:23.493994   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:25.993337   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:25.130766   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:25.144479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:25.144554   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:25.181606   67607 cri.go:89] found id: ""
	I0829 20:29:25.181636   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.181643   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:25.181649   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:25.181697   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:25.220291   67607 cri.go:89] found id: ""
	I0829 20:29:25.220320   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.220328   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:25.220335   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:25.220447   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:25.260947   67607 cri.go:89] found id: ""
	I0829 20:29:25.260975   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.260983   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:25.260988   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:25.261035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:25.298200   67607 cri.go:89] found id: ""
	I0829 20:29:25.298232   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.298243   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:25.298256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:25.298314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:25.333128   67607 cri.go:89] found id: ""
	I0829 20:29:25.333162   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.333174   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:25.333181   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:25.333232   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:25.368951   67607 cri.go:89] found id: ""
	I0829 20:29:25.368979   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.368989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:25.368997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:25.369052   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:25.403687   67607 cri.go:89] found id: ""
	I0829 20:29:25.403715   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.403726   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:25.403734   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:25.403799   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:25.442338   67607 cri.go:89] found id: ""
	I0829 20:29:25.442365   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.442372   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:25.442381   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:25.442395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:25.456313   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:25.456335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:25.528709   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:25.528730   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:25.528744   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:25.609976   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:25.610011   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:25.650044   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:25.650071   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:28.202683   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:28.216971   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:28.217046   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:28.256297   67607 cri.go:89] found id: ""
	I0829 20:29:28.256321   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.256329   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:28.256335   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:28.256379   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:28.289396   67607 cri.go:89] found id: ""
	I0829 20:29:28.289420   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.289427   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:28.289433   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:28.289484   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:28.323589   67607 cri.go:89] found id: ""
	I0829 20:29:28.323616   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.323623   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:28.323630   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:28.323676   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:28.362423   67607 cri.go:89] found id: ""
	I0829 20:29:28.362453   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.362463   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:28.362471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:28.362531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:28.396967   67607 cri.go:89] found id: ""
	I0829 20:29:28.396990   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.396998   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:28.397003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:28.397053   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:28.430714   67607 cri.go:89] found id: ""
	I0829 20:29:28.430744   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.430755   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:28.430762   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:28.430831   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:28.468668   67607 cri.go:89] found id: ""
	I0829 20:29:28.468696   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.468707   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:28.468714   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:28.468777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:28.506678   67607 cri.go:89] found id: ""
	I0829 20:29:28.506705   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.506716   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:28.506727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:28.506741   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:28.545259   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:28.545287   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:28.598249   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:28.598285   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:28.612385   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:28.612429   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:28.685765   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:28.685792   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:28.685806   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:24.851688   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.350456   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:24.708523   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.206094   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:29.207859   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.995492   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:30.494340   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.270074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:31.284357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:31.284417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:31.319530   67607 cri.go:89] found id: ""
	I0829 20:29:31.319558   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.319566   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:31.319571   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:31.319640   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:31.356826   67607 cri.go:89] found id: ""
	I0829 20:29:31.356856   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.356867   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:31.356880   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:31.356934   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:31.390137   67607 cri.go:89] found id: ""
	I0829 20:29:31.390160   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.390167   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:31.390173   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:31.390219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:31.424939   67607 cri.go:89] found id: ""
	I0829 20:29:31.424972   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.424989   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:31.424997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:31.425054   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:31.460896   67607 cri.go:89] found id: ""
	I0829 20:29:31.460921   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.460928   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:31.460935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:31.460985   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:31.498933   67607 cri.go:89] found id: ""
	I0829 20:29:31.498957   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.498967   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:31.498975   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:31.499044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:31.534953   67607 cri.go:89] found id: ""
	I0829 20:29:31.534985   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.534996   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:31.535003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:31.535065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:31.576248   67607 cri.go:89] found id: ""
	I0829 20:29:31.576273   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.576281   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:31.576291   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:31.576307   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:31.628157   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:31.628196   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:31.641564   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:31.641591   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:31.719949   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:31.719973   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:31.719996   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:31.795682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:31.795716   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:29.351248   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.351424   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:33.851397   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.707552   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.207468   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:32.993432   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.993634   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.333468   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:34.347294   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:34.347370   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:34.384885   67607 cri.go:89] found id: ""
	I0829 20:29:34.384910   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.384921   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:34.384928   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:34.384991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:34.422309   67607 cri.go:89] found id: ""
	I0829 20:29:34.422341   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.422351   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:34.422358   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:34.422417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:34.459800   67607 cri.go:89] found id: ""
	I0829 20:29:34.459826   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.459834   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:34.459840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:34.459905   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:34.495600   67607 cri.go:89] found id: ""
	I0829 20:29:34.495624   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.495633   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:34.495647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:34.495708   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:34.531749   67607 cri.go:89] found id: ""
	I0829 20:29:34.531777   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.531788   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:34.531795   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:34.531856   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:34.571057   67607 cri.go:89] found id: ""
	I0829 20:29:34.571088   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.571098   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:34.571105   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:34.571168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:34.609645   67607 cri.go:89] found id: ""
	I0829 20:29:34.609676   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.609687   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:34.609695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:34.609753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:34.647199   67607 cri.go:89] found id: ""
	I0829 20:29:34.647233   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.647244   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:34.647255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:34.647269   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:34.661390   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:34.661420   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:34.737590   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:34.737613   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:34.737625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:34.820682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:34.820721   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:34.861697   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:34.861723   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:37.412384   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:37.426081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:37.426162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:37.461302   67607 cri.go:89] found id: ""
	I0829 20:29:37.461332   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.461342   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:37.461349   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:37.461416   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:37.500869   67607 cri.go:89] found id: ""
	I0829 20:29:37.500898   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.500908   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:37.500915   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:37.500970   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:37.536908   67607 cri.go:89] found id: ""
	I0829 20:29:37.536932   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.536942   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:37.536949   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:37.537010   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:37.571939   67607 cri.go:89] found id: ""
	I0829 20:29:37.571969   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.571979   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:37.571987   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:37.572048   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:37.607834   67607 cri.go:89] found id: ""
	I0829 20:29:37.607864   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.607883   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:37.607891   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:37.607952   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:37.643932   67607 cri.go:89] found id: ""
	I0829 20:29:37.643963   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.643971   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:37.643978   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:37.644037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:37.678148   67607 cri.go:89] found id: ""
	I0829 20:29:37.678177   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.678188   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:37.678195   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:37.678257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:37.713170   67607 cri.go:89] found id: ""
	I0829 20:29:37.713195   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.713209   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:37.713219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:37.713233   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:37.752538   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:37.752567   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:37.802888   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:37.802923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:37.816546   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:37.816585   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:37.891647   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:37.891667   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:37.891680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:35.851668   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:38.351371   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:36.208220   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:38.708523   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:36.994441   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:39.493291   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:40.472354   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:40.486186   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:40.486252   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:40.520935   67607 cri.go:89] found id: ""
	I0829 20:29:40.520963   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.520971   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:40.520977   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:40.521037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:40.561399   67607 cri.go:89] found id: ""
	I0829 20:29:40.561428   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.561440   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:40.561447   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:40.561514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:40.601821   67607 cri.go:89] found id: ""
	I0829 20:29:40.601846   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.601855   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:40.601862   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:40.601918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:40.636429   67607 cri.go:89] found id: ""
	I0829 20:29:40.636454   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.636462   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:40.636468   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:40.636525   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:40.670781   67607 cri.go:89] found id: ""
	I0829 20:29:40.670816   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.670828   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:40.670836   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:40.670912   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:40.706635   67607 cri.go:89] found id: ""
	I0829 20:29:40.706663   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.706674   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:40.706682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:40.706739   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:40.741657   67607 cri.go:89] found id: ""
	I0829 20:29:40.741687   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.741695   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:40.741707   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:40.741770   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:40.777028   67607 cri.go:89] found id: ""
	I0829 20:29:40.777057   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.777066   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:40.777077   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:40.777093   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:40.829387   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:40.829424   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:40.843928   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:40.843956   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:40.917965   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:40.917992   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:40.918008   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:41.001880   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:41.001925   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
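
Each cycle opens with a pgrep probe for a kube-apiserver process before falling back to the crictl checks, and the whole sequence repeats every few seconds until the process appears or the test times out. A standalone sketch of that wait, using the same pattern the log shows (the 3s sleep is an assumption):

    # wait until a kube-apiserver process started for this minikube profile appears
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done
    echo "kube-apiserver process found"
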
	I0829 20:29:43.549007   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:43.563446   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:43.563502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:43.598503   67607 cri.go:89] found id: ""
	I0829 20:29:43.598548   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.598557   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:43.598564   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:43.598614   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:43.634169   67607 cri.go:89] found id: ""
	I0829 20:29:43.634200   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.634210   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:43.634218   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:43.634280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:43.670467   67607 cri.go:89] found id: ""
	I0829 20:29:43.670492   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.670500   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:43.670506   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:43.670580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:43.706812   67607 cri.go:89] found id: ""
	I0829 20:29:43.706839   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.706849   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:43.706857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:43.706922   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:43.741577   67607 cri.go:89] found id: ""
	I0829 20:29:43.741606   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.741612   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:43.741620   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:43.741700   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:43.776552   67607 cri.go:89] found id: ""
	I0829 20:29:43.776595   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.776625   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:43.776635   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:43.776701   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:43.816229   67607 cri.go:89] found id: ""
	I0829 20:29:43.816264   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.816274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:43.816281   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:43.816346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:40.850705   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:42.850904   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:40.709080   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:43.207700   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:41.994216   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:44.492986   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:46.494171   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
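	The interleaved pod_ready lines come from three concurrently running minikube processes (PIDs 66841, 66989, 68084), each polling a metrics-server pod in kube-system that never reports Ready. A minimal manual triage for that symptom, assuming kubectl access and the conventional k8s-app=metrics-server label (the context name below is a placeholder, not taken from this run):

	    # List the metrics-server pod and inspect its events (context is a placeholder):
	    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server -o wide
	    kubectl --context <profile> -n kube-system describe pod -l k8s-app=metrics-server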
	I0829 20:29:43.860726   67607 cri.go:89] found id: ""
	I0829 20:29:43.860753   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.860761   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:43.860768   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:43.860783   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:43.874311   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:43.874340   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:43.952243   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
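	The "connection to the server localhost:8443 was refused" stderr is consistent with the empty crictl listings above: no kube-apiserver container exists, so nothing is serving the apiserver port on the node. A direct check from inside the node, as a sketch assuming standard util-linux tooling is present:

	    # Confirm there is no listener on the apiserver port:
	    sudo ss -tlnp | grep -w 8443 || echo 'no listener on 8443'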
	I0829 20:29:43.952272   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:43.952288   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:44.032276   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:44.032312   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:44.075537   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:44.075571   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
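	Each block like the one above is a single iteration of minikube's apiserver wait loop: it pgreps for a kube-apiserver process, asks crictl for each expected container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and when every listing comes back empty it gathers the kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying a few seconds later. The probe itself can be reproduced by hand over SSH; a sketch, with the profile name as a placeholder:

	    # One iteration of the probe, run manually (profile is a placeholder):
	    minikube -p <profile> ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name=kube-apiserver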
	I0829 20:29:46.632798   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:46.645878   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:46.645948   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:46.683682   67607 cri.go:89] found id: ""
	I0829 20:29:46.683711   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.683720   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:46.683726   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:46.683775   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:46.727985   67607 cri.go:89] found id: ""
	I0829 20:29:46.728012   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.728024   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:46.728031   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:46.728090   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:46.762142   67607 cri.go:89] found id: ""
	I0829 20:29:46.762166   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.762174   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:46.762180   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:46.762226   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:46.802423   67607 cri.go:89] found id: ""
	I0829 20:29:46.802453   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.802464   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:46.802471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:46.802515   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:46.840382   67607 cri.go:89] found id: ""
	I0829 20:29:46.840411   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.840418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:46.840425   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:46.840473   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:46.878438   67607 cri.go:89] found id: ""
	I0829 20:29:46.878466   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.878476   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:46.878483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:46.878562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:46.913589   67607 cri.go:89] found id: ""
	I0829 20:29:46.913618   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.913625   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:46.913631   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:46.913678   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:46.948894   67607 cri.go:89] found id: ""
	I0829 20:29:46.948922   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.948929   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:46.948938   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:46.948949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:47.005709   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:47.005745   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:47.030316   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:47.030343   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:47.105899   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:47.105920   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:47.105932   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:47.189405   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:47.189442   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:45.352639   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:47.850647   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:45.709140   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:48.207411   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:48.994239   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:51.493287   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:49.727745   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:49.742061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:49.742131   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:49.777428   67607 cri.go:89] found id: ""
	I0829 20:29:49.777456   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.777464   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:49.777471   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:49.777531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:49.811611   67607 cri.go:89] found id: ""
	I0829 20:29:49.811639   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.811646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:49.811653   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:49.811709   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:49.844962   67607 cri.go:89] found id: ""
	I0829 20:29:49.844987   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.844995   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:49.845006   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:49.845062   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:49.880259   67607 cri.go:89] found id: ""
	I0829 20:29:49.880286   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.880297   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:49.880305   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:49.880366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:49.915889   67607 cri.go:89] found id: ""
	I0829 20:29:49.915918   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.915926   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:49.915932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:49.915988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:49.953146   67607 cri.go:89] found id: ""
	I0829 20:29:49.953174   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.953182   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:49.953189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:49.953240   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:49.990689   67607 cri.go:89] found id: ""
	I0829 20:29:49.990721   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.990730   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:49.990738   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:49.990792   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:50.024775   67607 cri.go:89] found id: ""
	I0829 20:29:50.024806   67607 logs.go:276] 0 containers: []
	W0829 20:29:50.024817   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:50.024827   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:50.024842   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:50.079030   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:50.079064   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:50.093178   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:50.093205   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:50.171476   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:50.171499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:50.171512   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:50.252913   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:50.252946   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
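	The kubelet and CRI-O sections gathered in each iteration are simply the journald tails shown in the Run: lines; to pull the same data interactively inside the node (--no-pager is added here for interactive use, the probe itself omits it):

	    # Last 400 lines of each unit, without a pager:
	    sudo journalctl -u kubelet -n 400 --no-pager
	    sudo journalctl -u crio -n 400 --no-pager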
	I0829 20:29:52.799818   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:52.812857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:52.812930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:52.850736   67607 cri.go:89] found id: ""
	I0829 20:29:52.850761   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.850770   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:52.850777   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:52.850834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:52.888892   67607 cri.go:89] found id: ""
	I0829 20:29:52.888916   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.888923   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:52.888929   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:52.888975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:52.925390   67607 cri.go:89] found id: ""
	I0829 20:29:52.925418   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.925428   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:52.925435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:52.925501   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:52.960329   67607 cri.go:89] found id: ""
	I0829 20:29:52.960352   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.960360   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:52.960366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:52.960413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:52.994899   67607 cri.go:89] found id: ""
	I0829 20:29:52.994927   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.994935   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:52.994941   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:52.994995   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:53.033028   67607 cri.go:89] found id: ""
	I0829 20:29:53.033057   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.033068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:53.033076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:53.033136   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:53.068353   67607 cri.go:89] found id: ""
	I0829 20:29:53.068381   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.068389   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:53.068394   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:53.068441   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:53.104496   67607 cri.go:89] found id: ""
	I0829 20:29:53.104524   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.104534   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:53.104545   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:53.104560   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:53.175777   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:53.175810   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:53.175827   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:53.257362   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:53.257396   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:53.295822   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:53.295850   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:53.351237   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:53.351263   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
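	For reference, the dmesg invocation above limits the kernel log to warning-and-above records: -P disables the pager, -H selects human-readable output, -L=never suppresses color, --level restricts severities, and the trailing tail keeps the last 400 lines. Spelled out with long options (a sketch, assuming a util-linux dmesg):

	    sudo dmesg --nopager --human --color=never --level=warn,err,crit,alert,emerg | tail -n 400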
	I0829 20:29:49.851324   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:52.350768   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:50.707986   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:53.206918   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:53.494828   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.994443   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.864680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:55.879324   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:55.879391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:55.914454   67607 cri.go:89] found id: ""
	I0829 20:29:55.914479   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.914490   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:55.914498   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:55.914592   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:55.953778   67607 cri.go:89] found id: ""
	I0829 20:29:55.953804   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.953814   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:55.953821   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:55.953883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:55.994659   67607 cri.go:89] found id: ""
	I0829 20:29:55.994681   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.994689   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:55.994697   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:55.994768   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:56.031262   67607 cri.go:89] found id: ""
	I0829 20:29:56.031288   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.031299   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:56.031306   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:56.031366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:56.063748   67607 cri.go:89] found id: ""
	I0829 20:29:56.063776   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.063785   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:56.063793   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:56.063883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:56.098024   67607 cri.go:89] found id: ""
	I0829 20:29:56.098060   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.098068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:56.098074   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:56.098127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:56.141340   67607 cri.go:89] found id: ""
	I0829 20:29:56.141364   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.141374   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:56.141381   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:56.141440   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:56.176668   67607 cri.go:89] found id: ""
	I0829 20:29:56.176696   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.176707   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:56.176717   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:56.176731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:56.216294   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:56.216322   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:56.269404   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:56.269440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:56.283134   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:56.283160   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:56.355005   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:56.355023   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:56.355035   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:54.851658   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:57.350247   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.207477   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:57.708007   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:58.493689   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:00.998990   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:58.937406   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:58.950924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:58.950981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:58.986748   67607 cri.go:89] found id: ""
	I0829 20:29:58.986778   67607 logs.go:276] 0 containers: []
	W0829 20:29:58.986788   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:58.986795   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:58.986861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:59.023737   67607 cri.go:89] found id: ""
	I0829 20:29:59.023763   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.023773   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:59.023780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:59.023840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:59.060245   67607 cri.go:89] found id: ""
	I0829 20:29:59.060274   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.060284   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:59.060291   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:59.060352   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:59.102467   67607 cri.go:89] found id: ""
	I0829 20:29:59.102493   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.102501   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:59.102507   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:59.102581   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:59.142601   67607 cri.go:89] found id: ""
	I0829 20:29:59.142625   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.142634   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:59.142647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:59.142717   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:59.186683   67607 cri.go:89] found id: ""
	I0829 20:29:59.186707   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.186715   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:59.186723   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:59.186783   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:59.232104   67607 cri.go:89] found id: ""
	I0829 20:29:59.232136   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.232154   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:59.232162   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:59.232227   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:59.276416   67607 cri.go:89] found id: ""
	I0829 20:29:59.276442   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.276452   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:59.276462   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:59.276479   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:59.341741   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:59.341779   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:59.357312   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:59.357336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:59.425653   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:59.425674   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:59.425689   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:59.505365   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:59.505403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
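	The container-status step uses a small shell fallback so the probe works whether crictl is installed or only Docker is present: the backtick substitution resolves crictl's path (or leaves the bare name, letting the first command fail cleanly), and the || chain then tries docker. The same idiom in modern $() form:

	    # Prefer crictl when available, otherwise fall back to docker:
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a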
	I0829 20:30:02.049195   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:02.064558   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:02.064641   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:02.102141   67607 cri.go:89] found id: ""
	I0829 20:30:02.102188   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.102209   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:02.102217   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:02.102282   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:02.138610   67607 cri.go:89] found id: ""
	I0829 20:30:02.138640   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.138650   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:02.138658   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:02.138724   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:02.175391   67607 cri.go:89] found id: ""
	I0829 20:30:02.175423   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.175435   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:02.175442   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:02.175505   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:02.212956   67607 cri.go:89] found id: ""
	I0829 20:30:02.212981   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.212991   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:02.212998   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:02.213059   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:02.254444   67607 cri.go:89] found id: ""
	I0829 20:30:02.254467   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.254475   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:02.254481   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:02.254568   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:02.293232   67607 cri.go:89] found id: ""
	I0829 20:30:02.293260   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.293270   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:02.293277   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:02.293348   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:02.328300   67607 cri.go:89] found id: ""
	I0829 20:30:02.328329   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.328339   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:02.328346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:02.328407   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:02.363467   67607 cri.go:89] found id: ""
	I0829 20:30:02.363495   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.363505   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:02.363514   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:02.363528   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:02.414357   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:02.414394   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:02.428229   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:02.428259   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:02.503640   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:02.503661   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:02.503674   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:02.584052   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:02.584087   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:59.352485   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:01.850334   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:59.717029   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:02.208354   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:03.494326   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:05.494833   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:05.124345   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:05.143530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:05.143594   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:05.195985   67607 cri.go:89] found id: ""
	I0829 20:30:05.196014   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.196024   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:05.196032   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:05.196092   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:05.254315   67607 cri.go:89] found id: ""
	I0829 20:30:05.254343   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.254354   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:05.254362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:05.254432   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:05.306756   67607 cri.go:89] found id: ""
	I0829 20:30:05.306781   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.306788   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:05.306794   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:05.306852   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:05.345200   67607 cri.go:89] found id: ""
	I0829 20:30:05.345225   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.345235   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:05.345242   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:05.345297   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:05.384038   67607 cri.go:89] found id: ""
	I0829 20:30:05.384064   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.384074   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:05.384081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:05.384140   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:05.420177   67607 cri.go:89] found id: ""
	I0829 20:30:05.420201   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.420208   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:05.420214   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:05.420260   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:05.453492   67607 cri.go:89] found id: ""
	I0829 20:30:05.453513   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.453521   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:05.453526   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:05.453573   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:05.491591   67607 cri.go:89] found id: ""
	I0829 20:30:05.491618   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.491628   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:05.491638   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:05.491701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:05.580458   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:05.580503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:05.620137   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:05.620169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:05.672137   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:05.672177   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:05.685946   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:05.685973   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:05.755176   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
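	The pgrep probe that opens each iteration matches against the full apiserver command line: -f matches the pattern against the whole argument list, -x requires the match to be exact rather than a substring, and -n returns only the newest matching PID. Run standalone:

	    # Does any kube-apiserver process mention "minikube" in its command line?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo running || echo 'not running'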
	I0829 20:30:08.256255   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:08.269099   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:08.269160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:08.302552   67607 cri.go:89] found id: ""
	I0829 20:30:08.302578   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.302585   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:08.302591   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:08.302639   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:08.340683   67607 cri.go:89] found id: ""
	I0829 20:30:08.340711   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.340718   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:08.340726   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:08.340778   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:08.387389   67607 cri.go:89] found id: ""
	I0829 20:30:08.387416   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.387424   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:08.387430   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:08.387477   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:08.421303   67607 cri.go:89] found id: ""
	I0829 20:30:08.421330   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.421340   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:08.421348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:08.421409   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:08.458648   67607 cri.go:89] found id: ""
	I0829 20:30:08.458677   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.458688   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:08.458695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:08.458758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:08.498748   67607 cri.go:89] found id: ""
	I0829 20:30:08.498776   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.498784   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:08.498790   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:08.498845   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:08.536859   67607 cri.go:89] found id: ""
	I0829 20:30:08.536889   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.536896   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:08.536902   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:08.536963   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:08.570685   67607 cri.go:89] found id: ""
	I0829 20:30:08.570713   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.570723   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:08.570734   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:08.570748   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:08.621904   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:08.621938   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:08.636367   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:08.636391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:08.703796   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:08.703824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:08.703838   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:08.785084   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:08.785120   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:04.350230   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:06.849598   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:08.850961   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:04.708012   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:07.206604   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:09.207368   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:07.993015   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:09.994043   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:11.326633   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:11.339570   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:11.339637   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:11.374132   67607 cri.go:89] found id: ""
	I0829 20:30:11.374155   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.374163   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:11.374169   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:11.374234   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:11.409004   67607 cri.go:89] found id: ""
	I0829 20:30:11.409036   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.409047   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:11.409054   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:11.409119   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:11.444598   67607 cri.go:89] found id: ""
	I0829 20:30:11.444625   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.444635   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:11.444643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:11.444704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:11.481912   67607 cri.go:89] found id: ""
	I0829 20:30:11.481942   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.481953   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:11.481961   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:11.482025   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:11.516436   67607 cri.go:89] found id: ""
	I0829 20:30:11.516466   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.516477   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:11.516483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:11.516536   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:11.554762   67607 cri.go:89] found id: ""
	I0829 20:30:11.554787   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.554795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:11.554801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:11.554857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:11.588902   67607 cri.go:89] found id: ""
	I0829 20:30:11.588931   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.588942   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:11.588950   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:11.589011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:11.621346   67607 cri.go:89] found id: ""
	I0829 20:30:11.621368   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.621376   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:11.621383   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:11.621395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:11.659671   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:11.659703   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:11.711288   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:11.711315   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:11.725285   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:11.725310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:11.801713   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:11.801735   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:11.801750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:10.851075   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:13.349510   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:11.208203   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:13.706599   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:12.494548   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:14.993188   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:14.382313   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:14.395852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:14.395926   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:14.438735   67607 cri.go:89] found id: ""
	I0829 20:30:14.438762   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.438772   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:14.438778   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:14.438840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:14.477886   67607 cri.go:89] found id: ""
	I0829 20:30:14.477928   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.477937   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:14.477943   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:14.478000   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:14.517627   67607 cri.go:89] found id: ""
	I0829 20:30:14.517654   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.517664   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:14.517670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:14.517734   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:14.557247   67607 cri.go:89] found id: ""
	I0829 20:30:14.557272   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.557280   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:14.557286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:14.557345   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:14.591364   67607 cri.go:89] found id: ""
	I0829 20:30:14.591388   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.591398   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:14.591406   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:14.591468   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:14.627517   67607 cri.go:89] found id: ""
	I0829 20:30:14.627539   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.627546   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:14.627551   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:14.627604   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:14.662388   67607 cri.go:89] found id: ""
	I0829 20:30:14.662409   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.662419   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:14.662432   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:14.662488   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:14.695277   67607 cri.go:89] found id: ""
	I0829 20:30:14.695307   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.695316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:14.695324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:14.695335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:14.735824   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:14.735852   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:14.792607   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:14.792642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:14.808881   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:14.808910   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:14.879804   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:14.879824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:14.879837   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:17.459817   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:17.474813   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:17.474887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:17.509885   67607 cri.go:89] found id: ""
	I0829 20:30:17.509913   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.509923   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:17.509930   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:17.509987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:17.543931   67607 cri.go:89] found id: ""
	I0829 20:30:17.543959   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.543968   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:17.543973   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:17.544021   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:17.580944   67607 cri.go:89] found id: ""
	I0829 20:30:17.580972   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.580980   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:17.580986   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:17.581033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:17.620061   67607 cri.go:89] found id: ""
	I0829 20:30:17.620088   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.620097   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:17.620103   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:17.620148   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:17.658675   67607 cri.go:89] found id: ""
	I0829 20:30:17.658706   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.658717   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:17.658724   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:17.658788   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:17.694424   67607 cri.go:89] found id: ""
	I0829 20:30:17.694453   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.694462   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:17.694467   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:17.694571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:17.727425   67607 cri.go:89] found id: ""
	I0829 20:30:17.727450   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.727456   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:17.727462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:17.727510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:17.767915   67607 cri.go:89] found id: ""
	I0829 20:30:17.767946   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.767956   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:17.767965   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:17.767977   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:17.837556   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:17.837580   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:17.837593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:17.921601   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:17.921638   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:17.960999   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:17.961026   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:18.013654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:18.013691   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
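	Each retry of the loop above runs the same node-level diagnostics; since the API server never comes up, every crictl query returns an empty ID list and "describe nodes" fails against localhost:8443. A minimal shell sketch of one pass (all commands taken verbatim from the log; assumes SSH access to the node):
	
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	           kube-controller-manager kindnet kubernetes-dashboard; do
	    sudo crictl ps -a --quiet --name="$c"        # empty output => "No container was found"
	  done
	  sudo journalctl -u kubelet -n 400              # kubelet logs
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig    # fails while localhost:8443 is refused
	  sudo journalctl -u crio -n 400                 # CRI-O logs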
	I0829 20:30:15.351372   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:17.850896   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:16.206810   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:18.207702   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:16.993566   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:18.997786   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:21.493705   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:20.528244   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:20.542116   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:20.542190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:20.578905   67607 cri.go:89] found id: ""
	I0829 20:30:20.578936   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.578947   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:20.578954   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:20.579003   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:20.613543   67607 cri.go:89] found id: ""
	I0829 20:30:20.613567   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.613574   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:20.613579   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:20.613627   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:20.649322   67607 cri.go:89] found id: ""
	I0829 20:30:20.649344   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.649352   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:20.649366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:20.649429   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:20.684851   67607 cri.go:89] found id: ""
	I0829 20:30:20.684878   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.684886   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:20.684892   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:20.684950   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:20.722016   67607 cri.go:89] found id: ""
	I0829 20:30:20.722045   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.722054   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:20.722062   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:20.722125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:20.757594   67607 cri.go:89] found id: ""
	I0829 20:30:20.757626   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.757637   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:20.757644   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:20.757707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:20.793694   67607 cri.go:89] found id: ""
	I0829 20:30:20.793728   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.793738   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:20.793746   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:20.793812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:20.829709   67607 cri.go:89] found id: ""
	I0829 20:30:20.829736   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.829747   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:20.829758   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:20.829782   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:20.888838   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:20.888888   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:20.903530   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:20.903556   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:20.972460   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:20.972488   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:20.972503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:21.055556   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:21.055593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:23.597355   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:23.611091   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:23.611162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:23.649469   67607 cri.go:89] found id: ""
	I0829 20:30:23.649493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.649501   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:23.649510   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:23.649562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:23.684530   67607 cri.go:89] found id: ""
	I0829 20:30:23.684554   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.684561   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:23.684571   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:23.684625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:23.720466   67607 cri.go:89] found id: ""
	I0829 20:30:23.720493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.720503   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:23.720510   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:23.720563   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:23.755013   67607 cri.go:89] found id: ""
	I0829 20:30:23.755042   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.755053   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:23.755061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:23.755127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:23.795212   67607 cri.go:89] found id: ""
	I0829 20:30:23.795243   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.795254   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:23.795263   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:23.795320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:20.349781   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:22.350157   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:20.707723   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.206214   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.994457   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:26.493771   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.832912   67607 cri.go:89] found id: ""
	I0829 20:30:23.832941   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.832951   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:23.832959   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:23.833015   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:23.869896   67607 cri.go:89] found id: ""
	I0829 20:30:23.869930   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.869939   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:23.869947   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:23.870011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:23.908111   67607 cri.go:89] found id: ""
	I0829 20:30:23.908136   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.908145   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:23.908155   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:23.908170   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:23.988489   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:23.988510   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:23.988525   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:24.063246   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:24.063280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:24.102943   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:24.102974   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:24.157255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:24.157294   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:26.671966   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:26.684755   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:26.684830   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:26.721125   67607 cri.go:89] found id: ""
	I0829 20:30:26.721150   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.721158   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:26.721164   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:26.721219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:26.756328   67607 cri.go:89] found id: ""
	I0829 20:30:26.756349   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.756356   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:26.756362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:26.756420   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:26.791711   67607 cri.go:89] found id: ""
	I0829 20:30:26.791751   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.791763   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:26.791774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:26.791857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:26.827215   67607 cri.go:89] found id: ""
	I0829 20:30:26.827244   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.827254   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:26.827261   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:26.827321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:26.863461   67607 cri.go:89] found id: ""
	I0829 20:30:26.863486   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.863497   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:26.863505   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:26.863569   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:26.900037   67607 cri.go:89] found id: ""
	I0829 20:30:26.900065   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.900075   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:26.900083   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:26.900139   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:26.937236   67607 cri.go:89] found id: ""
	I0829 20:30:26.937263   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.937274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:26.937282   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:26.937340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:26.970281   67607 cri.go:89] found id: ""
	I0829 20:30:26.970312   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.970322   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:26.970332   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:26.970345   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:27.041485   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:27.041511   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:27.041526   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:27.120774   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:27.120807   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:27.159656   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:27.159685   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:27.213322   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:27.213356   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:24.350464   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:26.351419   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:28.850079   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:25.207838   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:27.708107   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:28.993552   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:31.494259   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:29.729066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:29.742044   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:29.742099   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:29.777426   67607 cri.go:89] found id: ""
	I0829 20:30:29.777454   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.777462   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:29.777468   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:29.777529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:29.814353   67607 cri.go:89] found id: ""
	I0829 20:30:29.814381   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.814392   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:29.814401   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:29.814462   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:29.853754   67607 cri.go:89] found id: ""
	I0829 20:30:29.853783   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.853793   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:29.853801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:29.853869   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:29.893966   67607 cri.go:89] found id: ""
	I0829 20:30:29.893991   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.893998   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:29.894003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:29.894057   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:29.929452   67607 cri.go:89] found id: ""
	I0829 20:30:29.929483   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.929492   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:29.929502   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:29.929561   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:29.965880   67607 cri.go:89] found id: ""
	I0829 20:30:29.965906   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.965916   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:29.965924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:29.965986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:30.002192   67607 cri.go:89] found id: ""
	I0829 20:30:30.002226   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.002237   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:30.002245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:30.002320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:30.037603   67607 cri.go:89] found id: ""
	I0829 20:30:30.037640   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.037651   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:30.037662   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:30.037677   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:30.094128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:30.094168   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:30.110667   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:30.110701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:30.188355   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:30.188375   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:30.188388   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:30.270750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:30.270785   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:32.809472   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:32.823099   67607 kubeadm.go:597] duration metric: took 4m3.15684598s to restartPrimaryControlPlane
	W0829 20:30:32.823188   67607 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:30:32.823224   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:30:33.322987   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:30:33.338134   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:30:33.348586   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:30:33.358672   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:30:33.358692   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:30:33.358748   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:30:33.367955   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:30:33.368000   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:30:33.377565   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:30:33.386317   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:30:33.386377   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:30:33.396356   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.406228   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:30:33.406281   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.418323   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:30:33.427595   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:30:33.427657   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
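	The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane URL, and any file that lacks it is removed before kubeadm init (here all four files are simply absent). A sketch of the equivalent loop (paths and URL verbatim from the log; the -q flag is an added convenience, not what the harness runs):
	
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"   # missing or mismatched => remove before kubeadm init
	  done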
	I0829 20:30:33.437520   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:30:33.511159   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:30:33.511279   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:30:33.669988   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:30:33.670133   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:30:33.670267   67607 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:30:33.859908   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:30:30.850893   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:32.851574   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:30.207012   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:32.206405   66989 pod_ready.go:82] duration metric: took 4m0.005864609s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	E0829 20:30:32.206426   66989 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0829 20:30:32.206433   66989 pod_ready.go:39] duration metric: took 4m5.570928284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:30:32.206448   66989 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:30:32.206482   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:32.206528   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:32.260213   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:32.260242   66989 cri.go:89] found id: ""
	I0829 20:30:32.260252   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:32.260314   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.265201   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:32.265276   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:32.307620   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:32.307648   66989 cri.go:89] found id: ""
	I0829 20:30:32.307656   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:32.307701   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.312372   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:32.312430   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:32.350059   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:32.350092   66989 cri.go:89] found id: ""
	I0829 20:30:32.350102   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:32.350158   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.354624   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:32.354681   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:32.393968   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:32.393988   66989 cri.go:89] found id: ""
	I0829 20:30:32.393995   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:32.394039   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.398674   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:32.398745   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:32.433038   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:32.433064   66989 cri.go:89] found id: ""
	I0829 20:30:32.433074   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:32.433118   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.436969   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:32.437028   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:32.472768   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:32.472786   66989 cri.go:89] found id: ""
	I0829 20:30:32.472793   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:32.472842   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.477466   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:32.477536   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:32.514464   66989 cri.go:89] found id: ""
	I0829 20:30:32.514492   66989 logs.go:276] 0 containers: []
	W0829 20:30:32.514502   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:32.514509   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:32.514591   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:32.551429   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:32.551452   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:32.551456   66989 cri.go:89] found id: ""
	I0829 20:30:32.551463   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:32.551508   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.555697   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.559864   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:32.559883   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:32.609776   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:32.609803   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:32.648419   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:32.648446   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:32.685938   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:32.685969   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:32.728665   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:32.728693   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:32.770030   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:32.770068   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:32.907821   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:32.907850   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:32.923119   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:32.923149   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:32.979819   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:32.979853   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:33.020472   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:33.020496   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:33.074802   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:33.074838   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:33.112043   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:33.112072   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:33.624274   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:33.624316   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
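	Unlike the dead cluster above, this node (PID 66989) has live control-plane containers, so each ID returned by crictl ps is fed straight into crictl logs. A sketch of one such step, using etcd as the example (commands verbatim from the log; the ID shown resolves to 5ea75e14a71d... above):
	
	  id=$(sudo crictl ps -a --quiet --name=etcd)                 # one ID per matching container
	  [ -n "$id" ] && sudo /usr/bin/crictl logs --tail 400 "$id"  # tail the last 400 log lines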
	I0829 20:30:33.861742   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:30:33.861849   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:30:33.861946   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:30:33.862075   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:30:33.862174   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:30:33.862276   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:30:33.862366   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:30:33.862467   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:30:33.862573   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:30:33.862794   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:30:33.863226   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:30:33.863323   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:30:33.863417   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:30:34.065914   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:30:34.235581   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:30:34.660452   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:30:34.724718   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:30:34.743897   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:30:34.746263   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:30:34.746369   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:30:34.893824   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:30:33.494825   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:35.994300   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:34.895805   67607 out.go:235]   - Booting up control plane ...
	I0829 20:30:34.895941   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:30:34.904294   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:30:34.915103   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:30:34.915744   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:30:34.917923   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:30:35.351975   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:37.352013   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:36.202184   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:36.218838   66989 api_server.go:72] duration metric: took 4m17.334186395s to wait for apiserver process to appear ...
	I0829 20:30:36.218870   66989 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:30:36.218910   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:36.218963   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:36.263205   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:36.263233   66989 cri.go:89] found id: ""
	I0829 20:30:36.263243   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:36.263292   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.267466   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:36.267522   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:36.303894   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:36.303930   66989 cri.go:89] found id: ""
	I0829 20:30:36.303938   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:36.303996   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.308089   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:36.308170   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:36.347320   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:36.347392   66989 cri.go:89] found id: ""
	I0829 20:30:36.347414   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:36.347485   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.352121   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:36.352174   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:36.389760   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:36.389784   66989 cri.go:89] found id: ""
	I0829 20:30:36.389793   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:36.389853   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.394860   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:36.394919   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:36.430562   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:36.430587   66989 cri.go:89] found id: ""
	I0829 20:30:36.430597   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:36.430655   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.435151   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:36.435226   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:36.470714   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:36.470742   66989 cri.go:89] found id: ""
	I0829 20:30:36.470750   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:36.470816   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.475382   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:36.475446   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:36.514853   66989 cri.go:89] found id: ""
	I0829 20:30:36.514888   66989 logs.go:276] 0 containers: []
	W0829 20:30:36.514898   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:36.514910   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:36.514971   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:36.548229   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:36.548252   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:36.548256   66989 cri.go:89] found id: ""
	I0829 20:30:36.548263   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:36.548314   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.552484   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.556661   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:36.556681   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:36.622985   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:36.623019   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:36.678770   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:36.678799   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:36.731822   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:36.731849   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:36.768451   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:36.768482   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:36.803818   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:36.803846   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:37.225805   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:37.225849   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:37.245421   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:37.245458   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:37.358238   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:37.358266   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:37.401876   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:37.401913   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:37.438189   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:37.438223   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:37.475404   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:37.475433   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:37.511876   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:37.511903   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:38.493604   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:40.494396   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:40.054097   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:30:40.058474   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0829 20:30:40.059830   66989 api_server.go:141] control plane version: v1.31.0
	I0829 20:30:40.059850   66989 api_server.go:131] duration metric: took 3.840972907s to wait for apiserver health ...
	I0829 20:30:40.059857   66989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:30:40.059877   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:40.059924   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:40.101978   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:40.102003   66989 cri.go:89] found id: ""
	I0829 20:30:40.102013   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:40.102073   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.107429   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:40.107496   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:40.145052   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:40.145078   66989 cri.go:89] found id: ""
	I0829 20:30:40.145086   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:40.145133   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.149329   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:40.149394   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:40.187740   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:40.187769   66989 cri.go:89] found id: ""
	I0829 20:30:40.187778   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:40.187838   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.192085   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:40.192156   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:40.231992   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:40.232010   66989 cri.go:89] found id: ""
	I0829 20:30:40.232017   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:40.232060   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.236275   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:40.236333   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:40.279637   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:40.279660   66989 cri.go:89] found id: ""
	I0829 20:30:40.279669   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:40.279727   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.288800   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:40.288876   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:40.341222   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:40.341248   66989 cri.go:89] found id: ""
	I0829 20:30:40.341258   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:40.341322   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.346013   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:40.346088   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:40.383801   66989 cri.go:89] found id: ""
	I0829 20:30:40.383828   66989 logs.go:276] 0 containers: []
	W0829 20:30:40.383836   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:40.383842   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:40.383896   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:40.421847   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:40.421874   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:40.421879   66989 cri.go:89] found id: ""
	I0829 20:30:40.421889   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:40.421950   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.426229   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.429902   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:40.429931   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:40.471015   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:40.471039   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:40.831575   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:40.831612   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:40.846195   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:40.846230   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:40.905469   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:40.905507   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:40.952303   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:40.952337   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:41.001278   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:41.001309   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:41.071045   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:41.071089   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:41.120024   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:41.120050   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:41.191412   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:41.191445   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:41.321848   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:41.321874   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:41.370807   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:41.370833   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:41.405913   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:41.405939   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:43.948957   66989 system_pods.go:59] 8 kube-system pods found
	I0829 20:30:43.948987   66989 system_pods.go:61] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running
	I0829 20:30:43.948992   66989 system_pods.go:61] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running
	I0829 20:30:43.948996   66989 system_pods.go:61] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running
	I0829 20:30:43.948999   66989 system_pods.go:61] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running
	I0829 20:30:43.949003   66989 system_pods.go:61] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running
	I0829 20:30:43.949006   66989 system_pods.go:61] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running
	I0829 20:30:43.949011   66989 system_pods.go:61] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:30:43.949015   66989 system_pods.go:61] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running
	I0829 20:30:43.949022   66989 system_pods.go:74] duration metric: took 3.889159839s to wait for pod list to return data ...
	I0829 20:30:43.949028   66989 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:30:43.951906   66989 default_sa.go:45] found service account: "default"
	I0829 20:30:43.951932   66989 default_sa.go:55] duration metric: took 2.897769ms for default service account to be created ...
	I0829 20:30:43.951943   66989 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:30:43.959246   66989 system_pods.go:86] 8 kube-system pods found
	I0829 20:30:43.959269   66989 system_pods.go:89] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running
	I0829 20:30:43.959275   66989 system_pods.go:89] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running
	I0829 20:30:43.959279   66989 system_pods.go:89] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running
	I0829 20:30:43.959283   66989 system_pods.go:89] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running
	I0829 20:30:43.959286   66989 system_pods.go:89] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running
	I0829 20:30:43.959290   66989 system_pods.go:89] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running
	I0829 20:30:43.959296   66989 system_pods.go:89] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:30:43.959302   66989 system_pods.go:89] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running
	I0829 20:30:43.959309   66989 system_pods.go:126] duration metric: took 7.361244ms to wait for k8s-apps to be running ...
	I0829 20:30:43.959318   66989 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:30:43.959356   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:30:43.976136   66989 system_svc.go:56] duration metric: took 16.811475ms WaitForService to wait for kubelet
	I0829 20:30:43.976167   66989 kubeadm.go:582] duration metric: took 4m25.091518378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:30:43.976193   66989 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:30:43.979345   66989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:30:43.979376   66989 node_conditions.go:123] node cpu capacity is 2
	I0829 20:30:43.979386   66989 node_conditions.go:105] duration metric: took 3.187489ms to run NodePressure ...
	I0829 20:30:43.979396   66989 start.go:241] waiting for startup goroutines ...
	I0829 20:30:43.979402   66989 start.go:246] waiting for cluster config update ...
	I0829 20:30:43.979414   66989 start.go:255] writing updated cluster config ...
	I0829 20:30:43.979729   66989 ssh_runner.go:195] Run: rm -f paused
	I0829 20:30:44.028715   66989 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:30:44.030675   66989 out.go:177] * Done! kubectl is now configured to use "embed-certs-388383" cluster and "default" namespace by default
	I0829 20:30:39.850811   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:41.850941   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:42.993711   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:45.492729   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:44.351171   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:46.849842   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:48.851125   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:47.494031   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:49.993291   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:51.350926   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:53.850966   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:52.494604   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:54.994054   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:56.350237   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:58.856068   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:56.994483   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:59.494879   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:01.351293   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:03.850415   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:01.994470   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:04.493393   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:05.851663   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:08.350513   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:06.988349   68084 pod_ready.go:82] duration metric: took 4m0.000994859s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" ...
	E0829 20:31:06.988378   68084 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 20:31:06.988396   68084 pod_ready.go:39] duration metric: took 4m13.5587561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:06.988421   68084 kubeadm.go:597] duration metric: took 4m20.63419422s to restartPrimaryControlPlane
	W0829 20:31:06.988470   68084 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:31:06.988492   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:31:10.350782   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:12.851120   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:14.919490   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:31:14.920124   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:14.920395   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:15.350794   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:17.351675   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:19.920740   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:19.920993   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:19.858714   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:22.351208   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:24.851679   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:27.351087   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.177614   68084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.189095849s)
	I0829 20:31:33.177712   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:31:33.202840   68084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:31:33.220648   68084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:31:33.239458   68084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:31:33.239479   68084 kubeadm.go:157] found existing configuration files:
	
	I0829 20:31:33.239519   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 20:31:33.257831   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:31:33.257900   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:31:33.272621   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 20:31:33.287906   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:31:33.287975   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:31:33.302931   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 20:31:33.312359   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:31:33.312411   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:31:33.322850   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 20:31:33.332224   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:31:33.332280   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:31:33.342072   68084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:31:33.388790   68084 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 20:31:33.388844   68084 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:31:33.506108   68084 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:31:33.506263   68084 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:31:33.506403   68084 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:31:33.515467   68084 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:31:29.921355   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:29.921591   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:29.351212   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:31.351683   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.850337   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.517487   68084 out.go:235]   - Generating certificates and keys ...
	I0829 20:31:33.517590   68084 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:31:33.517697   68084 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:31:33.517809   68084 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:31:33.517907   68084 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:31:33.518009   68084 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:31:33.518086   68084 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:31:33.518174   68084 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:31:33.518266   68084 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:31:33.518379   68084 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:31:33.518495   68084 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:31:33.518567   68084 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:31:33.518656   68084 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:31:33.888310   68084 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:31:34.000803   68084 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 20:31:34.103016   68084 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:31:34.461677   68084 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:31:34.617814   68084 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:31:34.618316   68084 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:31:34.622440   68084 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:31:34.624324   68084 out.go:235]   - Booting up control plane ...
	I0829 20:31:34.624428   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:31:34.624527   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:31:34.624882   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:31:34.647388   68084 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:31:34.653776   68084 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:31:34.653864   68084 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:31:34.795338   68084 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 20:31:34.795463   68084 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 20:31:35.797126   68084 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001854627s
	I0829 20:31:35.797253   68084 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 20:31:35.852495   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:37.344608   66841 pod_ready.go:82] duration metric: took 4m0.000461851s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" ...
	E0829 20:31:37.344637   66841 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 20:31:37.344661   66841 pod_ready.go:39] duration metric: took 4m13.033970527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:37.344693   66841 kubeadm.go:597] duration metric: took 4m20.095743839s to restartPrimaryControlPlane
	W0829 20:31:37.344752   66841 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:31:37.344780   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:31:40.799092   68084 kubeadm.go:310] [api-check] The API server is healthy after 5.002121632s
	I0829 20:31:40.813865   68084 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 20:31:40.829677   68084 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 20:31:40.870324   68084 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 20:31:40.870598   68084 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-145096 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 20:31:40.889024   68084 kubeadm.go:310] [bootstrap-token] Using token: gy9sl5.6oyya9sd2gbep67e
	I0829 20:31:40.890947   68084 out.go:235]   - Configuring RBAC rules ...
	I0829 20:31:40.891083   68084 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 20:31:40.898748   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 20:31:40.912914   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 20:31:40.916739   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 20:31:40.923995   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 20:31:40.930447   68084 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 20:31:41.206632   68084 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 20:31:41.679673   68084 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 20:31:42.206707   68084 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 20:31:42.206733   68084 kubeadm.go:310] 
	I0829 20:31:42.206819   68084 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 20:31:42.206830   68084 kubeadm.go:310] 
	I0829 20:31:42.206974   68084 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 20:31:42.206996   68084 kubeadm.go:310] 
	I0829 20:31:42.207018   68084 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 20:31:42.207073   68084 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 20:31:42.207120   68084 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 20:31:42.207127   68084 kubeadm.go:310] 
	I0829 20:31:42.207189   68084 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 20:31:42.207196   68084 kubeadm.go:310] 
	I0829 20:31:42.207234   68084 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 20:31:42.207238   68084 kubeadm.go:310] 
	I0829 20:31:42.207285   68084 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 20:31:42.207382   68084 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 20:31:42.207473   68084 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 20:31:42.207484   68084 kubeadm.go:310] 
	I0829 20:31:42.207611   68084 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 20:31:42.207727   68084 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 20:31:42.207736   68084 kubeadm.go:310] 
	I0829 20:31:42.207854   68084 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token gy9sl5.6oyya9sd2gbep67e \
	I0829 20:31:42.207962   68084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 20:31:42.207983   68084 kubeadm.go:310] 	--control-plane 
	I0829 20:31:42.207986   68084 kubeadm.go:310] 
	I0829 20:31:42.208087   68084 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 20:31:42.208106   68084 kubeadm.go:310] 
	I0829 20:31:42.208214   68084 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token gy9sl5.6oyya9sd2gbep67e \
	I0829 20:31:42.208342   68084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
	I0829 20:31:42.209248   68084 kubeadm.go:310] W0829 20:31:33.349141    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:31:42.209595   68084 kubeadm.go:310] W0829 20:31:33.349919    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:31:42.209769   68084 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:31:42.209803   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:31:42.209817   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:31:42.211545   68084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:31:42.212889   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:31:42.223984   68084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 20:31:42.242703   68084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:31:42.242779   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-145096 minikube.k8s.io/updated_at=2024_08_29T20_31_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=default-k8s-diff-port-145096 minikube.k8s.io/primary=true
	I0829 20:31:42.242779   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:42.448824   68084 ops.go:34] apiserver oom_adj: -16
	I0829 20:31:42.453004   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:42.953891   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:43.453922   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:43.953465   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:44.453647   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:44.954035   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:45.453660   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:45.953536   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:46.046900   68084 kubeadm.go:1113] duration metric: took 3.804195127s to wait for elevateKubeSystemPrivileges
	I0829 20:31:46.046927   68084 kubeadm.go:394] duration metric: took 4m59.74590678s to StartCluster
	I0829 20:31:46.046947   68084 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:31:46.047046   68084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:31:46.048617   68084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:31:46.048876   68084 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:31:46.048979   68084 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:31:46.049063   68084 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049090   68084 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049090   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:31:46.049099   68084 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-145096"
	I0829 20:31:46.049136   68084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-145096"
	W0829 20:31:46.049143   68084 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:31:46.049174   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.049104   68084 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049264   68084 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-145096"
	W0829 20:31:46.049280   68084 addons.go:243] addon metrics-server should already be in state true
	I0829 20:31:46.049335   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.049569   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049574   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049595   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.049599   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.049698   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049722   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.050441   68084 out.go:177] * Verifying Kubernetes components...
	I0829 20:31:46.052039   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:31:46.065735   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39367
	I0829 20:31:46.065909   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32931
	I0829 20:31:46.066241   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.066344   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.066900   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.066918   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.067024   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.067045   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.067438   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.067481   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.067665   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.067902   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.067931   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.069157   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0829 20:31:46.070637   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.070757   68084 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-145096"
	W0829 20:31:46.070771   68084 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:31:46.070803   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.071118   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.071124   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.071132   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.071155   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.071510   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.072052   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.072095   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.085524   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39387
	I0829 20:31:46.085987   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.086553   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.086576   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.086966   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.087138   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.087202   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0829 20:31:46.087621   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.088358   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.088381   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.088708   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.088806   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.089193   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.089363   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.090878   68084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:31:46.091571   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I0829 20:31:46.092208   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.092291   68084 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:31:46.092316   68084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:31:46.092337   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.092660   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.092687   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.093044   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.093230   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.095184   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.096265   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.096792   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.096821   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.097088   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.097274   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.097433   68084 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:31:46.097448   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.097645   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.098681   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:31:46.098697   68084 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:31:46.098715   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.101604   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.101993   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.102014   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.102328   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.102529   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.102687   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.102847   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.108154   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I0829 20:31:46.108627   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.109111   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.109129   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.109446   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.109675   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.111174   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.111440   68084 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:31:46.111452   68084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:31:46.111469   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.114302   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.114805   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.114832   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.114921   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.115102   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.115256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.115400   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.277748   68084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:31:46.297001   68084 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-145096" to be "Ready" ...
	I0829 20:31:46.317473   68084 node_ready.go:49] node "default-k8s-diff-port-145096" has status "Ready":"True"
	I0829 20:31:46.317498   68084 node_ready.go:38] duration metric: took 20.469679ms for node "default-k8s-diff-port-145096" to be "Ready" ...
	I0829 20:31:46.317509   68084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:46.332180   68084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:46.393588   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:31:46.399404   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:31:46.399428   68084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:31:46.453014   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:31:46.460100   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:31:46.460126   68084 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:31:46.541980   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:31:46.542002   68084 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:31:46.607148   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:31:47.296344   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296370   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.296445   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296471   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.296678   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.296722   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.296744   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296764   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.298376   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298379   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298404   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.298412   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298420   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298436   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.298453   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.298464   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.298700   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298726   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298729   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.318720   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.318745   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.319031   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.319053   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.319069   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.870171   68084 pod_ready.go:93] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:47.870198   68084 pod_ready.go:82] duration metric: took 1.537994965s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:47.870208   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.057308   68084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.450120563s)
	I0829 20:31:48.057358   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:48.057371   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:48.057667   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:48.057722   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:48.057734   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:48.057747   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:48.057759   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:48.057989   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:48.058005   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:48.058021   68084 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-145096"
	I0829 20:31:48.059886   68084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0829 20:31:48.061124   68084 addons.go:510] duration metric: took 2.012141801s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0829 20:31:48.875874   68084 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:48.875897   68084 pod_ready.go:82] duration metric: took 1.005682325s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.875912   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.879828   68084 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:48.879846   68084 pod_ready.go:82] duration metric: took 3.928263ms for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.879863   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:50.886764   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:49.922318   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:49.922554   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:52.887708   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:55.387571   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:55.886194   68084 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:55.886217   68084 pod_ready.go:82] duration metric: took 7.006347256s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:55.886225   68084 pod_ready.go:39] duration metric: took 9.568704494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
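The pod_ready waits above (etcd, then each control-plane pod) all reduce to polling one pod condition until it flips to True. A sketch assuming a recent client-go/apimachinery (wait.PollUntilContextTimeout exists in current releases); the helper name is hypothetical:

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod's PodReady condition is True or the timeout expires.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient lookup errors while the pod appears
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }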
	I0829 20:31:55.886238   68084 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:31:55.886286   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:31:55.901604   68084 api_server.go:72] duration metric: took 9.852691692s to wait for apiserver process to appear ...
	I0829 20:31:55.901628   68084 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:31:55.901643   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:31:55.905564   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 200:
	ok
	I0829 20:31:55.906387   68084 api_server.go:141] control plane version: v1.31.0
	I0829 20:31:55.906406   68084 api_server.go:131] duration metric: took 4.772472ms to wait for apiserver health ...
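The healthz check above is an HTTPS GET that must return 200 with body "ok". Reproduced as a sketch; a real client should trust the cluster CA rather than skip verification:

    package main

    import (
        "crypto/tls"
        "io"
        "net/http"
        "time"
    )

    // apiserverHealthy probes e.g. "https://192.168.72.140:8444/healthz".
    func apiserverHealthy(url string) (bool, error) {
        tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}} // sketch only
        client := &http.Client{Transport: tr, Timeout: 5 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }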
	I0829 20:31:55.906413   68084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:31:55.911423   68084 system_pods.go:59] 9 kube-system pods found
	I0829 20:31:55.911444   68084 system_pods.go:61] "coredns-6f6b679f8f-l25kd" [86947930-0d47-407a-b876-b482596fbe8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.911451   68084 system_pods.go:61] "coredns-6f6b679f8f-lnm92" [a6caefe0-e883-4460-87de-25ee97191e1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.911458   68084 system_pods.go:61] "etcd-default-k8s-diff-port-145096" [caba3f17-6544-4fe0-8dd3-0dd95e8df8ce] Running
	I0829 20:31:55.911465   68084 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-145096" [9b1ca00a-613b-414f-81e9-601d53d43207] Running
	I0829 20:31:55.911470   68084 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-145096" [e7145779-85cf-458d-9870-6fda4853d29d] Running
	I0829 20:31:55.911479   68084 system_pods.go:61] "kube-proxy-ptswc" [96c01414-e8e8-4731-824b-11d636285fb3] Running
	I0829 20:31:55.911488   68084 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-145096" [0d2cc607-72ac-4417-8a7c-196bf3ec90d7] Running
	I0829 20:31:55.911495   68084 system_pods.go:61] "metrics-server-6867b74b74-6sdqg" [2c9efadb-89bb-4aa6-b0f0-ddcb3e931674] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:31:55.911503   68084 system_pods.go:61] "storage-provisioner" [81531989-d045-44fb-b1a1-0817af27c804] Running
	I0829 20:31:55.911512   68084 system_pods.go:74] duration metric: took 5.092824ms to wait for pod list to return data ...
	I0829 20:31:55.911523   68084 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:31:55.913794   68084 default_sa.go:45] found service account: "default"
	I0829 20:31:55.913820   68084 default_sa.go:55] duration metric: took 2.286925ms for default service account to be created ...
	I0829 20:31:55.913830   68084 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:31:55.919628   68084 system_pods.go:86] 9 kube-system pods found
	I0829 20:31:55.919666   68084 system_pods.go:89] "coredns-6f6b679f8f-l25kd" [86947930-0d47-407a-b876-b482596fbe8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.919677   68084 system_pods.go:89] "coredns-6f6b679f8f-lnm92" [a6caefe0-e883-4460-87de-25ee97191e1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.919686   68084 system_pods.go:89] "etcd-default-k8s-diff-port-145096" [caba3f17-6544-4fe0-8dd3-0dd95e8df8ce] Running
	I0829 20:31:55.919693   68084 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-145096" [9b1ca00a-613b-414f-81e9-601d53d43207] Running
	I0829 20:31:55.919699   68084 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-145096" [e7145779-85cf-458d-9870-6fda4853d29d] Running
	I0829 20:31:55.919704   68084 system_pods.go:89] "kube-proxy-ptswc" [96c01414-e8e8-4731-824b-11d636285fb3] Running
	I0829 20:31:55.919710   68084 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-145096" [0d2cc607-72ac-4417-8a7c-196bf3ec90d7] Running
	I0829 20:31:55.919718   68084 system_pods.go:89] "metrics-server-6867b74b74-6sdqg" [2c9efadb-89bb-4aa6-b0f0-ddcb3e931674] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:31:55.919725   68084 system_pods.go:89] "storage-provisioner" [81531989-d045-44fb-b1a1-0817af27c804] Running
	I0829 20:31:55.919734   68084 system_pods.go:126] duration metric: took 5.897752ms to wait for k8s-apps to be running ...
	I0829 20:31:55.919745   68084 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:31:55.919800   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:31:55.935429   68084 system_svc.go:56] duration metric: took 15.676316ms WaitForService to wait for kubelet
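The kubelet service check relies on systemctl's exit status alone ("--quiet" suppresses all output): 0 means the unit is active, anything else means it is not. An equivalent sketch:

    package main

    import "os/exec"

    // kubeletActive is true when "systemctl is-active --quiet kubelet" exits 0.
    func kubeletActive() bool {
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }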
	I0829 20:31:55.935460   68084 kubeadm.go:582] duration metric: took 9.886551311s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:31:55.935483   68084 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:31:55.938444   68084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:31:55.938466   68084 node_conditions.go:123] node cpu capacity is 2
	I0829 20:31:55.938476   68084 node_conditions.go:105] duration metric: took 2.988434ms to run NodePressure ...
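The NodePressure figures (17734596Ki of ephemeral storage, 2 CPUs) come straight from the node's reported capacity. A client-go sketch of reading them (helper name hypothetical):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printCapacity prints the capacity fields the NodePressure check logs.
    func printCapacity(ctx context.Context, cs kubernetes.Interface, name string) error {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), eph.String())
        return nil
    }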
	I0829 20:31:55.938486   68084 start.go:241] waiting for startup goroutines ...
	I0829 20:31:55.938493   68084 start.go:246] waiting for cluster config update ...
	I0829 20:31:55.938503   68084 start.go:255] writing updated cluster config ...
	I0829 20:31:55.938834   68084 ssh_runner.go:195] Run: rm -f paused
	I0829 20:31:55.987879   68084 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:31:55.989766   68084 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-145096" cluster and "default" namespace by default
	I0829 20:32:03.506190   66841 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.161387814s)
	I0829 20:32:03.506268   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:03.530660   66841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:32:03.550784   66841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:32:03.565054   66841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:32:03.565085   66841 kubeadm.go:157] found existing configuration files:
	
	I0829 20:32:03.565131   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:32:03.586492   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:32:03.586577   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:32:03.605061   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:32:03.617990   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:32:03.618054   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:32:03.635587   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:32:03.645495   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:32:03.645559   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:32:03.655081   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:32:03.664640   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:32:03.664703   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
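The cleanup loop above leans on grep's exit-code convention: 0 is a match, 1 is no match, and 2 is an error such as a missing file, which is why "Process exited with status 2" is read as "config file absent, remove it and move on". A sketch of surfacing that code in Go:

    package main

    import (
        "errors"
        "os/exec"
    )

    // grepStatus returns grep's exit code: 0 match, 1 no match, 2 error (e.g. missing file).
    func grepStatus(pattern, file string) (int, error) {
        err := exec.Command("grep", pattern, file).Run()
        if err == nil {
            return 0, nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            return ee.ExitCode(), nil
        }
        return -1, err // grep never ran at all
    }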
	I0829 20:32:03.674097   66841 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:32:03.721087   66841 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 20:32:03.721155   66841 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:32:03.839829   66841 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:32:03.839985   66841 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:32:03.840079   66841 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:32:03.849047   66841 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:32:03.850883   66841 out.go:235]   - Generating certificates and keys ...
	I0829 20:32:03.850970   66841 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:32:03.851045   66841 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:32:03.851129   66841 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:32:03.851222   66841 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:32:03.851292   66841 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:32:03.851340   66841 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:32:03.851399   66841 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:32:03.851450   66841 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:32:03.851515   66841 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:32:03.851620   66841 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:32:03.851687   66841 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:32:03.851755   66841 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:32:03.968189   66841 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:32:04.253016   66841 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 20:32:04.341190   66841 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:32:04.491607   66841 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:32:04.616753   66841 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:32:04.617354   66841 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:32:04.619961   66841 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:32:04.621690   66841 out.go:235]   - Booting up control plane ...
	I0829 20:32:04.621799   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:32:04.621910   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:32:04.622021   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:32:04.643758   66841 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:32:04.650541   66841 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:32:04.650612   66841 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:32:04.786596   66841 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 20:32:04.786755   66841 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 20:32:05.788381   66841 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001614523s
	I0829 20:32:05.788512   66841 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 20:32:10.789752   66841 kubeadm.go:310] [api-check] The API server is healthy after 5.001571241s
	I0829 20:32:10.803237   66841 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 20:32:10.822640   66841 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 20:32:10.845744   66841 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 20:32:10.846050   66841 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-397724 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 20:32:10.856315   66841 kubeadm.go:310] [bootstrap-token] Using token: 3k2s43.7gy6mzkt91kkied7
	I0829 20:32:10.857834   66841 out.go:235]   - Configuring RBAC rules ...
	I0829 20:32:10.857947   66841 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 20:32:10.867339   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 20:32:10.876522   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 20:32:10.879786   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 20:32:10.885043   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 20:32:10.892077   66841 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 20:32:11.196796   66841 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 20:32:11.630072   66841 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 20:32:12.200197   66841 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 20:32:12.200232   66841 kubeadm.go:310] 
	I0829 20:32:12.200314   66841 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 20:32:12.200326   66841 kubeadm.go:310] 
	I0829 20:32:12.200406   66841 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 20:32:12.200416   66841 kubeadm.go:310] 
	I0829 20:32:12.200450   66841 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 20:32:12.200536   66841 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 20:32:12.200606   66841 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 20:32:12.200616   66841 kubeadm.go:310] 
	I0829 20:32:12.200687   66841 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 20:32:12.200700   66841 kubeadm.go:310] 
	I0829 20:32:12.200744   66841 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 20:32:12.200750   66841 kubeadm.go:310] 
	I0829 20:32:12.200793   66841 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 20:32:12.200861   66841 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 20:32:12.200918   66841 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 20:32:12.200924   66841 kubeadm.go:310] 
	I0829 20:32:12.201048   66841 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 20:32:12.201144   66841 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 20:32:12.201152   66841 kubeadm.go:310] 
	I0829 20:32:12.201255   66841 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3k2s43.7gy6mzkt91kkied7 \
	I0829 20:32:12.201373   66841 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 20:32:12.201400   66841 kubeadm.go:310] 	--control-plane 
	I0829 20:32:12.201411   66841 kubeadm.go:310] 
	I0829 20:32:12.201487   66841 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 20:32:12.201495   66841 kubeadm.go:310] 
	I0829 20:32:12.201574   66841 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3k2s43.7gy6mzkt91kkied7 \
	I0829 20:32:12.201710   66841 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
	I0829 20:32:12.202900   66841 kubeadm.go:310] W0829 20:32:03.691334    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:32:12.203223   66841 kubeadm.go:310] W0829 20:32:03.692151    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:32:12.203339   66841 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
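The --discovery-token-ca-cert-hash printed in the join command above is a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate, which is kubeadm's documented scheme. A sketch that recomputes it from ca.crt (the helper itself is illustrative):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash reproduces the "sha256:..." discovery hash from a CA cert PEM.
    func caCertHash(caPath string) (string, error) {
        pemBytes, err := os.ReadFile(caPath) // e.g. /etc/kubernetes/pki/ca.crt
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", caPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(spki)
        return "sha256:" + hex.EncodeToString(sum[:]), nil
    }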
	I0829 20:32:12.203366   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:32:12.203381   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:32:12.205733   66841 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:32:12.206905   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:32:12.218121   66841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
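The 496-byte conflist written to /etc/cni/net.d/1-k8s.conflist is not shown in the log; a representative bridge-plus-portmap conflist of roughly that shape looks like the following (values illustrative, not minikube's exact file):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }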
	I0829 20:32:12.237885   66841 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:32:12.237989   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:12.238006   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-397724 minikube.k8s.io/updated_at=2024_08_29T20_32_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=no-preload-397724 minikube.k8s.io/primary=true
	I0829 20:32:12.282191   66841 ops.go:34] apiserver oom_adj: -16
	I0829 20:32:12.430006   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:12.930327   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:13.430210   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:13.930065   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:14.430163   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:14.930189   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:15.430677   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:15.930670   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:16.430943   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:16.549095   66841 kubeadm.go:1113] duration metric: took 4.311165714s to wait for elevateKubeSystemPrivileges
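The half-second loop above ("kubectl get sa default" every 500ms) is how the elevateKubeSystemPrivileges step knows RBAC bootstrapping has finished: once the default ServiceAccount exists, the minikube-rbac clusterrolebinding created earlier is usable. An in-process equivalent sketch with client-go (helper name hypothetical):

    package main

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitDefaultSA polls until the "default" ServiceAccount exists in the default namespace.
    func waitDefaultSA(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return false, nil
                }
                return err == nil, err
            })
    }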
	I0829 20:32:16.549136   66841 kubeadm.go:394] duration metric: took 4m59.355577107s to StartCluster
	I0829 20:32:16.549156   66841 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:32:16.549229   66841 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:32:16.550926   66841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:32:16.551141   66841 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:32:16.551202   66841 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:32:16.551291   66841 addons.go:69] Setting storage-provisioner=true in profile "no-preload-397724"
	I0829 20:32:16.551315   66841 addons.go:69] Setting default-storageclass=true in profile "no-preload-397724"
	I0829 20:32:16.551329   66841 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:32:16.551340   66841 addons.go:69] Setting metrics-server=true in profile "no-preload-397724"
	I0829 20:32:16.551389   66841 addons.go:234] Setting addon metrics-server=true in "no-preload-397724"
	W0829 20:32:16.551404   66841 addons.go:243] addon metrics-server should already be in state true
	I0829 20:32:16.551442   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.551360   66841 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-397724"
	I0829 20:32:16.551324   66841 addons.go:234] Setting addon storage-provisioner=true in "no-preload-397724"
	W0829 20:32:16.551673   66841 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:32:16.551705   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.551872   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.551873   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.551908   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.551929   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.552036   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.552065   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.552634   66841 out.go:177] * Verifying Kubernetes components...
	I0829 20:32:16.553973   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:32:16.567797   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43335
	I0829 20:32:16.568321   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.568884   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.568910   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.569328   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.569941   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.569978   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.573055   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0829 20:32:16.573642   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0829 20:32:16.573770   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.574303   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.574321   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.574394   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.574913   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.574933   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.574935   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.575471   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.575511   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.575724   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.575950   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.579912   66841 addons.go:234] Setting addon default-storageclass=true in "no-preload-397724"
	W0829 20:32:16.579932   66841 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:32:16.579960   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.580281   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.580298   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.591264   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
	I0829 20:32:16.591442   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0829 20:32:16.591777   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.591827   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.592275   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.592289   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.592289   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.592307   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.592702   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.592726   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.592881   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.592882   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.594494   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.594956   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.596431   66841 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:32:16.596433   66841 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:32:16.597503   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:32:16.597524   66841 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:32:16.597547   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.597607   66841 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:32:16.597625   66841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:32:16.597641   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.598780   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32841
	I0829 20:32:16.599272   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.599915   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.599937   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.601210   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.601613   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.601965   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.602159   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.602190   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.602328   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.602867   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.602998   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.603188   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.603234   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.603287   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.603434   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.603487   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.603691   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.603708   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.603857   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.603977   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.619336   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0829 20:32:16.619806   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.620269   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.620286   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.620604   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.620818   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.622348   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.622563   66841 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:32:16.622580   66841 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:32:16.622597   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.625203   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.625542   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.625570   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.625746   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.625934   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.626094   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.626266   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.787525   66841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:32:16.817674   66841 node_ready.go:35] waiting up to 6m0s for node "no-preload-397724" to be "Ready" ...
	I0829 20:32:16.833992   66841 node_ready.go:49] node "no-preload-397724" has status "Ready":"True"
	I0829 20:32:16.834030   66841 node_ready.go:38] duration metric: took 16.322874ms for node "no-preload-397724" to be "Ready" ...
	I0829 20:32:16.834042   66841 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:32:16.843147   66841 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:16.902589   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:32:16.902613   66841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:32:16.902859   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:32:16.903193   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:32:16.922497   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:32:16.922518   66841 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:32:16.966207   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:32:16.966240   66841 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:32:17.004882   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:32:17.204576   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.204613   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.204968   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.204987   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.204995   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.204994   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.205002   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.205261   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.205278   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.211789   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.211811   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.212074   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.212089   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.212119   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.902866   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.902897   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.903218   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.903266   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.903278   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.903286   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.903296   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.903556   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.903572   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.344211   66841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.33928059s)
	I0829 20:32:18.344259   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:18.344274   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:18.344571   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:18.344589   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.344611   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:18.344626   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:18.344948   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:18.344980   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:18.345010   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.345025   66841 addons.go:475] Verifying addon metrics-server=true in "no-preload-397724"
	I0829 20:32:18.346919   66841 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 20:32:18.348704   66841 addons.go:510] duration metric: took 1.797503952s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0829 20:32:18.850832   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:18.850853   66841 pod_ready.go:82] duration metric: took 2.007683093s for pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:18.850862   66841 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.357679   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.357702   66841 pod_ready.go:82] duration metric: took 1.506832539s for pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.357710   66841 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.361830   66841 pod_ready.go:93] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.361854   66841 pod_ready.go:82] duration metric: took 4.136801ms for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.361865   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.365719   66841 pod_ready.go:93] pod "kube-apiserver-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.365733   66841 pod_ready.go:82] duration metric: took 3.861894ms for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.365741   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.369596   66841 pod_ready.go:93] pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.369611   66841 pod_ready.go:82] duration metric: took 3.864669ms for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.369619   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f4x4j" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.447788   66841 pod_ready.go:93] pod "kube-proxy-f4x4j" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.447812   66841 pod_ready.go:82] duration metric: took 78.187574ms for pod "kube-proxy-f4x4j" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.447823   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:22.049084   66841 pod_ready.go:93] pod "kube-scheduler-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:22.049105   66841 pod_ready.go:82] duration metric: took 1.601276793s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:22.049113   66841 pod_ready.go:39] duration metric: took 5.215058301s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:32:22.049125   66841 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:32:22.049172   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:32:22.066060   66841 api_server.go:72] duration metric: took 5.514888299s to wait for apiserver process to appear ...
	I0829 20:32:22.066086   66841 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:32:22.066109   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:32:22.072343   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I0829 20:32:22.073798   66841 api_server.go:141] control plane version: v1.31.0
	I0829 20:32:22.073821   66841 api_server.go:131] duration metric: took 7.728095ms to wait for apiserver health ...
	I0829 20:32:22.073828   66841 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:32:22.252273   66841 system_pods.go:59] 9 kube-system pods found
	I0829 20:32:22.252302   66841 system_pods.go:61] "coredns-6f6b679f8f-crgtj" [c48571a8-18ae-4737-a05b-4a77736aee35] Running
	I0829 20:32:22.252309   66841 system_pods.go:61] "coredns-6f6b679f8f-dw2r7" [6edda799-e2d6-402b-b4cd-7e54b2b89ca5] Running
	I0829 20:32:22.252315   66841 system_pods.go:61] "etcd-no-preload-397724" [15473208-a76c-4bc5-810f-e78d59538493] Running
	I0829 20:32:22.252320   66841 system_pods.go:61] "kube-apiserver-no-preload-397724" [521c6041-888f-4145-aabb-54da7382953d] Running
	I0829 20:32:22.252325   66841 system_pods.go:61] "kube-controller-manager-no-preload-397724" [fd5afaf8-898d-4985-8efc-5628709a52cd] Running
	I0829 20:32:22.252329   66841 system_pods.go:61] "kube-proxy-f4x4j" [eb76dc5a-016a-416c-8880-f76fc2d2a9bb] Running
	I0829 20:32:22.252333   66841 system_pods.go:61] "kube-scheduler-no-preload-397724" [77d9e2de-ee8e-4cb2-a7f0-5d9b96bd9691] Running
	I0829 20:32:22.252342   66841 system_pods.go:61] "metrics-server-6867b74b74-nxdc5" [6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:32:22.252348   66841 system_pods.go:61] "storage-provisioner" [8b6c02d6-7a39-4fea-80b4-4ba02904232c] Running
	I0829 20:32:22.252358   66841 system_pods.go:74] duration metric: took 178.523887ms to wait for pod list to return data ...
	I0829 20:32:22.252370   66841 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:32:22.448475   66841 default_sa.go:45] found service account: "default"
	I0829 20:32:22.448499   66841 default_sa.go:55] duration metric: took 196.123693ms for default service account to be created ...
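Checking for the default service account is a single GET; it exists once kube-controller-manager has populated the namespace. A minimal client-go sketch of that check (the kubeconfig path is an assumption, and this is not minikube's code):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// The "default" ServiceAccount in the "default" namespace is created by
    	// kube-controller-manager shortly after the control plane comes up.
    	sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
    	if err != nil {
    		fmt.Println("not created yet:", err)
    		return
    	}
    	fmt.Println("found service account:", sa.Name)
    }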
	I0829 20:32:22.448508   66841 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:32:22.650996   66841 system_pods.go:86] 9 kube-system pods found
	I0829 20:32:22.651023   66841 system_pods.go:89] "coredns-6f6b679f8f-crgtj" [c48571a8-18ae-4737-a05b-4a77736aee35] Running
	I0829 20:32:22.651029   66841 system_pods.go:89] "coredns-6f6b679f8f-dw2r7" [6edda799-e2d6-402b-b4cd-7e54b2b89ca5] Running
	I0829 20:32:22.651033   66841 system_pods.go:89] "etcd-no-preload-397724" [15473208-a76c-4bc5-810f-e78d59538493] Running
	I0829 20:32:22.651037   66841 system_pods.go:89] "kube-apiserver-no-preload-397724" [521c6041-888f-4145-aabb-54da7382953d] Running
	I0829 20:32:22.651042   66841 system_pods.go:89] "kube-controller-manager-no-preload-397724" [fd5afaf8-898d-4985-8efc-5628709a52cd] Running
	I0829 20:32:22.651045   66841 system_pods.go:89] "kube-proxy-f4x4j" [eb76dc5a-016a-416c-8880-f76fc2d2a9bb] Running
	I0829 20:32:22.651048   66841 system_pods.go:89] "kube-scheduler-no-preload-397724" [77d9e2de-ee8e-4cb2-a7f0-5d9b96bd9691] Running
	I0829 20:32:22.651054   66841 system_pods.go:89] "metrics-server-6867b74b74-nxdc5" [6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:32:22.651058   66841 system_pods.go:89] "storage-provisioner" [8b6c02d6-7a39-4fea-80b4-4ba02904232c] Running
	I0829 20:32:22.651065   66841 system_pods.go:126] duration metric: took 202.552304ms to wait for k8s-apps to be running ...
	I0829 20:32:22.651071   66841 system_svc.go:44] waiting for kubelet service to be running ...
	I0829 20:32:22.651111   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:22.666831   66841 system_svc.go:56] duration metric: took 15.753046ms for WaitForService to wait for kubelet
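The kubelet service check above relies only on the exit code: `systemctl is-active --quiet <unit>` prints nothing and exits 0 iff the unit is active. A minimal sketch of the same idea via os/exec (the exact argument order minikube uses differs slightly; this is standard systemctl usage):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Exits 0 iff the unit is active; no output is produced,
    	// so only the exit status matters.
    	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
    	if err != nil {
    		fmt.Println("kubelet service is not active:", err)
    		return
    	}
    	fmt.Println("kubelet service is active")
    }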
	I0829 20:32:22.666863   66841 kubeadm.go:582] duration metric: took 6.115692499s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:32:22.666888   66841 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:32:22.848742   66841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:32:22.848766   66841 node_conditions.go:123] node cpu capacity is 2
	I0829 20:32:22.848777   66841 node_conditions.go:105] duration metric: took 181.884368ms to run NodePressure ...
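The NodePressure step reads capacity straight off the Node object. A client-go sketch of extracting the same two values seen in the log (ephemeral storage and CPU); assuming a reachable kubeconfig, and not minikube's own code:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    	}
    }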
	I0829 20:32:22.848787   66841 start.go:241] waiting for startup goroutines ...
	I0829 20:32:22.848794   66841 start.go:246] waiting for cluster config update ...
	I0829 20:32:22.848803   66841 start.go:255] writing updated cluster config ...
	I0829 20:32:22.849030   66841 ssh_runner.go:195] Run: rm -f paused
	I0829 20:32:22.897503   66841 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:32:22.899404   66841 out.go:177] * Done! kubectl is now configured to use "no-preload-397724" cluster and "default" namespace by default
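The "minor skew: 0" note compares the kubectl client's minor version with the cluster's; kubectl officially supports one minor version of skew in either direction. A toy sketch of that comparison over two version strings (the naive parsing is a simplification; minikube uses a semver library):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor component from a "major.minor.patch" version string.
    func minor(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	if len(parts) < 2 {
    		return -1
    	}
    	m, err := strconv.Atoi(parts[1])
    	if err != nil {
    		return -1
    	}
    	return m
    }

    func main() {
    	client, cluster := "1.31.0", "1.31.0"
    	skew := minor(client) - minor(cluster)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
    }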
	I0829 20:32:29.924469   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:32:29.924707   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:32:29.924729   67607 kubeadm.go:310] 
	I0829 20:32:29.924801   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:32:29.924855   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:32:29.924865   67607 kubeadm.go:310] 
	I0829 20:32:29.924912   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:32:29.924960   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:32:29.925080   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:32:29.925090   67607 kubeadm.go:310] 
	I0829 20:32:29.925207   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:32:29.925256   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:32:29.925316   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:32:29.925342   67607 kubeadm.go:310] 
	I0829 20:32:29.925493   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:32:29.925616   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0829 20:32:29.925627   67607 kubeadm.go:310] 
	I0829 20:32:29.925776   67607 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:32:29.925909   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:32:29.926016   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:32:29.926134   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:32:29.926154   67607 kubeadm.go:310] 
	I0829 20:32:29.926605   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:32:29.926723   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:32:29.926812   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
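The repeated kubelet-check failures above are kubeadm polling the kubelet's local healthz endpoint; the kubelet serves it unauthenticated on 127.0.0.1:10248 by default, so "connection refused" means the kubelet process is not up at all. A minimal sketch of the same probe:

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	// "connection refused" here means the kubelet is not running;
    	// a 200 with body "ok" means it is healthy.
    	resp, err := client.Get("http://localhost:10248/healthz")
    	if err != nil {
    		fmt.Println("kubelet healthz failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
    }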
	W0829 20:32:29.926935   67607 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0829 20:32:29.926979   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:32:30.389951   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:30.408455   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:32:30.418493   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:32:30.418513   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:32:30.418582   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:32:30.427909   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:32:30.427957   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:32:30.437122   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:32:30.446157   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:32:30.446203   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:32:30.455480   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.464781   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:32:30.464834   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.474607   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:32:30.484537   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:32:30.484601   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
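The grep/rm sequence above implements a simple rule: any leftover kubeconfig that does not mention the expected control-plane endpoint is deleted so that the retried kubeadm init regenerates it. A pure-Go sketch of that rule, with the paths and endpoint taken from the log (an illustration, not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or pointing at a different endpoint: remove it so
    			// kubeadm init regenerates it. Errors from os.Remove on a
    			// missing file are deliberately ignored, like `rm -f`.
    			_ = os.Remove(f)
    			fmt.Println("removed stale config:", f)
    			continue
    		}
    		fmt.Println("keeping:", f)
    	}
    }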
	I0829 20:32:30.494170   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:32:30.717349   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:34:26.784436   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:34:26.784518   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 20:34:26.786158   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:34:26.786196   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:34:26.786276   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:34:26.786353   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:34:26.786437   67607 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:34:26.786486   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:34:26.788271   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:34:26.788380   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:34:26.788453   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:34:26.788523   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:34:26.788593   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:34:26.788665   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:34:26.788714   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:34:26.788769   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:34:26.788826   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:34:26.788894   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:34:26.788961   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:34:26.788993   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:34:26.789044   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:34:26.789084   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:34:26.789143   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:34:26.789228   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:34:26.789312   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:34:26.789441   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:34:26.789577   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:34:26.789647   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:34:26.789717   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:34:26.791166   67607 out.go:235]   - Booting up control plane ...
	I0829 20:34:26.791239   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:34:26.791305   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:34:26.791382   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:34:26.791465   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:34:26.791597   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:34:26.791658   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:34:26.791736   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.791926   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792008   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792182   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792254   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792435   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792492   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792725   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792798   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.793026   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.793043   67607 kubeadm.go:310] 
	I0829 20:34:26.793091   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:34:26.793148   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:34:26.793159   67607 kubeadm.go:310] 
	I0829 20:34:26.793188   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:34:26.793219   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:34:26.793305   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:34:26.793314   67607 kubeadm.go:310] 
	I0829 20:34:26.793438   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:34:26.793483   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:34:26.793515   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:34:26.793522   67607 kubeadm.go:310] 
	I0829 20:34:26.793618   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:34:26.793735   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0829 20:34:26.793748   67607 kubeadm.go:310] 
	I0829 20:34:26.793895   67607 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:34:26.794020   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:34:26.794125   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:34:26.794227   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:34:26.794285   67607 kubeadm.go:310] 
	I0829 20:34:26.794300   67607 kubeadm.go:394] duration metric: took 7m57.183485424s to StartCluster
	I0829 20:34:26.794357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:34:26.794410   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:34:26.837033   67607 cri.go:89] found id: ""
	I0829 20:34:26.837072   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.837083   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:34:26.837091   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:34:26.837153   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:34:26.871177   67607 cri.go:89] found id: ""
	I0829 20:34:26.871203   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.871213   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:34:26.871220   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:34:26.871280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:34:26.905409   67607 cri.go:89] found id: ""
	I0829 20:34:26.905432   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.905442   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:34:26.905450   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:34:26.905509   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:34:26.940119   67607 cri.go:89] found id: ""
	I0829 20:34:26.940150   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.940161   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:34:26.940169   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:34:26.940217   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:34:26.974555   67607 cri.go:89] found id: ""
	I0829 20:34:26.974589   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.974601   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:34:26.974608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:34:26.974674   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:34:27.010586   67607 cri.go:89] found id: ""
	I0829 20:34:27.010616   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.010631   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:34:27.010639   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:34:27.010704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:34:27.044867   67607 cri.go:89] found id: ""
	I0829 20:34:27.044900   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.044913   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:34:27.044921   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:34:27.044979   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:34:27.079282   67607 cri.go:89] found id: ""
	I0829 20:34:27.079308   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.079316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
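The post-mortem above runs one `crictl ps -a --quiet --name=<component>` per control-plane component and records whether any container IDs come back; every check returning nothing confirms that no control-plane containers were ever created. A sketch of the shape of that loop using os/exec (an illustration, not minikube's cri package):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// --quiet prints only container IDs, one per line; empty output
    		// means no container (running or exited) matched the name.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		ids := strings.Fields(string(out))
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
    	}
    }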
	I0829 20:34:27.079323   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:34:27.079335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:34:27.093455   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:34:27.093485   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:34:27.179256   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:34:27.179280   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:34:27.179292   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:34:27.305873   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:34:27.305906   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:34:27.349676   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:34:27.349702   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
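The log-gathering pass above is a handful of shell commands whose combined output is folded into the report. A compact sketch of running them and capturing stdout and stderr together, reusing the exact commands from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmds := map[string]string{
    		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"CRI-O":   "sudo journalctl -u crio -n 400",
    		"kubelet": "sudo journalctl -u kubelet -n 400",
    	}
    	for name, cmd := range cmds {
    		// CombinedOutput captures stdout and stderr together, which is
    		// what a post-mortem dump wants.
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("==> %s <==\n%s", name, out)
    		if err != nil {
    			fmt.Printf("(command failed: %v)\n", err)
    		}
    	}
    }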
	W0829 20:34:27.399787   67607 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout/stderr: identical to the first kubeadm init attempt above (kubelet-check timed out against http://localhost:10248/healthz; kubelet not running)
	W0829 20:34:27.399851   67607 out.go:270] * 
	W0829 20:34:27.399907   67607 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout/stderr: identical to the first kubeadm init attempt above (kubelet-check timed out against http://localhost:10248/healthz; kubelet not running)
	W0829 20:34:27.399919   67607 out.go:270] * 
	W0829 20:34:27.400631   67607 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 20:34:27.403773   67607 out.go:201] 
	W0829 20:34:27.404902   67607 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout/stderr: identical to the first kubeadm init attempt above (kubelet-check timed out against http://localhost:10248/healthz; kubelet not running)
	W0829 20:34:27.404953   67607 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 20:34:27.404981   67607 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
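The suggestion above concerns a cgroup-driver mismatch between the kubelet and CRI-O. A quick way to see what the kubelet is configured with is to read the `cgroupDriver` field out of its config file; this naive sketch uses string scanning rather than YAML parsing, and assumes the path minikube writes the KubeletConfiguration to:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/var/lib/kubelet/config.yaml")
    	if err != nil {
    		fmt.Println("cannot read kubelet config:", err)
    		return
    	}
    	defer f.Close()
    	scanner := bufio.NewScanner(f)
    	for scanner.Scan() {
    		line := strings.TrimSpace(scanner.Text())
    		// KubeletConfiguration exposes the driver as a top-level
    		// `cgroupDriver:` key ("systemd" or "cgroupfs").
    		if strings.HasPrefix(line, "cgroupDriver:") {
    			fmt.Println(line)
    			return
    		}
    	}
    	fmt.Println("cgroupDriver not set explicitly (kubelet default applies)")
    }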
	I0829 20:34:27.406310   67607 out.go:201] 
	
	
	==> CRI-O <==
	Aug 29 20:39:45 embed-certs-388383 crio[709]: time="2024-08-29 20:39:45.988574096Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963985988549156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7df0c9c4-99b5-4209-b53f-4d9d3c066edd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:39:45 embed-certs-388383 crio[709]: time="2024-08-29 20:39:45.989212318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2c4228d-3eac-4c43-a112-fd7706dc25e5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:39:45 embed-certs-388383 crio[709]: time="2024-08-29 20:39:45.989324882Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2c4228d-3eac-4c43-a112-fd7706dc25e5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:39:45 embed-certs-388383 crio[709]: time="2024-08-29 20:39:45.989531045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c,PodSandboxId:a810a1883fa2153dd0ea4d4610ac786f482173217642cb721e9002cd3067cb8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963208530970511,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 021ca156-b7a8-4647-8efe-db17968fd5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a10b28433da5bdb319544fbbc9449beb70470488d9e9a102a9ed6c411ba287,PodSandboxId:427a0e97e1cbd12443c2eccf76f1bbf66ba802409a795078ab329e50b2eef553,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724963185645627430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfe9fc37-9a64-407f-a902-5c1930185329,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71,PodSandboxId:ae023455e0e752eefc42fe1c79d92263baddd8d005d5f884f1cbac804b34944f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963184073926038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dg6t6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e89b20-ebf4-4738-8ca7-9dc2a0e5653a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f,PodSandboxId:3ecac4e426a3fa7f318bd71d405df2ce85fdea202a3c2269a3cc3a1477b47195,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724963176556139708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcxs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649b40c8-4f4b-40d1-8
179-baf378d4c7d7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523,PodSandboxId:a810a1883fa2153dd0ea4d4610ac786f482173217642cb721e9002cd3067cb8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724963176569533845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 021ca156-b7a8-4647-8efe-db17968fd5
a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6,PodSandboxId:06254346ea7b63bf3f5e493c87303ff8466c5e760eb6be7a459739c8b6afcdea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963172056670659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec2716a5120d1ef3772dcd74efb323d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334,PodSandboxId:d4eec8556a65326856948490a316f317e48b6432fc8183880a6beeea180729d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963172045337704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1415001102c1c7a568af0d1f29aa8cdf,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313,PodSandboxId:db6bfd18987e98a1215e5ccd4fc8a9e4cca3c49a71d7f0c6eee5e32d73e4ab8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963172033169836,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38b8ff96a68e3d306887164202ee858,},Annotations:map[string]string{io.kubernetes.container.hash: f
72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd,PodSandboxId:d11262c649c5d3ad292919838dbc5b6b048d8c093d6923bebc7ae6a9bcbbe897,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963172021833091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293f45c954640c40483589dcd8cdc726,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2c4228d-3eac-4c43-a112-fd7706dc25e5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:39:46 embed-certs-388383 crio[709]: time="2024-08-29 20:39:46.027978825Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7d314cc-bed9-41c9-ad25-547c49dbe13b name=/runtime.v1.RuntimeService/Version
	Aug 29 20:39:46 embed-certs-388383 crio[709]: time="2024-08-29 20:39:46.028050270Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7d314cc-bed9-41c9-ad25-547c49dbe13b name=/runtime.v1.RuntimeService/Version
	Aug 29 20:39:46 embed-certs-388383 crio[709]: time="2024-08-29 20:39:46.029091825Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62db7d29-126c-4759-805a-4cf3d6e17cbe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:39:46 embed-certs-388383 crio[709]: time="2024-08-29 20:39:46.029637427Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963986029614722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62db7d29-126c-4759-805a-4cf3d6e17cbe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:39:46 embed-certs-388383 crio[709]: time="2024-08-29 20:39:46.030301921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fd6078a-3c2f-4059-91fa-f33e03614e42 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:39:46 embed-certs-388383 crio[709]: time="2024-08-29 20:39:46.030353310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fd6078a-3c2f-4059-91fa-f33e03614e42 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:39:46 embed-certs-388383 crio[709]: time="2024-08-29 20:39:46.030551634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c,PodSandboxId:a810a1883fa2153dd0ea4d4610ac786f482173217642cb721e9002cd3067cb8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963208530970511,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 021ca156-b7a8-4647-8efe-db17968fd5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a10b28433da5bdb319544fbbc9449beb70470488d9e9a102a9ed6c411ba287,PodSandboxId:427a0e97e1cbd12443c2eccf76f1bbf66ba802409a795078ab329e50b2eef553,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724963185645627430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfe9fc37-9a64-407f-a902-5c1930185329,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71,PodSandboxId:ae023455e0e752eefc42fe1c79d92263baddd8d005d5f884f1cbac804b34944f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963184073926038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dg6t6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e89b20-ebf4-4738-8ca7-9dc2a0e5653a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f,PodSandboxId:3ecac4e426a3fa7f318bd71d405df2ce85fdea202a3c2269a3cc3a1477b47195,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724963176556139708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcxs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649b40c8-4f4b-40d1-8
179-baf378d4c7d7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523,PodSandboxId:a810a1883fa2153dd0ea4d4610ac786f482173217642cb721e9002cd3067cb8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724963176569533845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 021ca156-b7a8-4647-8efe-db17968fd5
a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6,PodSandboxId:06254346ea7b63bf3f5e493c87303ff8466c5e760eb6be7a459739c8b6afcdea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963172056670659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec2716a5120d1ef3772dcd74efb323d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334,PodSandboxId:d4eec8556a65326856948490a316f317e48b6432fc8183880a6beeea180729d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963172045337704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1415001102c1c7a568af0d1f29aa8cdf,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313,PodSandboxId:db6bfd18987e98a1215e5ccd4fc8a9e4cca3c49a71d7f0c6eee5e32d73e4ab8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963172033169836,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38b8ff96a68e3d306887164202ee858,},Annotations:map[string]string{io.kubernetes.container.hash: f
72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd,PodSandboxId:d11262c649c5d3ad292919838dbc5b6b048d8c093d6923bebc7ae6a9bcbbe897,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963172021833091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293f45c954640c40483589dcd8cdc726,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fd6078a-3c2f-4059-91fa-f33e03614e42 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:39:46 embed-certs-388383 crio[709]: [two further Version/ImageFsInfo/ListContainers request-response cycles at 20:39:46.066 and 20:39:46.111 omitted: the returned container list is byte-identical to the response above]
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	668d380506744       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   a810a1883fa21       storage-provisioner
	f7a10b28433da       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   427a0e97e1cbd       busybox
	64cc61492bb7f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   ae023455e0e75       coredns-6f6b679f8f-dg6t6
	585208cde484f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   a810a1883fa21       storage-provisioner
	05148cf016224       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   3ecac4e426a3f       kube-proxy-fcxs4
	5ea75e14a71df       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   06254346ea7b6       etcd-embed-certs-388383
	daeb4a7c3dc70       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   d4eec8556a653       kube-scheduler-embed-certs-388383
	f2c67cb1f348e       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   db6bfd18987e9       kube-apiserver-embed-certs-388383
	29d4eb837325f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   d11262c649c5d       kube-controller-manager-embed-certs-388383
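	
	The table above is the CRI-level view of the node's containers, i.e. the same data the ListContainers responses in the crio debug log carry. A minimal sketch for reproducing it by hand, using only the profile name and socket path that appear in this report (the commands themselves are illustrative, not part of the test run):
	
	    # open a shell in the embed-certs-388383 VM (profile name from this report)
	    minikube -p embed-certs-388383 ssh
	
	    # replay the three RPCs seen in the crio debug log above
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers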
	
	
	==> coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48834 - 59883 "HINFO IN 2135944862837064231.3484080705451116333. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032148635s
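	
	The HINFO query above is CoreDNS's own startup self-check against an upstream resolver; it does not by itself show that cluster DNS works for pods. A hedged follow-up check from the busybox pod listed earlier in this log (pod name and kubectl context are from the report; the command is only a sketch):
	
	    kubectl --context embed-certs-388383 exec busybox -- nslookup kubernetes.default.svc.cluster.local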
	
	
	==> describe nodes <==
	Name:               embed-certs-388383
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-388383
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=embed-certs-388383
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T20_17_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 20:17:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-388383
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 20:39:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 20:37:00 +0000   Thu, 29 Aug 2024 20:17:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 20:37:00 +0000   Thu, 29 Aug 2024 20:17:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 20:37:00 +0000   Thu, 29 Aug 2024 20:17:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 20:37:00 +0000   Thu, 29 Aug 2024 20:26:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.202
	  Hostname:    embed-certs-388383
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 398852be7d3640a4ab85ace9fbdd5515
	  System UUID:                398852be-7d36-40a4-ab85-ace9fbdd5515
	  Boot ID:                    90ecbdb3-55b2-4488-bb5c-67a64288f400
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-6f6b679f8f-dg6t6                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-388383                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-388383             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-388383    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-fcxs4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-388383             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-mx5jh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-388383 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-388383 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-388383 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node embed-certs-388383 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-388383 event: Registered Node embed-certs-388383 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-388383 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-388383 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-388383 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-388383 event: Registered Node embed-certs-388383 in Controller
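	
	Note the Ready condition above last transitioned at 20:26:26, matching the second "Starting kubelet" event, i.e. the node re-registered cleanly after the restart. To poll just that condition instead of rereading the full describe output, a jsonpath query like the following works (context and node name from this report; the query itself is an illustrative sketch):
	
	    kubectl --context embed-certs-388383 get node embed-certs-388383 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'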
	
	
	==> dmesg <==
	[Aug29 20:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050918] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040117] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.770046] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.547750] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.606634] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug29 20:26] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.061857] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055711] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.195616] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.127657] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.298931] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +4.136583] systemd-fstab-generator[792]: Ignoring "noauto" option for root device
	[  +2.083941] systemd-fstab-generator[910]: Ignoring "noauto" option for root device
	[  +0.060494] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.522480] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.451771] systemd-fstab-generator[1549]: Ignoring "noauto" option for root device
	[  +4.332363] kauditd_printk_skb: 82 callbacks suppressed
	[ +25.297135] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] <==
	{"level":"warn","ts":"2024-08-29T20:26:48.140629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"930.216425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh\" ","response":"range_response_count:1 size:4342"}
	{"level":"warn","ts":"2024-08-29T20:26:48.140681Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"930.464102ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-mx5jh.17f04cdfc7cf69e3\" ","response":"range_response_count:1 size:828"}
	{"level":"warn","ts":"2024-08-29T20:26:48.140710Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.194326199s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T20:26:48.145023Z","caller":"traceutil/trace.go:171","msg":"trace[1560337123] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:620; }","duration":"1.198639032s","start":"2024-08-29T20:26:46.946372Z","end":"2024-08-29T20:26:48.145011Z","steps":["trace[1560337123] 'agreement among raft nodes before linearized reading'  (duration: 1.194321091s)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T20:26:48.145332Z","caller":"traceutil/trace.go:171","msg":"trace[830044130] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:620; }","duration":"238.158903ms","start":"2024-08-29T20:26:47.907159Z","end":"2024-08-29T20:26:48.145318Z","steps":["trace[830044130] 'agreement among raft nodes before linearized reading'  (duration: 233.368844ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T20:26:48.145518Z","caller":"traceutil/trace.go:171","msg":"trace[1283446003] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh; range_end:; response_count:1; response_revision:620; }","duration":"935.10141ms","start":"2024-08-29T20:26:47.210408Z","end":"2024-08-29T20:26:48.145509Z","steps":["trace[1283446003] 'agreement among raft nodes before linearized reading'  (duration: 930.175972ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T20:26:48.145961Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:47.210394Z","time spent":"935.553645ms","remote":"127.0.0.1:55348","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4366,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh\" "}
	{"level":"info","ts":"2024-08-29T20:26:48.145537Z","caller":"traceutil/trace.go:171","msg":"trace[1708817671] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-6867b74b74-mx5jh.17f04cdfc7cf69e3; range_end:; response_count:1; response_revision:620; }","duration":"935.319995ms","start":"2024-08-29T20:26:47.210213Z","end":"2024-08-29T20:26:48.145533Z","steps":["trace[1708817671] 'agreement among raft nodes before linearized reading'  (duration: 930.425681ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T20:26:48.146307Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:47.210182Z","time spent":"936.115479ms","remote":"127.0.0.1:55254","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":852,"request content":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-mx5jh.17f04cdfc7cf69e3\" "}
	{"level":"warn","ts":"2024-08-29T20:26:48.145583Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:46.678436Z","time spent":"1.467134289s","remote":"127.0.0.1:55344","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5770,"request content":"key:\"/registry/minions/embed-certs-388383\" "}
	{"level":"info","ts":"2024-08-29T20:26:48.659503Z","caller":"traceutil/trace.go:171","msg":"trace[453438509] linearizableReadLoop","detail":"{readStateIndex:662; appliedIndex:661; }","duration":"510.155794ms","start":"2024-08-29T20:26:48.149327Z","end":"2024-08-29T20:26:48.659482Z","steps":["trace[453438509] 'read index received'  (duration: 412.189363ms)","trace[453438509] 'applied index is now lower than readState.Index'  (duration: 97.965212ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T20:26:48.659517Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:48.147571Z","time spent":"511.932748ms","remote":"127.0.0.1:55194","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-08-29T20:26:48.659652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"510.304996ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T20:26:48.659690Z","caller":"traceutil/trace.go:171","msg":"trace[1426582622] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:620; }","duration":"510.354713ms","start":"2024-08-29T20:26:48.149324Z","end":"2024-08-29T20:26:48.659679Z","steps":["trace[1426582622] 'agreement among raft nodes before linearized reading'  (duration: 510.225366ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T20:26:48.659717Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:48.149299Z","time spent":"510.410674ms","remote":"127.0.0.1:55170","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-08-29T20:26:48.663733Z","caller":"traceutil/trace.go:171","msg":"trace[100398898] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"514.252712ms","start":"2024-08-29T20:26:48.149460Z","end":"2024-08-29T20:26:48.663713Z","steps":["trace[100398898] 'process raft request'  (duration: 513.838939ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T20:26:48.663774Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"513.43914ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh\" ","response":"range_response_count:1 size:4386"}
	{"level":"info","ts":"2024-08-29T20:26:48.663807Z","caller":"traceutil/trace.go:171","msg":"trace[1582031213] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh; range_end:; response_count:1; response_revision:622; }","duration":"513.476746ms","start":"2024-08-29T20:26:48.150320Z","end":"2024-08-29T20:26:48.663797Z","steps":["trace[1582031213] 'agreement among raft nodes before linearized reading'  (duration: 513.394667ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T20:26:48.663834Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:48.149446Z","time spent":"514.324308ms","remote":"127.0.0.1:55254","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":813,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-6867b74b74-mx5jh.17f04cdfc7cf69e3\" mod_revision:576 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-mx5jh.17f04cdfc7cf69e3\" value_size:718 lease:8048937074636695199 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-mx5jh.17f04cdfc7cf69e3\" > >"}
	{"level":"warn","ts":"2024-08-29T20:26:48.663843Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:48.150225Z","time spent":"513.603255ms","remote":"127.0.0.1:55348","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4410,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh\" "}
	{"level":"info","ts":"2024-08-29T20:26:48.664009Z","caller":"traceutil/trace.go:171","msg":"trace[1022135091] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"511.933178ms","start":"2024-08-29T20:26:48.152069Z","end":"2024-08-29T20:26:48.664002Z","steps":["trace[1022135091] 'process raft request'  (duration: 511.589838ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T20:26:48.664050Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:48.152059Z","time spent":"511.964162ms","remote":"127.0.0.1:55348","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4371,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh\" mod_revision:612 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh\" value_size:4305 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh\" > >"}
	{"level":"info","ts":"2024-08-29T20:36:14.482702Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":854}
	{"level":"info","ts":"2024-08-29T20:36:14.492737Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":854,"took":"9.747755ms","hash":373481181,"current-db-size-bytes":2650112,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2650112,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-29T20:36:14.492794Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":373481181,"revision":854,"compact-revision":-1}
	
	
	==> kernel <==
	 20:39:46 up 14 min,  0 users,  load average: 0.04, 0.10, 0.08
	Linux embed-certs-388383 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] <==
	E0829 20:36:16.715553       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0829 20:36:16.715495       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0829 20:36:16.716723       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:36:16.716751       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 20:37:16.716931       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:37:16.717193       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 20:37:16.716982       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:37:16.717328       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 20:37:16.718492       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:37:16.718537       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 20:39:16.719595       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:39:16.719750       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 20:39:16.719794       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:39:16.719811       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 20:39:16.720967       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:39:16.721021       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] <==
	E0829 20:34:19.397141       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:34:19.820130       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:34:49.403342       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:34:49.827834       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:35:19.410728       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:35:19.835587       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:35:49.418119       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:35:49.843873       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:36:19.424379       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:36:19.851338       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:36:49.431143       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:36:49.859395       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 20:37:00.646064       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-388383"
	I0829 20:37:11.222975       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="285.737µs"
	E0829 20:37:19.437550       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:37:19.866106       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 20:37:25.228772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="151.237µs"
	E0829 20:37:49.444086       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:37:49.874423       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:38:19.451454       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:38:19.881795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:38:49.457905       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:38:49.890033       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:39:19.464194       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:39:19.897569       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 20:26:16.925862       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 20:26:16.933452       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.202"]
	E0829 20:26:16.933527       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 20:26:16.974796       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 20:26:16.974838       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 20:26:16.974868       1 server_linux.go:169] "Using iptables Proxier"
	I0829 20:26:16.978543       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 20:26:16.978800       1 server.go:483] "Version info" version="v1.31.0"
	I0829 20:26:16.978828       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 20:26:16.980316       1 config.go:197] "Starting service config controller"
	I0829 20:26:16.980357       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 20:26:16.980382       1 config.go:104] "Starting endpoint slice config controller"
	I0829 20:26:16.980386       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 20:26:16.980902       1 config.go:326] "Starting node config controller"
	I0829 20:26:16.980937       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 20:26:17.080883       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 20:26:17.080992       1 shared_informer.go:320] Caches are synced for service config
	I0829 20:26:17.081055       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] <==
	I0829 20:26:13.344103       1 serving.go:386] Generated self-signed cert in-memory
	W0829 20:26:15.688140       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0829 20:26:15.688329       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0829 20:26:15.688360       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0829 20:26:15.688447       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0829 20:26:15.731810       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0829 20:26:15.732169       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 20:26:15.734472       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0829 20:26:15.734562       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 20:26:15.734627       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0829 20:26:15.734704       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0829 20:26:15.835017       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 20:38:36 embed-certs-388383 kubelet[917]: E0829 20:38:36.208173     917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mx5jh" podUID="99e21acd-b7b8-4e6f-8c75-c112206aed89"
	Aug 29 20:38:40 embed-certs-388383 kubelet[917]: E0829 20:38:40.370206     917 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963920369757408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:38:40 embed-certs-388383 kubelet[917]: E0829 20:38:40.370764     917 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963920369757408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:38:47 embed-certs-388383 kubelet[917]: E0829 20:38:47.207964     917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mx5jh" podUID="99e21acd-b7b8-4e6f-8c75-c112206aed89"
	Aug 29 20:38:50 embed-certs-388383 kubelet[917]: E0829 20:38:50.376616     917 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963930375742520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:38:50 embed-certs-388383 kubelet[917]: E0829 20:38:50.376675     917 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963930375742520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:38:58 embed-certs-388383 kubelet[917]: E0829 20:38:58.208505     917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mx5jh" podUID="99e21acd-b7b8-4e6f-8c75-c112206aed89"
	Aug 29 20:39:00 embed-certs-388383 kubelet[917]: E0829 20:39:00.378360     917 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963940378009315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:39:00 embed-certs-388383 kubelet[917]: E0829 20:39:00.378393     917 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963940378009315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:39:10 embed-certs-388383 kubelet[917]: E0829 20:39:10.222399     917 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 20:39:10 embed-certs-388383 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 20:39:10 embed-certs-388383 kubelet[917]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 20:39:10 embed-certs-388383 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 20:39:10 embed-certs-388383 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 20:39:10 embed-certs-388383 kubelet[917]: E0829 20:39:10.379781     917 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963950379460662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:39:10 embed-certs-388383 kubelet[917]: E0829 20:39:10.379844     917 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963950379460662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:39:13 embed-certs-388383 kubelet[917]: E0829 20:39:13.207391     917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mx5jh" podUID="99e21acd-b7b8-4e6f-8c75-c112206aed89"
	Aug 29 20:39:20 embed-certs-388383 kubelet[917]: E0829 20:39:20.381893     917 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963960381426829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:39:20 embed-certs-388383 kubelet[917]: E0829 20:39:20.382331     917 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963960381426829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:39:26 embed-certs-388383 kubelet[917]: E0829 20:39:26.208738     917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mx5jh" podUID="99e21acd-b7b8-4e6f-8c75-c112206aed89"
	Aug 29 20:39:30 embed-certs-388383 kubelet[917]: E0829 20:39:30.385533     917 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963970385095046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:39:30 embed-certs-388383 kubelet[917]: E0829 20:39:30.385577     917 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963970385095046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:39:40 embed-certs-388383 kubelet[917]: E0829 20:39:40.387371     917 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963980386911307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:39:40 embed-certs-388383 kubelet[917]: E0829 20:39:40.387681     917 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963980386911307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:39:41 embed-certs-388383 kubelet[917]: E0829 20:39:41.207549     917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mx5jh" podUID="99e21acd-b7b8-4e6f-8c75-c112206aed89"
	
	
	==> storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] <==
	I0829 20:26:16.786895       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0829 20:26:46.801383       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] <==
	I0829 20:26:48.730744       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 20:26:48.748700       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 20:26:48.748852       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 20:27:06.157398       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 20:27:06.157737       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-388383_05829032-f530-4263-8f6f-0a3f3f283ef4!
	I0829 20:27:06.161462       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"734cb345-acaf-4d89-995f-0550044e7554", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-388383_05829032-f530-4263-8f6f-0a3f3f283ef4 became leader
	I0829 20:27:06.258390       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-388383_05829032-f530-4263-8f6f-0a3f3f283ef4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-388383 -n embed-certs-388383
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-388383 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-mx5jh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-388383 describe pod metrics-server-6867b74b74-mx5jh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-388383 describe pod metrics-server-6867b74b74-mx5jh: exit status 1 (62.05773ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-mx5jh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-388383 describe pod metrics-server-6867b74b74-mx5jh: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-29 20:40:56.540414251 +0000 UTC m=+6320.710875527
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-145096 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-145096 logs -n 25: (2.141425737s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-397724                                   | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-388383            | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC | 29 Aug 24 20:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-695305             | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-695305                  | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-695305 --memory=2200 --alsologtostderr   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:20 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-695305 image list                           | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:21 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-032002        | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-397724                  | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-397724                                   | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-388383                 | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-145096  | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-032002             | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-145096       | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC | 29 Aug 24 20:31 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 20:24:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 20:24:16.618808   68084 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:24:16.619043   68084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:24:16.619051   68084 out.go:358] Setting ErrFile to fd 2...
	I0829 20:24:16.619055   68084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:24:16.619206   68084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:24:16.619741   68084 out.go:352] Setting JSON to false
	I0829 20:24:16.620649   68084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7604,"bootTime":1724955453,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 20:24:16.620702   68084 start.go:139] virtualization: kvm guest
	I0829 20:24:16.622891   68084 out.go:177] * [default-k8s-diff-port-145096] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 20:24:16.624228   68084 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 20:24:16.624256   68084 notify.go:220] Checking for updates...
	I0829 20:24:16.627123   68084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 20:24:16.628611   68084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:24:16.629858   68084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:24:16.631013   68084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 20:24:16.632116   68084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 20:24:16.633630   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:24:16.634042   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:24:16.634080   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:24:16.648879   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0829 20:24:16.649315   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:24:16.649875   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:24:16.649893   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:24:16.650274   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:24:16.650504   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:24:16.650776   68084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 20:24:16.651053   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:24:16.651111   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:24:16.665964   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0829 20:24:16.666402   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:24:16.666918   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:24:16.666937   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:24:16.667250   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:24:16.667435   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:24:16.698712   68084 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 20:24:16.700010   68084 start.go:297] selected driver: kvm2
	I0829 20:24:16.700023   68084 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:24:16.700131   68084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 20:24:16.700915   68084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:24:16.700998   68084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 20:24:16.715940   68084 install.go:137] /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0829 20:24:16.716321   68084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:24:16.716388   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:24:16.716405   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:24:16.716452   68084 start.go:340] cluster config:
	{Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:24:16.716563   68084 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:24:16.718175   68084 out.go:177] * Starting "default-k8s-diff-port-145096" primary control-plane node in "default-k8s-diff-port-145096" cluster
	I0829 20:24:16.258820   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:16.719204   68084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:24:16.719231   68084 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 20:24:16.719237   68084 cache.go:56] Caching tarball of preloaded images
	I0829 20:24:16.719296   68084 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 20:24:16.719305   68084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 20:24:16.719385   68084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/config.json ...
	I0829 20:24:16.719549   68084 start.go:360] acquireMachinesLock for default-k8s-diff-port-145096: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:24:22.338805   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:25.410778   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:31.490844   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:34.562885   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:40.642793   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:43.714939   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:49.794765   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:52.866858   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:58.946771   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:02.018832   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:08.098829   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:11.170833   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:17.250794   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:20.322926   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:26.402827   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:29.474844   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:35.554771   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:38.626850   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:41.630257   66989 start.go:364] duration metric: took 4m26.950412835s to acquireMachinesLock for "embed-certs-388383"
	I0829 20:25:41.630308   66989 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:25:41.630316   66989 fix.go:54] fixHost starting: 
	I0829 20:25:41.630791   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:25:41.630828   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:25:41.646005   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32873
	I0829 20:25:41.646405   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:25:41.646932   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:25:41.646959   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:25:41.647308   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:25:41.647525   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:25:41.647686   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:25:41.649457   66989 fix.go:112] recreateIfNeeded on embed-certs-388383: state=Stopped err=<nil>
	I0829 20:25:41.649491   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	W0829 20:25:41.649639   66989 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:25:41.651109   66989 out.go:177] * Restarting existing kvm2 VM for "embed-certs-388383" ...
	I0829 20:25:41.627651   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:25:41.627705   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:25:41.628067   66841 buildroot.go:166] provisioning hostname "no-preload-397724"
	I0829 20:25:41.628089   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:25:41.628259   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:25:41.630106   66841 machine.go:96] duration metric: took 4m35.46951337s to provisionDockerMachine
	I0829 20:25:41.630148   66841 fix.go:56] duration metric: took 4m35.494271139s for fixHost
	I0829 20:25:41.630159   66841 start.go:83] releasing machines lock for "no-preload-397724", held for 4m35.494325078s
	W0829 20:25:41.630182   66841 start.go:714] error starting host: provision: host is not running
	W0829 20:25:41.630284   66841 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0829 20:25:41.630295   66841 start.go:729] Will try again in 5 seconds ...
	I0829 20:25:41.652159   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Start
	I0829 20:25:41.652318   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring networks are active...
	I0829 20:25:41.653011   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring network default is active
	I0829 20:25:41.653426   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring network mk-embed-certs-388383 is active
	I0829 20:25:41.653824   66989 main.go:141] libmachine: (embed-certs-388383) Getting domain xml...
	I0829 20:25:41.654765   66989 main.go:141] libmachine: (embed-certs-388383) Creating domain...
	I0829 20:25:42.860512   66989 main.go:141] libmachine: (embed-certs-388383) Waiting to get IP...
	I0829 20:25:42.861297   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:42.861661   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:42.861739   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:42.861649   68412 retry.go:31] will retry after 207.172422ms: waiting for machine to come up
	I0829 20:25:43.070026   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.070414   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.070445   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.070368   68412 retry.go:31] will retry after 336.815982ms: waiting for machine to come up
	I0829 20:25:43.408817   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.409144   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.409182   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.409117   68412 retry.go:31] will retry after 330.159156ms: waiting for machine to come up
	I0829 20:25:43.740518   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.741039   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.741065   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.741002   68412 retry.go:31] will retry after 528.906592ms: waiting for machine to come up
	I0829 20:25:44.271695   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:44.272286   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:44.272344   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:44.272280   68412 retry.go:31] will retry after 616.92568ms: waiting for machine to come up
	I0829 20:25:46.631383   66841 start.go:360] acquireMachinesLock for no-preload-397724: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:25:44.891133   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:44.891535   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:44.891566   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:44.891499   68412 retry.go:31] will retry after 907.330558ms: waiting for machine to come up
	I0829 20:25:45.800480   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:45.800858   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:45.800885   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:45.800840   68412 retry.go:31] will retry after 1.189775318s: waiting for machine to come up
	I0829 20:25:46.992687   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:46.993155   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:46.993189   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:46.993142   68412 retry.go:31] will retry after 1.467244635s: waiting for machine to come up
	I0829 20:25:48.462770   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:48.463201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:48.463226   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:48.463173   68412 retry.go:31] will retry after 1.602764839s: waiting for machine to come up
	I0829 20:25:50.067082   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:50.067608   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:50.067638   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:50.067543   68412 retry.go:31] will retry after 1.562244323s: waiting for machine to come up
	I0829 20:25:51.632201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:51.632705   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:51.632731   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:51.632650   68412 retry.go:31] will retry after 1.747220365s: waiting for machine to come up
	I0829 20:25:53.382010   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:53.382463   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:53.382527   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:53.382454   68412 retry.go:31] will retry after 3.446054845s: waiting for machine to come up
	I0829 20:25:56.830511   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:56.830954   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:56.830988   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:56.830908   68412 retry.go:31] will retry after 4.53995219s: waiting for machine to come up
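The retry.go entries above show minikube polling for the domain's DHCP lease with randomized, growing delays until the VM reports an IP. A rough shell equivalent of that wait loop, using the network name and MAC address from this log (a sketch for illustration, not minikube's actual code):

    # Poll libvirt's DHCP leases until the embed-certs domain has an address.
    while ! virsh net-dhcp-leases mk-embed-certs-388383 | grep -q '52:54:00:6c:5a:0c'; do
      sleep 1   # minikube uses randomized, increasing delays rather than a fixed sleep
    done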
	I0829 20:26:02.603329   67607 start.go:364] duration metric: took 3m23.680319578s to acquireMachinesLock for "old-k8s-version-032002"
	I0829 20:26:02.603393   67607 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:02.603404   67607 fix.go:54] fixHost starting: 
	I0829 20:26:02.603837   67607 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:02.603884   67607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:02.621398   67607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35977
	I0829 20:26:02.621840   67607 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:02.622425   67607 main.go:141] libmachine: Using API Version  1
	I0829 20:26:02.622460   67607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:02.622810   67607 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:02.623040   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:02.623201   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetState
	I0829 20:26:02.624854   67607 fix.go:112] recreateIfNeeded on old-k8s-version-032002: state=Stopped err=<nil>
	I0829 20:26:02.624880   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	W0829 20:26:02.625020   67607 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:02.627161   67607 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-032002" ...
	I0829 20:26:02.628419   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .Start
	I0829 20:26:02.628578   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring networks are active...
	I0829 20:26:02.629339   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network default is active
	I0829 20:26:02.629732   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network mk-old-k8s-version-032002 is active
	I0829 20:26:02.630188   67607 main.go:141] libmachine: (old-k8s-version-032002) Getting domain xml...
	I0829 20:26:02.630924   67607 main.go:141] libmachine: (old-k8s-version-032002) Creating domain...
	I0829 20:26:01.375542   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.375928   66989 main.go:141] libmachine: (embed-certs-388383) Found IP for machine: 192.168.61.202
	I0829 20:26:01.375951   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has current primary IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.375974   66989 main.go:141] libmachine: (embed-certs-388383) Reserving static IP address...
	I0829 20:26:01.376364   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "embed-certs-388383", mac: "52:54:00:6c:5a:0c", ip: "192.168.61.202"} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.376398   66989 main.go:141] libmachine: (embed-certs-388383) DBG | skip adding static IP to network mk-embed-certs-388383 - found existing host DHCP lease matching {name: "embed-certs-388383", mac: "52:54:00:6c:5a:0c", ip: "192.168.61.202"}
	I0829 20:26:01.376411   66989 main.go:141] libmachine: (embed-certs-388383) Reserved static IP address: 192.168.61.202
	I0829 20:26:01.376428   66989 main.go:141] libmachine: (embed-certs-388383) Waiting for SSH to be available...
	I0829 20:26:01.376445   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Getting to WaitForSSH function...
	I0829 20:26:01.378600   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.378899   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.378937   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.379065   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Using SSH client type: external
	I0829 20:26:01.379088   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa (-rw-------)
	I0829 20:26:01.379118   66989 main.go:141] libmachine: (embed-certs-388383) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:01.379132   66989 main.go:141] libmachine: (embed-certs-388383) DBG | About to run SSH command:
	I0829 20:26:01.379141   66989 main.go:141] libmachine: (embed-certs-388383) DBG | exit 0
	I0829 20:26:01.498736   66989 main.go:141] libmachine: (embed-certs-388383) DBG | SSH cmd err, output: <nil>: 
	I0829 20:26:01.499103   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetConfigRaw
	I0829 20:26:01.499700   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:01.502022   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.502332   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.502362   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.502586   66989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/config.json ...
	I0829 20:26:01.502778   66989 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:01.502795   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:01.502980   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.505156   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.505452   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.505473   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.505590   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.505739   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.505902   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.506038   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.506183   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.506366   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.506376   66989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:01.602691   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:01.602721   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.603002   66989 buildroot.go:166] provisioning hostname "embed-certs-388383"
	I0829 20:26:01.603033   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.603232   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.605841   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.606170   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.606201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.606333   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.606505   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.606672   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.606786   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.606950   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.607121   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.607144   66989 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-388383 && echo "embed-certs-388383" | sudo tee /etc/hostname
	I0829 20:26:01.717669   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-388383
	
	I0829 20:26:01.717709   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.720400   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.720705   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.720733   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.720863   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.721097   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.721280   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.721446   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.721585   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.721811   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.721842   66989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-388383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-388383/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-388383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:01.827800   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:01.827835   66989 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:01.827869   66989 buildroot.go:174] setting up certificates
	I0829 20:26:01.827882   66989 provision.go:84] configureAuth start
	I0829 20:26:01.827894   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.828214   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:01.830619   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.831150   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.831184   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.831339   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.833642   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.833961   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.833987   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.834161   66989 provision.go:143] copyHostCerts
	I0829 20:26:01.834217   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:01.834241   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:01.834322   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:01.834445   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:01.834457   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:01.834491   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:01.834608   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:01.834621   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:01.834660   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:01.834726   66989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.embed-certs-388383 san=[127.0.0.1 192.168.61.202 embed-certs-388383 localhost minikube]
	I0829 20:26:01.992735   66989 provision.go:177] copyRemoteCerts
	I0829 20:26:01.992794   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:01.992819   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.995463   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.995835   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.995862   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.996006   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.996179   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.996333   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.996460   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.077017   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:02.105498   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 20:26:02.133974   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 20:26:02.161330   66989 provision.go:87] duration metric: took 333.435119ms to configureAuth
	I0829 20:26:02.161362   66989 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:02.161579   66989 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:02.161707   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.164373   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.164696   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.164724   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.164909   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.165111   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.165276   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.165402   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.165535   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:02.165697   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:02.165711   66989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:02.377994   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:02.378022   66989 machine.go:96] duration metric: took 875.231112ms to provisionDockerMachine
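The tee command a few lines up writes the extra runtime flag into /etc/sysconfig/crio.minikube, which the ISO's crio.service presumably sources as an environment file before the restart. A quick manual check on the node (a sketch; the systemctl wiring is assumed, not shown in this log):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio   # should reveal where the options file is pulled in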
	I0829 20:26:02.378037   66989 start.go:293] postStartSetup for "embed-certs-388383" (driver="kvm2")
	I0829 20:26:02.378053   66989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:02.378078   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.378404   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:02.378432   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.380920   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.381329   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.381358   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.381564   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.381797   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.381975   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.382124   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.461053   66989 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:02.465391   66989 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:02.465417   66989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:02.465479   66989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:02.465550   66989 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:02.465635   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:02.474909   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:02.500025   66989 start.go:296] duration metric: took 121.973853ms for postStartSetup
	I0829 20:26:02.500064   66989 fix.go:56] duration metric: took 20.86974885s for fixHost
	I0829 20:26:02.500082   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.502976   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.503380   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.503411   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.503599   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.503808   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.503976   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.504126   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.504283   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:02.504459   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:02.504469   66989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:02.603161   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963162.568310162
	
	I0829 20:26:02.603181   66989 fix.go:216] guest clock: 1724963162.568310162
	I0829 20:26:02.603187   66989 fix.go:229] Guest: 2024-08-29 20:26:02.568310162 +0000 UTC Remote: 2024-08-29 20:26:02.500067292 +0000 UTC m=+288.185978445 (delta=68.24287ms)
	I0829 20:26:02.603210   66989 fix.go:200] guest clock delta is within tolerance: 68.24287ms
	I0829 20:26:02.603216   66989 start.go:83] releasing machines lock for "embed-certs-388383", held for 20.972921408s
	I0829 20:26:02.603248   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.603532   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:02.606426   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.606804   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.606834   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.607021   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607527   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607694   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607770   66989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:02.607809   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.607878   66989 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:02.607896   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.610239   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610264   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610657   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.610685   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610723   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.610742   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610844   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.611014   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.611014   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.611145   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.611208   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.611268   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.611341   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.611399   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.712435   66989 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:02.718614   66989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:02.865138   66989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:02.871510   66989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:02.871593   66989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:02.887316   66989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:02.887340   66989 start.go:495] detecting cgroup driver to use...
	I0829 20:26:02.887394   66989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:02.905024   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:02.918922   66989 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:02.918986   66989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:02.932660   66989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:02.946679   66989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:03.056273   66989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:03.216885   66989 docker.go:233] disabling docker service ...
	I0829 20:26:03.216959   66989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:03.231363   66989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:03.245609   66989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:03.368087   66989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:03.493947   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:03.508803   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:03.527542   66989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:26:03.527607   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.538301   66989 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:03.538370   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.549672   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.562203   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.573572   66989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:03.585031   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.596778   66989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.619405   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
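Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This fragment is reconstructed from the commands in this log, not captured from the node:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]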
	I0829 20:26:03.630337   66989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:03.640492   66989 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:03.640568   66989 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:03.657931   66989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
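The status-255 sysctl failure above is expected: the net.bridge.* entries only exist under /proc/sys once the br_netfilter module is loaded, which is why minikube falls back to modprobe. Condensed from the commands in this log, the equivalent manual sequence is:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables        # resolvable after modprobe
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # enable forwarding for pod traffic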
	I0829 20:26:03.673756   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:03.792856   66989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:03.880493   66989 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:03.880551   66989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:03.885793   66989 start.go:563] Will wait 60s for crictl version
	I0829 20:26:03.885850   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:26:03.889835   66989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:03.928633   66989 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:03.928702   66989 ssh_runner.go:195] Run: crio --version
	I0829 20:26:03.958861   66989 ssh_runner.go:195] Run: crio --version
	I0829 20:26:03.987724   66989 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:26:03.989009   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:03.991889   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:03.992308   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:03.992334   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:03.992567   66989 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:03.996945   66989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:04.009353   66989 kubeadm.go:883] updating cluster {Name:embed-certs-388383 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:04.009462   66989 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:26:04.009501   66989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:04.051583   66989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:26:04.051643   66989 ssh_runner.go:195] Run: which lz4
	I0829 20:26:04.055929   66989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:04.060214   66989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:04.060240   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 20:26:03.867691   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting to get IP...
	I0829 20:26:03.868798   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:03.869246   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:03.869318   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:03.869235   68552 retry.go:31] will retry after 220.928648ms: waiting for machine to come up
	I0829 20:26:04.091675   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.092057   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.092084   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.092020   68552 retry.go:31] will retry after 352.781755ms: waiting for machine to come up
	I0829 20:26:04.446766   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.447277   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.447301   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.447224   68552 retry.go:31] will retry after 480.96031ms: waiting for machine to come up
	I0829 20:26:04.929561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.930149   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.930181   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.930051   68552 retry.go:31] will retry after 415.057247ms: waiting for machine to come up
	I0829 20:26:05.346757   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.347224   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.347258   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.347196   68552 retry.go:31] will retry after 609.958508ms: waiting for machine to come up
	I0829 20:26:05.959227   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.959774   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.959825   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.959702   68552 retry.go:31] will retry after 680.801337ms: waiting for machine to come up
	I0829 20:26:06.642811   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:06.643312   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:06.643343   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:06.643269   68552 retry.go:31] will retry after 995.561322ms: waiting for machine to come up
	I0829 20:26:07.640147   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:07.640617   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:07.640652   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:07.640588   68552 retry.go:31] will retry after 1.22436043s: waiting for machine to come up
	I0829 20:26:05.472272   66989 crio.go:462] duration metric: took 1.416373513s to copy over tarball
	I0829 20:26:05.472355   66989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:07.583560   66989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.111164398s)
	I0829 20:26:07.583595   66989 crio.go:469] duration metric: took 2.111297179s to extract the tarball
	I0829 20:26:07.583605   66989 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 20:26:07.622447   66989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:07.671704   66989 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:26:07.671732   66989 cache_images.go:84] Images are preloaded, skipping loading
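	The preload check above parses the JSON emitted by crictl. A quick way to eyeball the same data by hand, assuming jq is installed on the node (it is not part of the minikube ISO by default), is:

	    sudo crictl images --output json | jq -r '.images[].repoTags[]'   # prints one repo:tag per line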
	I0829 20:26:07.671742   66989 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.31.0 crio true true} ...
	I0829 20:26:07.671869   66989 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-388383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:07.671958   66989 ssh_runner.go:195] Run: crio config
	I0829 20:26:07.717217   66989 cni.go:84] Creating CNI manager for ""
	I0829 20:26:07.717242   66989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:07.717263   66989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:07.717290   66989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-388383 NodeName:embed-certs-388383 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:26:07.717465   66989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-388383"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
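	The four stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new further below. On newer kubeadm releases a file like this can be sanity-checked in isolation before it is ever applied, e.g.:

	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml   # exits non-zero on schema or version errors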
	
	I0829 20:26:07.717549   66989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:26:07.727174   66989 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:07.727258   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:07.736512   66989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0829 20:26:07.752727   66989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:07.772430   66989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0829 20:26:07.793343   66989 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:07.798214   66989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
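	The one-liner above swaps the control-plane.minikube.internal entry without truncating /etc/hosts in place: it writes the filtered contents plus the new line to a temp file, then copies it back with cp rather than mv, so the original inode survives (which matters when /etc/hosts is a bind mount). A standalone sketch of the same pattern, with the host/IP pair pulled from this run:

	    HOSTLINE=$'192.168.61.202\tcontrol-plane.minikube.internal'
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; echo "$HOSTLINE"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$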
	I0829 20:26:07.811285   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:07.927025   66989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:07.943741   66989 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383 for IP: 192.168.61.202
	I0829 20:26:07.943765   66989 certs.go:194] generating shared ca certs ...
	I0829 20:26:07.943784   66989 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:07.943984   66989 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:07.944047   66989 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:07.944061   66989 certs.go:256] generating profile certs ...
	I0829 20:26:07.944177   66989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/client.key
	I0829 20:26:07.944254   66989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.key.03b29390
	I0829 20:26:07.944317   66989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.key
	I0829 20:26:07.944494   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:07.944538   66989 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:07.944551   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:07.944581   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:07.944605   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:07.944628   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:07.944670   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:07.945252   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:07.971277   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:08.012892   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:08.042038   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:08.067708   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0829 20:26:08.095930   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:26:08.127171   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:08.151287   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:26:08.175525   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:08.199076   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:08.222783   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:08.245783   66989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:08.261839   66989 ssh_runner.go:195] Run: openssl version
	I0829 20:26:08.267545   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:08.278347   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.284232   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.284283   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.292024   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:08.306831   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:08.320607   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.325027   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.325070   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.330808   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:08.341457   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:08.352323   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.356822   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.356891   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.362617   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
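	The test -L / ln -fs pairs above implement OpenSSL's hashed-directory convention: TLS libraries look a CA up by its subject hash, so each PEM under /usr/share/ca-certificates needs a <subject-hash>.0 symlink in /etc/ssl/certs. Reproducing one link by hand:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 in this run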
	I0829 20:26:08.373755   66989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:08.378153   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:08.384225   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:08.390136   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:08.396002   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:08.401713   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:08.407437   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
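	Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 86400 seconds (24 h); this is how the restart path decides whether certs must be regenerated. The same checks, condensed into a loop over the cert names seen in the log:

	    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	      openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 || echo "${c}.crt expires within 24h"
	    done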
	I0829 20:26:08.413033   66989 kubeadm.go:392] StartCluster: {Name:embed-certs-388383 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:08.413119   66989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:08.413173   66989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:08.450685   66989 cri.go:89] found id: ""
	I0829 20:26:08.450757   66989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:08.460787   66989 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:08.460809   66989 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:08.460853   66989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:08.470179   66989 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:08.471673   66989 kubeconfig.go:125] found "embed-certs-388383" server: "https://192.168.61.202:8443"
	I0829 20:26:08.474839   66989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:08.483951   66989 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0829 20:26:08.483992   66989 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:08.484007   66989 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:08.484085   66989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:08.525947   66989 cri.go:89] found id: ""
	I0829 20:26:08.526013   66989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:08.541862   66989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:08.551179   66989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:08.551200   66989 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:08.551249   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:26:08.559897   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:08.559970   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:08.569317   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:26:08.577858   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:08.577905   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:08.587113   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:26:08.595645   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:08.595705   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:08.604803   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:26:08.613070   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:08.613125   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:26:08.622037   66989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:08.631330   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:08.742682   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:08.866518   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:08.866954   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:08.866985   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:08.866896   68552 retry.go:31] will retry after 1.707701085s: waiting for machine to come up
	I0829 20:26:10.576676   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:10.577094   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:10.577124   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:10.577047   68552 retry.go:31] will retry after 1.496799212s: waiting for machine to come up
	I0829 20:26:12.075964   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:12.076412   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:12.076451   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:12.076377   68552 retry.go:31] will retry after 2.246779697s: waiting for machine to come up
	I0829 20:26:09.809078   66989 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.066360218s)
	I0829 20:26:09.809118   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.027517   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.095959   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
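	Taken together, the five commands above replay the relevant kubeadm init phases against the existing config rather than running a full kubeadm init. A condensed, equivalent form (the unquoted $phase deliberately word-splits into subcommand plus argument):

	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done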
	I0829 20:26:10.199656   66989 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:10.199745   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:10.700569   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:11.200798   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:11.700664   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.200052   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.700839   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.715319   66989 api_server.go:72] duration metric: took 2.515661322s to wait for apiserver process to appear ...
	I0829 20:26:12.715351   66989 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:26:12.715374   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.687527   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:15.687558   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:15.687572   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.716339   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:15.716365   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:15.716378   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.750700   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:15.750732   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:16.216255   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:16.224376   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:16.224401   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:16.715457   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:16.723983   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:16.724004   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:17.215562   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:17.219605   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0829 20:26:17.225473   66989 api_server.go:141] control plane version: v1.31.0
	I0829 20:26:17.225496   66989 api_server.go:131] duration metric: took 4.510137186s to wait for apiserver health ...
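	The 403 -> 500 -> 200 progression above is the apiserver coming up: anonymous requests to /healthz are rejected until the rbac/bootstrap-roles post-start hook has run, after which individual checks flip from [-] failed to [+] ok one by one. Once a kubeconfig works, the same per-check breakdown can be requested explicitly:

	    kubectl --context embed-certs-388383 get --raw '/healthz?verbose'   # lists each [+]/[-] check, as in the 500 bodies above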
	I0829 20:26:17.225504   66989 cni.go:84] Creating CNI manager for ""
	I0829 20:26:17.225509   66989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:17.227379   66989 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:26:14.324452   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:14.324770   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:14.324808   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:14.324748   68552 retry.go:31] will retry after 3.172592587s: waiting for machine to come up
	I0829 20:26:17.500203   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:17.500540   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:17.500573   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:17.500485   68552 retry.go:31] will retry after 2.81386002s: waiting for machine to come up
	I0829 20:26:17.228505   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:26:17.238762   66989 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
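	The 496-byte conflist written above wires pods onto a Linux bridge using the pod CIDR chosen earlier (10.244.0.0/16). The log does not show the exact bytes; a representative bridge + host-local conflist in that spirit, illustrative only, would be installed like:

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF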
	I0829 20:26:17.264380   66989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:26:17.274981   66989 system_pods.go:59] 8 kube-system pods found
	I0829 20:26:17.275009   66989 system_pods.go:61] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:26:17.275016   66989 system_pods.go:61] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:26:17.275023   66989 system_pods.go:61] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:26:17.275028   66989 system_pods.go:61] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:26:17.275033   66989 system_pods.go:61] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:26:17.275038   66989 system_pods.go:61] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:26:17.275043   66989 system_pods.go:61] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:26:17.275048   66989 system_pods.go:61] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:26:17.275056   66989 system_pods.go:74] duration metric: took 10.656426ms to wait for pod list to return data ...
	I0829 20:26:17.275074   66989 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:26:17.279480   66989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:26:17.279504   66989 node_conditions.go:123] node cpu capacity is 2
	I0829 20:26:17.279519   66989 node_conditions.go:105] duration metric: took 4.439469ms to run NodePressure ...
	I0829 20:26:17.279537   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:17.561282   66989 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:26:17.565287   66989 kubeadm.go:739] kubelet initialised
	I0829 20:26:17.565307   66989 kubeadm.go:740] duration metric: took 4.002605ms waiting for restarted kubelet to initialise ...
	I0829 20:26:17.565314   66989 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:17.570104   66989 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.576425   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.576454   66989 pod_ready.go:82] duration metric: took 6.324083ms for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.576464   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.576474   66989 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.582501   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "etcd-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.582523   66989 pod_ready.go:82] duration metric: took 6.040325ms for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.582547   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "etcd-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.582556   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.588534   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.588554   66989 pod_ready.go:82] duration metric: took 5.988678ms for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.588562   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.588568   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.668334   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.668365   66989 pod_ready.go:82] duration metric: took 79.787211ms for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.668378   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.668386   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.068248   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-proxy-fcxs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.068286   66989 pod_ready.go:82] duration metric: took 399.880238ms for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.068299   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-proxy-fcxs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.068308   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.468096   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.468126   66989 pod_ready.go:82] duration metric: took 399.810823ms for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.468134   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.468141   66989 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.868444   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.868478   66989 pod_ready.go:82] duration metric: took 400.329102ms for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.868490   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.868499   66989 pod_ready.go:39] duration metric: took 1.303176044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
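	Every pod in the loop above is skipped for the same reason: the node itself still reports Ready=False right after the restart, so minikube keeps polling for up to 4m0s per pod. The equivalent condition can be expressed directly with kubectl wait, e.g. for CoreDNS:

	    kubectl --context embed-certs-388383 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s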
	I0829 20:26:18.868519   66989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:26:18.880892   66989 ops.go:34] apiserver oom_adj: -16
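	oom_adj is the legacy kernel knob (range -17..15); -16 means the OOM killer will avoid the kube-apiserver in almost any memory squeeze, which is what ops.go asserts here. Checked by hand, alongside the current interface:

	    cat /proc/$(pgrep -xn kube-apiserver)/oom_adj        # legacy interface, -17..15
	    cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj  # modern interface, -1000..1000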
	I0829 20:26:18.880916   66989 kubeadm.go:597] duration metric: took 10.42010114s to restartPrimaryControlPlane
	I0829 20:26:18.880925   66989 kubeadm.go:394] duration metric: took 10.467899141s to StartCluster
	I0829 20:26:18.880946   66989 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:18.881032   66989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:26:18.884130   66989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:18.884619   66989 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:26:18.884674   66989 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:26:18.884749   66989 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-388383"
	I0829 20:26:18.884765   66989 addons.go:69] Setting default-storageclass=true in profile "embed-certs-388383"
	I0829 20:26:18.884783   66989 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-388383"
	W0829 20:26:18.884792   66989 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:26:18.884804   66989 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-388383"
	I0829 20:26:18.884816   66989 addons.go:69] Setting metrics-server=true in profile "embed-certs-388383"
	I0829 20:26:18.884828   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.884856   66989 addons.go:234] Setting addon metrics-server=true in "embed-certs-388383"
	W0829 20:26:18.884877   66989 addons.go:243] addon metrics-server should already be in state true
	I0829 20:26:18.884884   66989 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:18.884912   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.885134   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885176   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.885216   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885249   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.885291   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885338   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.886484   66989 out.go:177] * Verifying Kubernetes components...
	I0829 20:26:18.887938   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:18.900910   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I0829 20:26:18.901377   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.901917   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.901938   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.902300   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.903062   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.903110   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.903810   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0829 20:26:18.903824   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0829 20:26:18.904282   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.904303   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.904673   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.904691   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.904829   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.904845   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.905017   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.905428   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.905462   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.905664   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.905860   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.909388   66989 addons.go:234] Setting addon default-storageclass=true in "embed-certs-388383"
	W0829 20:26:18.909408   66989 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:26:18.909437   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.909793   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.909839   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.921180   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
	I0829 20:26:18.921597   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.922074   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.922087   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.922470   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.922697   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.922725   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I0829 20:26:18.923052   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.923592   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.923610   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.923919   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.924057   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.924063   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45681
	I0829 20:26:18.924461   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.924519   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.924984   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.925002   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.925632   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.925682   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.926152   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.926194   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.926494   66989 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:18.927266   66989 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:26:18.928130   66989 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:26:18.928141   66989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:26:18.928155   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.928843   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:26:18.928863   66989 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:26:18.928888   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.931716   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932273   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.932296   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932424   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932456   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.932644   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.932810   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.932869   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.932891   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.933050   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.933100   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:18.933271   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.933426   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.933598   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:18.942718   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38109
	I0829 20:26:18.943150   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.943532   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.943553   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.943908   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.944027   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.945304   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.945498   66989 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:26:18.945510   66989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:26:18.945522   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.948108   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.948469   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.948494   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.948730   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.948889   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.949085   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.949222   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:19.111953   66989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:19.131195   66989 node_ready.go:35] waiting up to 6m0s for node "embed-certs-388383" to be "Ready" ...
	I0829 20:26:19.246857   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:26:19.269511   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:26:19.269670   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:26:19.269691   66989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:26:19.346200   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:26:19.346234   66989 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:26:19.374530   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:26:19.374566   66989 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:26:19.418474   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:26:20.495022   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.225476769s)
	I0829 20:26:20.495077   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495090   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495185   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.248286753s)
	I0829 20:26:20.495232   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495249   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495572   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.495600   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.495611   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495619   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495634   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.495663   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.495664   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.495678   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495688   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.496014   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.496029   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.496061   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.496097   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.496111   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.504149   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.504182   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.504419   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.504436   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.519341   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.100829284s)
	I0829 20:26:20.519396   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.519422   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.519670   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.519716   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.519734   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.519746   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.519755   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.520040   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.520055   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.520072   66989 addons.go:475] Verifying addon metrics-server=true in "embed-certs-388383"
	I0829 20:26:20.523102   66989 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
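
The three addons are applied by running the guest's own kubectl against /var/lib/minikube/kubeconfig, as logged above. A minimal verification sketch from the host looks like the following (context name taken from this log; note the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line above — this profile deliberately points metrics-server at an unreachable registry, so the metrics pipeline is not expected to come up in this run):

    # Minimal verification sketch; context/deployment names come from this log.
    kubectl --context embed-certs-388383 -n kube-system get deploy metrics-server
    kubectl --context embed-certs-388383 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-388383 top nodes   # succeeds only once metrics actually flow
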
	I0829 20:26:21.515365   68084 start.go:364] duration metric: took 2m4.795762476s to acquireMachinesLock for "default-k8s-diff-port-145096"
	I0829 20:26:21.515428   68084 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:21.515439   68084 fix.go:54] fixHost starting: 
	I0829 20:26:21.515864   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:21.515904   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:21.535441   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0829 20:26:21.535886   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:21.536390   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:26:21.536414   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:21.536819   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:21.537035   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:21.537203   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:26:21.538735   68084 fix.go:112] recreateIfNeeded on default-k8s-diff-port-145096: state=Stopped err=<nil>
	I0829 20:26:21.538762   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	W0829 20:26:21.538901   68084 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:21.540852   68084 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-145096" ...
	I0829 20:26:21.542258   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Start
	I0829 20:26:21.542429   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring networks are active...
	I0829 20:26:21.543181   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring network default is active
	I0829 20:26:21.543522   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring network mk-default-k8s-diff-port-145096 is active
	I0829 20:26:21.543872   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Getting domain xml...
	I0829 20:26:21.544627   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Creating domain...
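
The kvm2 driver performs this restart through the libvirt API directly. As a rough manual equivalent only (not the driver's actual code path; domain and network names are taken from this log):

    # Approximate virsh equivalent of the logged restart sequence.
    virsh net-start default 2>/dev/null || true                         # "Ensuring network default is active"
    virsh net-start mk-default-k8s-diff-port-145096 2>/dev/null || true # profile network
    virsh dumpxml default-k8s-diff-port-145096 >/tmp/domain.xml         # "Getting domain xml..."
    virsh start default-k8s-diff-port-145096                            # "Creating domain..."
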
	I0829 20:26:20.317138   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317672   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has current primary IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317700   67607 main.go:141] libmachine: (old-k8s-version-032002) Found IP for machine: 192.168.39.116
	I0829 20:26:20.317716   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserving static IP address...
	I0829 20:26:20.318143   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.318169   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserved static IP address: 192.168.39.116
	I0829 20:26:20.318189   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | skip adding static IP to network mk-old-k8s-version-032002 - found existing host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"}
	I0829 20:26:20.318208   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Getting to WaitForSSH function...
	I0829 20:26:20.318217   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting for SSH to be available...
	I0829 20:26:20.320598   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.320961   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.320989   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.321082   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH client type: external
	I0829 20:26:20.321121   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa (-rw-------)
	I0829 20:26:20.321156   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:20.321171   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | About to run SSH command:
	I0829 20:26:20.321185   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | exit 0
	I0829 20:26:20.446805   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | SSH cmd err, output: <nil>: 
	I0829 20:26:20.447204   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetConfigRaw
	I0829 20:26:20.447944   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.450726   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.451160   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451464   67607 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/config.json ...
	I0829 20:26:20.451670   67607 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:20.451690   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:20.451886   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.454120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454496   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.454566   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454648   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.454808   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.454975   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.455123   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.455282   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.455520   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.455533   67607 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:20.555074   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:20.555100   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555331   67607 buildroot.go:166] provisioning hostname "old-k8s-version-032002"
	I0829 20:26:20.555353   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555540   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.558576   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559058   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.559086   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559273   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.559490   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559661   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559834   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.560026   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.560189   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.560201   67607 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-032002 && echo "old-k8s-version-032002" | sudo tee /etc/hostname
	I0829 20:26:20.675352   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-032002
	
	I0829 20:26:20.675400   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.678472   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.678908   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.678944   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.679139   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.679341   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679533   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679710   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.679884   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.680090   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.680108   67607 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-032002' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-032002/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-032002' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:20.789673   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:20.789713   67607 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:20.789744   67607 buildroot.go:174] setting up certificates
	I0829 20:26:20.789753   67607 provision.go:84] configureAuth start
	I0829 20:26:20.789761   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.790067   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.792822   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793152   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.793173   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793338   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.795624   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.795948   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.795974   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.796080   67607 provision.go:143] copyHostCerts
	I0829 20:26:20.796148   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:20.796168   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:20.796236   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:20.796344   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:20.796355   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:20.796387   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:20.796467   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:20.796476   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:20.796503   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:20.796573   67607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-032002 san=[127.0.0.1 192.168.39.116 localhost minikube old-k8s-version-032002]
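
provision.go generates this server certificate in-process (Go crypto), signing it with the CA under .minikube/certs and embedding the SANs listed above. An equivalent openssl recipe, for illustration only (file names, org, and SANs mirror the log line):

    # Illustrative openssl equivalent of the logged server-cert generation.
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.old-k8s-version-032002" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.116,DNS:localhost,DNS:minikube,DNS:old-k8s-version-032002')
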
	I0829 20:26:20.906382   67607 provision.go:177] copyRemoteCerts
	I0829 20:26:20.906436   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:20.906466   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.909180   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909488   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.909519   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909666   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.909831   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.909963   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.910062   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:20.989017   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:21.018571   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 20:26:21.043015   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:21.067288   67607 provision.go:87] duration metric: took 277.522292ms to configureAuth
	I0829 20:26:21.067322   67607 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:21.067527   67607 config.go:182] Loaded profile config "old-k8s-version-032002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 20:26:21.067607   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.070264   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070642   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.070679   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070881   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.071088   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071288   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071465   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.071661   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.071886   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.071923   67607 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:21.290979   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:21.291003   67607 machine.go:96] duration metric: took 839.319831ms to provisionDockerMachine
	I0829 20:26:21.291014   67607 start.go:293] postStartSetup for "old-k8s-version-032002" (driver="kvm2")
	I0829 20:26:21.291026   67607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:21.291046   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.291342   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:21.291366   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.293946   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294245   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.294273   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294464   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.294686   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.294840   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.294964   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.373592   67607 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:21.377797   67607 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:21.377826   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:21.377892   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:21.377966   67607 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:21.378054   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:21.387886   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:21.413456   67607 start.go:296] duration metric: took 122.429334ms for postStartSetup
	I0829 20:26:21.413497   67607 fix.go:56] duration metric: took 18.810093949s for fixHost
	I0829 20:26:21.413522   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.416095   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416391   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.416418   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416594   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.416803   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.416970   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.417115   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.417272   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.417474   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.417489   67607 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:21.515167   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963181.486447470
	
	I0829 20:26:21.515190   67607 fix.go:216] guest clock: 1724963181.486447470
	I0829 20:26:21.515200   67607 fix.go:229] Guest: 2024-08-29 20:26:21.48644747 +0000 UTC Remote: 2024-08-29 20:26:21.413502498 +0000 UTC m=+222.629982255 (delta=72.944972ms)
	I0829 20:26:21.515225   67607 fix.go:200] guest clock delta is within tolerance: 72.944972ms
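
fix.go compares the host's clock against `date +%s.%N` run over SSH on the guest and accepts the machine when the drift is within tolerance (72.9 ms here). The same check can be reproduced from the host (key path and IP from this log):

    # Reproduce the guest-clock drift check by hand.
    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa \
      docker@192.168.39.116 'date +%s.%N')
    awk -v h="$host_ts" -v g="$guest_ts" 'BEGIN { printf "delta: %+.3f ms\n", (g - h) * 1000 }'
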
	I0829 20:26:21.515232   67607 start.go:83] releasing machines lock for "old-k8s-version-032002", held for 18.911866017s
	I0829 20:26:21.515278   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.515596   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:21.518247   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518682   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.518710   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518835   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519589   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519680   67607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:21.519736   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.519843   67607 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:21.519869   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.522261   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522614   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.522643   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522763   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.522919   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523044   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.523071   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523073   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.523241   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.523240   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.523413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523560   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523712   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.599524   67607 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:21.629122   67607 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:21.778437   67607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:21.784642   67607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:21.784714   67607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:21.802019   67607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:21.802043   67607 start.go:495] detecting cgroup driver to use...
	I0829 20:26:21.802100   67607 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:21.817407   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:21.831514   67607 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:21.831578   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:21.845224   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:21.858522   67607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:21.972769   67607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:22.115154   67607 docker.go:233] disabling docker service ...
	I0829 20:26:22.115240   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:22.130015   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:22.143186   67607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:22.294113   67607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:22.432373   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:22.446427   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:22.465151   67607 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 20:26:22.465218   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.476104   67607 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:22.476177   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.486627   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.497782   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.509869   67607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:22.521347   67607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:22.531406   67607 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:22.531455   67607 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:22.544949   67607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 20:26:22.554918   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:22.687909   67607 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:22.808522   67607 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:22.808595   67607 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:22.814348   67607 start.go:563] Will wait 60s for crictl version
	I0829 20:26:22.814411   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:22.818348   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:22.863797   67607 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
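
With crio restarted, the runtime is probed over /var/run/crio/crio.sock. To confirm by hand that the sed edits above landed and the daemon came back (paths from this log):

    # Confirm the rewritten CRI-O settings and that the daemon is back up.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl is-active crio
    sudo crictl version
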
	I0829 20:26:22.863883   67607 ssh_runner.go:195] Run: crio --version
	I0829 20:26:22.893173   67607 ssh_runner.go:195] Run: crio --version
	I0829 20:26:22.923146   67607 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 20:26:22.924299   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:22.927222   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927564   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:22.927589   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927772   67607 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:22.932100   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:22.945139   67607 kubeadm.go:883] updating cluster {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:22.945274   67607 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 20:26:22.945334   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:22.990592   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:22.990668   67607 ssh_runner.go:195] Run: which lz4
	I0829 20:26:22.995104   67607 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:22.999667   67607 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:22.999703   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
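
Since crictl reported no preloaded images, the ~473 MB preload tarball is copied to the guest over SSH. The extraction step falls outside this excerpt; it would typically be an lz4-compressed tar unpacked under /var, along the lines of (assumption, not shown in this log):

    # Assumed extraction step; not captured in this excerpt.
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
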
	I0829 20:26:20.524280   66989 addons.go:510] duration metric: took 1.639608208s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0829 20:26:21.135090   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:23.136839   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
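
node_ready.go keeps polling until the node's Ready condition flips to True; it still reads "False" above while the kubelet and CNI settle. The condition being waited on can be read directly:

    # The readiness condition being polled above.
    kubectl --context embed-certs-388383 get node embed-certs-388383 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
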
	I0829 20:26:22.825998   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting to get IP...
	I0829 20:26:22.827278   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:22.827766   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:22.827883   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:22.827750   68757 retry.go:31] will retry after 212.207753ms: waiting for machine to come up
	I0829 20:26:23.041113   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.041553   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.041588   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.041508   68757 retry.go:31] will retry after 291.9464ms: waiting for machine to come up
	I0829 20:26:23.335081   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.336072   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.336121   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.336041   68757 retry.go:31] will retry after 478.578755ms: waiting for machine to come up
	I0829 20:26:23.816669   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.817178   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.817233   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.817087   68757 retry.go:31] will retry after 501.093836ms: waiting for machine to come up
	I0829 20:26:24.319836   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.320392   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.320418   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:24.320343   68757 retry.go:31] will retry after 524.430407ms: waiting for machine to come up
	I0829 20:26:24.846908   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.847388   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.847418   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:24.847361   68757 retry.go:31] will retry after 701.573237ms: waiting for machine to come up
	I0829 20:26:25.550328   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:25.550786   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:25.550811   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:25.550727   68757 retry.go:31] will retry after 916.084079ms: waiting for machine to come up
	I0829 20:26:26.468529   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:26.468981   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:26.469012   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:26.468921   68757 retry.go:31] will retry after 1.216322833s: waiting for machine to come up
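	The retry.go:31 lines above show the driver polling for the VM's DHCP lease with growing, jittered delays. A sketch of that retry-with-backoff shape (the attempt count and base delay are assumptions, not minikube's actual constants):

	    package main

	    import (
	        "errors"
	        "math/rand"
	        "time"
	    )

	    // retry runs op until it succeeds, roughly doubling a jittered delay
	    // between attempts, like the "will retry after ..." lines above.
	    func retry(op func() error, attempts int, base time.Duration) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            if err = op(); err == nil {
	                return nil
	            }
	            d := base << uint(i)                      // exponential growth
	            d += time.Duration(rand.Int63n(int64(d))) // up to 100% jitter
	            time.Sleep(d)
	        }
	        return err
	    }

	    func main() {
	        _ = retry(func() error { return errors.New("waiting for machine to come up") },
	            5, 200*time.Millisecond)
	    }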
	I0829 20:26:24.727216   67607 crio.go:462] duration metric: took 1.732148589s to copy over tarball
	I0829 20:26:24.727294   67607 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:27.715640   67607 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.988318238s)
	I0829 20:26:27.715664   67607 crio.go:469] duration metric: took 2.988419957s to extract the tarball
	I0829 20:26:27.715672   67607 ssh_runner.go:146] rm: /preloaded.tar.lz4
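	The extraction is timed (the duration metrics above) and the tarball is deleted once unpacked. The same step as a Go sketch, meant to run on the guest, with the tar flags and paths as logged:

	    package main

	    import (
	        "log"
	        "os"
	        "os/exec"
	        "time"
	    )

	    func main() {
	        start := time.Now()
	        cmd := exec.Command("sudo", "tar", "--xattrs",
	            "--xattrs-include", "security.capability",
	            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	        if out, err := cmd.CombinedOutput(); err != nil {
	            log.Fatalf("extract: %v: %s", err, out)
	        }
	        log.Printf("took %s to extract the tarball", time.Since(start))
	        _ = os.Remove("/preloaded.tar.lz4") // free the ~473 MB once extracted
	    }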
	I0829 20:26:27.764192   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:27.797388   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:27.797422   67607 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:26:27.797501   67607 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.797536   67607 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.797549   67607 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.797557   67607 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 20:26:27.797511   67607 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:27.797629   67607 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.797637   67607 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.797519   67607 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799128   67607 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799208   67607 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.799251   67607 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 20:26:27.799361   67607 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.799386   67607 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.799463   67607 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.799697   67607 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.799830   67607 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
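	Each "daemon lookup ... No such image" line is the fast path failing: the image is looked up in the local Docker daemon first and fetched elsewhere on a miss. A sketch of that try-daemon-then-registry fallback using go-containerregistry; assuming that library here matches minikube's image handling is an inference from the log, not confirmed by it:

	    package main

	    import (
	        "fmt"

	        "github.com/google/go-containerregistry/pkg/name"
	        v1 "github.com/google/go-containerregistry/pkg/v1"
	        "github.com/google/go-containerregistry/pkg/v1/daemon"
	        "github.com/google/go-containerregistry/pkg/v1/remote"
	    )

	    // fetch prefers the local daemon and falls back to the registry.
	    func fetch(image string) (v1.Image, error) {
	        ref, err := name.ParseReference(image)
	        if err != nil {
	            return nil, err
	        }
	        if img, err := daemon.Image(ref); err == nil {
	            return img, nil // present in the local Docker daemon
	        }
	        // "No such image" locally, as in the log: pull remotely instead.
	        return remote.Image(ref)
	    }

	    func main() {
	        img, err := fetch("registry.k8s.io/pause:3.2")
	        if err != nil {
	            panic(err)
	        }
	        d, _ := img.Digest()
	        fmt.Println(d)
	    }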
	I0829 20:26:27.978022   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.978296   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.981616   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.998987   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.001078   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.004185   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.004672   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 20:26:28.103885   67607 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 20:26:28.103953   67607 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.104013   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.122203   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:28.129983   67607 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 20:26:28.130028   67607 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.130076   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.165427   67607 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 20:26:28.165470   67607 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.165521   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.199971   67607 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 20:26:28.199990   67607 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 20:26:28.200015   67607 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.200021   67607 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200105   67607 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 20:26:28.200155   67607 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.200199   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200204   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200113   67607 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 20:26:28.200325   67607 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 20:26:28.200356   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.329091   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.329139   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.329187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.329260   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.329362   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.484805   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.484857   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.484888   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.484943   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.484963   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.485009   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.487351   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.615121   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.615187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.645371   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.645433   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.645524   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.645573   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.645638   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 20:26:28.729141   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 20:26:28.762530   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 20:26:28.762592   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 20:26:28.782117   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 20:26:28.782155   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 20:26:28.782195   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 20:26:28.782229   67607 cache_images.go:92] duration metric: took 984.791099ms to LoadCachedImages
	W0829 20:26:28.782293   67607 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
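	The "needs transfer" decisions above come from comparing the ID the runtime reports (podman image inspect --format {{.Id}}) against the hash the release pins; a mismatch or a miss triggers the crictl rmi plus load-from-cache path, which fails here because the v1.20.0 cache files were never downloaded. A sketch of that comparison (needsTransfer is a hypothetical helper name, not minikube's):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // needsTransfer reports whether the runtime's copy of image differs
	    // from the pinned ID (or is missing entirely).
	    func needsTransfer(image, wantID string) bool {
	        out, err := exec.Command("sudo", "podman", "image", "inspect",
	            "--format", "{{.Id}}", image).Output()
	        if err != nil {
	            return true // not present at all
	        }
	        return strings.TrimSpace(string(out)) != wantID
	    }

	    func main() {
	        fmt.Println(needsTransfer("registry.k8s.io/pause:3.2",
	            "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"))
	    }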
	I0829 20:26:28.782310   67607 kubeadm.go:934] updating node { 192.168.39.116 8443 v1.20.0 crio true true} ...
	I0829 20:26:28.782452   67607 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-032002 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:28.782518   67607 ssh_runner.go:195] Run: crio config
	I0829 20:26:25.635616   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:26.635463   66989 node_ready.go:49] node "embed-certs-388383" has status "Ready":"True"
	I0829 20:26:26.635488   66989 node_ready.go:38] duration metric: took 7.504259002s for node "embed-certs-388383" to be "Ready" ...
	I0829 20:26:26.635497   66989 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:26.641316   66989 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:26.649602   66989 pod_ready.go:93] pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:26.649634   66989 pod_ready.go:82] duration metric: took 8.284428ms for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:26.649656   66989 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:28.658281   66989 pod_ready.go:103] pod "etcd-embed-certs-388383" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:27.686642   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:27.687071   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:27.687097   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:27.687030   68757 retry.go:31] will retry after 1.410599528s: waiting for machine to come up
	I0829 20:26:29.099622   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:29.100175   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:29.100207   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:29.100083   68757 retry.go:31] will retry after 1.929618787s: waiting for machine to come up
	I0829 20:26:31.031864   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:31.032434   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:31.032467   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:31.032367   68757 retry.go:31] will retry after 1.926271655s: waiting for machine to come up
	I0829 20:26:28.832785   67607 cni.go:84] Creating CNI manager for ""
	I0829 20:26:28.832807   67607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:28.832824   67607 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:28.832843   67607 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-032002 NodeName:old-k8s-version-032002 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 20:26:28.832982   67607 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-032002"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:28.833059   67607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 20:26:28.843483   67607 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:28.843566   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:28.853276   67607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 20:26:28.870579   67607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:28.888053   67607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0829 20:26:28.905988   67607 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:28.910048   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:28.924996   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:29.075015   67607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:29.095381   67607 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002 for IP: 192.168.39.116
	I0829 20:26:29.095411   67607 certs.go:194] generating shared ca certs ...
	I0829 20:26:29.095430   67607 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.095605   67607 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:29.095686   67607 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:29.095706   67607 certs.go:256] generating profile certs ...
	I0829 20:26:29.095847   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.key
	I0829 20:26:29.095928   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key.a1a2aebb
	I0829 20:26:29.095984   67607 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key
	I0829 20:26:29.096135   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:29.096184   67607 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:29.096198   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:29.096227   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:29.096259   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:29.096299   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:29.096378   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:29.097276   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:29.144259   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:29.171420   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:29.198554   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:29.230750   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 20:26:29.269978   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:26:29.299839   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:29.333742   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:26:29.358352   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:29.382648   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:29.406773   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:29.434106   67607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:29.451913   67607 ssh_runner.go:195] Run: openssl version
	I0829 20:26:29.457722   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:29.469147   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474048   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474094   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.480082   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:29.491083   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:29.501994   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508594   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508643   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.516331   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:29.531067   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:29.543998   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548781   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548845   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.555052   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
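	The ls/openssl/ln sequence above implements OpenSSL's c_rehash layout: each CA is linked into /etc/ssl/certs under its subject-name hash plus a ".0" suffix so verification can find it by hash. A sketch of computing the hash and creating the link (the file chosen here is illustrative):

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        pem := "/usr/share/ca-certificates/minikubeCA.pem"
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	        if err != nil {
	            panic(err)
	        }
	        hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	        link := "/etc/ssl/certs/" + hash + ".0"
	        _ = os.Remove(link) // ln -fs semantics: replace any stale link
	        fmt.Println(os.Symlink(pem, link))
	    }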
	I0829 20:26:29.567902   67607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:29.572879   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:29.579506   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:29.585887   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:29.592262   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:29.598566   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:29.604672   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
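	Each `-checkend 86400` run above asks openssl to exit non-zero if the certificate expires within the next 24 hours, which is what gates regeneration. The same check in pure Go with crypto/x509, as a sketch:

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    // expiresWithin is the Go equivalent of `openssl x509 -checkend`.
	    func expiresWithin(path string, d time.Duration) (bool, error) {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return false, err
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            return false, fmt.Errorf("no PEM data in %s", path)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return false, err
	        }
	        return time.Now().Add(d).After(cert.NotAfter), nil
	    }

	    func main() {
	        fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
	    }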
	I0829 20:26:29.610830   67607 kubeadm.go:392] StartCluster: {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:29.612915   67607 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:29.613015   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.655224   67607 cri.go:89] found id: ""
	I0829 20:26:29.655314   67607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:29.666216   67607 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:29.666241   67607 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:29.666292   67607 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:29.676908   67607 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:29.678276   67607 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-032002" does not appear in /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:26:29.679313   67607 kubeconfig.go:62] /home/jenkins/minikube-integration/19530-11185/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-032002" cluster setting kubeconfig missing "old-k8s-version-032002" context setting]
	I0829 20:26:29.680756   67607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.764872   67607 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:29.776873   67607 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.116
	I0829 20:26:29.776914   67607 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:29.776926   67607 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:29.776987   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.819268   67607 cri.go:89] found id: ""
	I0829 20:26:29.819347   67607 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:29.840386   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:29.851624   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:29.851650   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:29.851710   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:26:29.861439   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:29.861504   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:29.871594   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:26:29.881126   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:29.881199   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:29.890984   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.900838   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:29.900913   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.910677   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:26:29.920008   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:29.920073   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
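	The grep/rm loop above keeps only kubeconfig files that already point at https://control-plane.minikube.internal:8443 and removes the rest so kubeadm can rewrite them; in this run all four are simply absent. The same sweep as a Go sketch:

	    package main

	    import (
	        "os"
	        "path/filepath"
	        "strings"
	    )

	    func main() {
	        endpoint := "https://control-plane.minikube.internal:8443"
	        for _, f := range []string{"admin.conf", "kubelet.conf",
	            "controller-manager.conf", "scheduler.conf"} {
	            p := filepath.Join("/etc/kubernetes", f)
	            data, err := os.ReadFile(p)
	            if err != nil {
	                continue // file missing, as in this run
	            }
	            if !strings.Contains(string(data), endpoint) {
	                _ = os.Remove(p) // stale config: let kubeadm regenerate it
	            }
	        }
	    }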
	I0829 20:26:29.929631   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:29.939864   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.096029   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.816696   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.043310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.139291   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.248095   67607 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:31.248190   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:31.749101   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.248718   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.748783   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.248254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.748557   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:30.180025   66989 pod_ready.go:93] pod "etcd-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:30.180056   66989 pod_ready.go:82] duration metric: took 3.530390258s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:30.180069   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.187272   66989 pod_ready.go:93] pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.187300   66989 pod_ready.go:82] duration metric: took 2.007222016s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.187313   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.192038   66989 pod_ready.go:93] pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.192062   66989 pod_ready.go:82] duration metric: took 4.740656ms for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.192075   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.196712   66989 pod_ready.go:93] pod "kube-proxy-fcxs4" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.196736   66989 pod_ready.go:82] duration metric: took 4.653538ms for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.196748   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.200491   66989 pod_ready.go:93] pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.200517   66989 pod_ready.go:82] duration metric: took 3.758002ms for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.200528   66989 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:34.207857   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
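	The pod_ready.go waits above poll each system-critical pod for a Ready condition against a 6m0s budget. A client-go sketch of the same wait (assumes an already-configured *kubernetes.Clientset; the 2s interval is an illustrative choice, not the test's):

	    package ready

	    import (
	        "context"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // WaitPodReady blocks until the pod reports Ready=True or time runs out.
	    func WaitPodReady(ctx context.Context, c *kubernetes.Clientset, ns, name string) error {
	        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
	            func(ctx context.Context) (bool, error) {
	                pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	                if err != nil {
	                    return false, nil // transient API error: keep polling
	                }
	                for _, cond := range pod.Status.Conditions {
	                    if cond.Type == corev1.PodReady {
	                        return cond.Status == corev1.ConditionTrue, nil
	                    }
	                }
	                return false, nil
	            })
	    }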
	I0829 20:26:32.960872   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:32.961256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:32.961284   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:32.961208   68757 retry.go:31] will retry after 2.304628323s: waiting for machine to come up
	I0829 20:26:35.267593   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:35.268009   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:35.268041   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:35.267970   68757 retry.go:31] will retry after 3.753063387s: waiting for machine to come up
	I0829 20:26:34.249231   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:34.748279   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.249171   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.748943   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.249181   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.748307   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.248484   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.748261   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.248332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.748423   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
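	The block of pgrep runs above is a fixed-interval wait for the kube-apiserver process to appear after kubeadm brings the control plane up, one probe roughly every 500 ms. A sketch of that loop's shape (the one-minute timeout is illustrative):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // waitForProcess polls pgrep until the pattern matches or time runs out.
	    func waitForProcess(pattern string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
	                return nil
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("no process matching %q within %s", pattern, timeout)
	    }

	    func main() {
	        fmt.Println(waitForProcess("kube-apiserver.*minikube.*", time.Minute))
	    }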
	I0829 20:26:36.705814   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:38.708205   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:40.175557   66841 start.go:364] duration metric: took 53.54411059s to acquireMachinesLock for "no-preload-397724"
	I0829 20:26:40.175617   66841 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:40.175626   66841 fix.go:54] fixHost starting: 
	I0829 20:26:40.176060   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:40.176098   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:40.193828   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45897
	I0829 20:26:40.194231   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:40.194840   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:26:40.194867   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:40.195175   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:40.195364   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:40.195528   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:26:40.197109   66841 fix.go:112] recreateIfNeeded on no-preload-397724: state=Stopped err=<nil>
	I0829 20:26:40.197128   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	W0829 20:26:40.197278   66841 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:40.199263   66841 out.go:177] * Restarting existing kvm2 VM for "no-preload-397724" ...
	I0829 20:26:39.023902   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.024374   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Found IP for machine: 192.168.72.140
	I0829 20:26:39.024399   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has current primary IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.024413   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Reserving static IP address...
	I0829 20:26:39.024832   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Reserved static IP address: 192.168.72.140
	I0829 20:26:39.024856   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for SSH to be available...
	I0829 20:26:39.024894   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-145096", mac: "52:54:00:36:fe:e0", ip: "192.168.72.140"} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.024925   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | skip adding static IP to network mk-default-k8s-diff-port-145096 - found existing host DHCP lease matching {name: "default-k8s-diff-port-145096", mac: "52:54:00:36:fe:e0", ip: "192.168.72.140"}
	I0829 20:26:39.024947   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Getting to WaitForSSH function...
	I0829 20:26:39.026796   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.027100   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.027129   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.027265   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Using SSH client type: external
	I0829 20:26:39.027288   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa (-rw-------)
	I0829 20:26:39.027318   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:39.027333   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | About to run SSH command:
	I0829 20:26:39.027346   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | exit 0
	I0829 20:26:39.146830   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | SSH cmd err, output: <nil>: 
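	The "Using SSH client type: external" block above shows the availability probe: the system ssh binary runs `exit 0` against the VM with host-key checking disabled, and a nil error means SSH is up. A condensed sketch of that probe (the option list is trimmed relative to the logged command):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // sshReady reports whether `ssh ... exit 0` succeeds against the guest.
	    func sshReady(key, addr string) bool {
	        return exec.Command("ssh",
	            "-o", "StrictHostKeyChecking=no",
	            "-o", "UserKnownHostsFile=/dev/null",
	            "-o", "ConnectTimeout=10",
	            "-i", key, "-p", "22", addr, "exit 0").Run() == nil
	    }

	    func main() {
	        fmt.Println(sshReady(
	            "/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa",
	            "docker@192.168.72.140"))
	    }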
	I0829 20:26:39.147242   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetConfigRaw
	I0829 20:26:39.147931   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:39.150652   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.151055   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.151084   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.151395   68084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/config.json ...
	I0829 20:26:39.151581   68084 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:39.151601   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:39.151814   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.153861   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.154189   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.154222   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.154351   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.154575   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.154746   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.154875   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.155010   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.155219   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.155235   68084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:39.258973   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:39.259006   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.259261   68084 buildroot.go:166] provisioning hostname "default-k8s-diff-port-145096"
	I0829 20:26:39.259292   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.259467   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.262018   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.262472   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.262501   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.262707   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.262886   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.263034   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.263185   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.263344   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.263530   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.263547   68084 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-145096 && echo "default-k8s-diff-port-145096" | sudo tee /etc/hostname
	I0829 20:26:39.379437   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-145096
	
	I0829 20:26:39.379479   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.382263   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.382682   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.382704   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.382913   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.383128   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.383280   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.383389   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.383520   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.383675   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.383692   68084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-145096' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-145096/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-145096' | sudo tee -a /etc/hosts; 
				fi
			fi
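The snippet above is the idempotent /etc/hosts edit minikube pushes over SSH: rewrite an existing 127.0.1.1 entry if one is present, otherwise append a fresh one. A minimal Go sketch of composing that command for an arbitrary hostname (illustrative only; hostsCmd is not minikube's actual helper):

// hostsCmd builds the shell command shown above, which pins the machine
// hostname to 127.0.1.1 in /etc/hosts without duplicating entries on
// repeated provisioning runs.
package main

import "fmt"

func hostsCmd(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	else
		echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	fi
fi`, hostname)
}

func main() { fmt.Println(hostsCmd("default-k8s-diff-port-145096")) }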
	I0829 20:26:39.491756   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:39.491790   68084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:39.491855   68084 buildroot.go:174] setting up certificates
	I0829 20:26:39.491869   68084 provision.go:84] configureAuth start
	I0829 20:26:39.491883   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.492150   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:39.494882   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.495241   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.495269   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.495452   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.497708   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.497980   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.498013   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.498097   68084 provision.go:143] copyHostCerts
	I0829 20:26:39.498157   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:39.498179   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:39.498249   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:39.498347   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:39.498356   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:39.498377   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:39.498430   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:39.498437   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:39.498455   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:39.498507   68084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-145096 san=[127.0.0.1 192.168.72.140 default-k8s-diff-port-145096 localhost minikube]
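provision.go logs the SAN list it bakes into the generated server certificate: loopback, the machine IP, the machine name, localhost, and minikube. A compact crypto/x509 sketch of minting such a certificate; it self-signs for brevity where minikube signs with ca.pem/ca-key.pem, and the org string is copied from the log line above:

// Mint a server certificate carrying the SANs from the log line above.
// Self-signed sketch; real provisioning signs with the CA key instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-145096"}},
		DNSNames:     []string{"default-k8s-diff-port-145096", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.140")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}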
	I0829 20:26:39.584313   68084 provision.go:177] copyRemoteCerts
	I0829 20:26:39.584372   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:39.584398   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.587054   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.587377   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.587400   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.587630   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.587823   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.587952   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.588087   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:39.664394   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:39.688852   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0829 20:26:39.714653   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:39.737662   68084 provision.go:87] duration metric: took 245.781265ms to configureAuth
	I0829 20:26:39.737687   68084 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:39.737844   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:39.737911   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.740391   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.740659   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.740688   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.740911   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.741107   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.741256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.741434   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.741612   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.741777   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.741794   68084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:39.954811   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:39.954846   68084 machine.go:96] duration metric: took 803.251945ms to provisionDockerMachine
	I0829 20:26:39.954862   68084 start.go:293] postStartSetup for "default-k8s-diff-port-145096" (driver="kvm2")
	I0829 20:26:39.954877   68084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:39.954898   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:39.955237   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:39.955267   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.958071   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.958575   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.958605   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.958772   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.958969   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.959126   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.959287   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.037153   68084 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:40.041150   68084 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:40.041176   68084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:40.041235   68084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:40.041325   68084 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:40.041415   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:40.050654   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:40.073789   68084 start.go:296] duration metric: took 118.907407ms for postStartSetup
	I0829 20:26:40.073826   68084 fix.go:56] duration metric: took 18.558388385s for fixHost
	I0829 20:26:40.073846   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.076397   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.076749   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.076789   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.076999   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.077200   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.077374   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.077480   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.077598   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:40.077754   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:40.077765   68084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:40.175410   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963200.123461148
	
	I0829 20:26:40.175431   68084 fix.go:216] guest clock: 1724963200.123461148
	I0829 20:26:40.175437   68084 fix.go:229] Guest: 2024-08-29 20:26:40.123461148 +0000 UTC Remote: 2024-08-29 20:26:40.073830105 +0000 UTC m=+143.488576066 (delta=49.631043ms)
	I0829 20:26:40.175456   68084 fix.go:200] guest clock delta is within tolerance: 49.631043ms
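fix.go reads the guest's `date +%s.%N` output, compares it against the host clock, and accepts the machine when the delta is inside tolerance. A stdlib-only sketch of that comparison; float parsing drops sub-microsecond precision, which is harmless at millisecond deltas, and the 2s bound here is an assumption rather than minikube's actual constant:

// clockDelta turns `date +%s.%N` output into a guest-vs-host offset.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values from the log above: guest clock vs. the remote timestamp.
	host := time.Unix(0, 1724963200073830105)
	d, err := clockDelta("1724963200.123461148", host)
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%v within tolerance: %v\n", d, d.Abs() < 2*time.Second)
}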
	I0829 20:26:40.175463   68084 start.go:83] releasing machines lock for "default-k8s-diff-port-145096", held for 18.660059953s
	I0829 20:26:40.175497   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.175781   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:40.179031   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.179457   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.179495   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.179695   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180444   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180528   68084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:40.180581   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.180706   68084 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:40.180729   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.183580   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.183819   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.183963   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.183989   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.184172   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.184174   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.184213   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.184345   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.184416   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.184511   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.184624   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.184626   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.184794   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.184896   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
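The `curl -sS -m 2 https://registry.k8s.io/` and `cat /version.json` Run calls above are dispatched concurrently, each on its own SSH session (hence the two new ssh clients). A stdlib-only sketch of that fan-out; runCmd stands in for minikube's ssh_runner and executes locally:

// Run two commands in parallel and collect their outputs in order.
package main

import (
	"fmt"
	"os/exec"
	"sync"
)

func runCmd(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	cmds := []string{"curl -sS -m 2 https://registry.k8s.io/", "cat /version.json"}
	results := make([]string, len(cmds))
	var wg sync.WaitGroup
	for i, c := range cmds {
		wg.Add(1)
		go func(i int, c string) {
			defer wg.Done()
			out, err := runCmd(c)
			results[i] = fmt.Sprintf("%q -> err=%v out=%q", c, err, out)
		}(i, c)
	}
	wg.Wait()
	for _, r := range results {
		fmt.Println(r)
	}
}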
	I0829 20:26:40.259854   68084 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:40.290102   68084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:40.439112   68084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:40.449465   68084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:40.449546   68084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:40.471182   68084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:40.471209   68084 start.go:495] detecting cgroup driver to use...
	I0829 20:26:40.471276   68084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:40.492605   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:40.508500   68084 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:40.508561   68084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:40.527534   68084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:40.542013   68084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:40.663843   68084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:40.837228   68084 docker.go:233] disabling docker service ...
	I0829 20:26:40.837293   68084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:40.854285   68084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:40.870148   68084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:41.017156   68084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:41.150436   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:41.165239   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:41.184783   68084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:26:41.184847   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.197358   68084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:41.197417   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.211222   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.225297   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.237205   68084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:41.249875   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.261928   68084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.286145   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
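Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like the drop-in below (an illustrative reconstruction; the exact layout and neighboring keys vary by CRI-O version):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]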
	I0829 20:26:41.299119   68084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:41.313001   68084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:41.313062   68084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:41.335390   68084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
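The failed sysctl probe above is tolerated because a missing /proc/sys/net/bridge tree usually just means br_netfilter is not loaded yet; minikube responds by loading the module and enabling IPv4 forwarding. A local sketch of that probe-then-load sequence (minikube issues the same commands remotely through ssh_runner):

// Probe bridge netfilter; load the module only when the probe fails.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("%s %v -> %s", name, args, out)
	return err
}

func main() {
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// Probe failed: the module is absent, so load it.
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}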
	I0829 20:26:41.348803   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:41.464387   68084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:41.564675   68084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:41.564746   68084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:41.569620   68084 start.go:563] Will wait 60s for crictl version
	I0829 20:26:41.569680   68084 ssh_runner.go:195] Run: which crictl
	I0829 20:26:41.573519   68084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:41.615105   68084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:41.615190   68084 ssh_runner.go:195] Run: crio --version
	I0829 20:26:41.644597   68084 ssh_runner.go:195] Run: crio --version
	I0829 20:26:41.678211   68084 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:26:39.248306   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:39.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.248975   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.748948   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.249144   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.749013   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.248363   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.748624   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.248833   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.748535   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.200748   66841 main.go:141] libmachine: (no-preload-397724) Calling .Start
	I0829 20:26:40.200955   66841 main.go:141] libmachine: (no-preload-397724) Ensuring networks are active...
	I0829 20:26:40.201793   66841 main.go:141] libmachine: (no-preload-397724) Ensuring network default is active
	I0829 20:26:40.202128   66841 main.go:141] libmachine: (no-preload-397724) Ensuring network mk-no-preload-397724 is active
	I0829 20:26:40.202729   66841 main.go:141] libmachine: (no-preload-397724) Getting domain xml...
	I0829 20:26:40.203538   66841 main.go:141] libmachine: (no-preload-397724) Creating domain...
	I0829 20:26:41.516739   66841 main.go:141] libmachine: (no-preload-397724) Waiting to get IP...
	I0829 20:26:41.517840   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:41.518273   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:41.518353   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:41.518262   68926 retry.go:31] will retry after 295.070588ms: waiting for machine to come up
	I0829 20:26:41.814782   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:41.815346   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:41.815369   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:41.815291   68926 retry.go:31] will retry after 239.48527ms: waiting for machine to come up
	I0829 20:26:42.056957   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:42.057459   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:42.057509   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:42.057436   68926 retry.go:31] will retry after 452.012872ms: waiting for machine to come up
	I0829 20:26:42.511068   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:42.511551   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:42.511590   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:42.511520   68926 retry.go:31] will retry after 552.227159ms: waiting for machine to come up
	I0829 20:26:43.066096   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:43.066642   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:43.066673   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:43.066605   68926 retry.go:31] will retry after 666.699647ms: waiting for machine to come up
	I0829 20:26:43.734695   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:43.735402   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:43.735430   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:43.735309   68926 retry.go:31] will retry after 770.756485ms: waiting for machine to come up
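The retry.go lines above poll for the machine's DHCP lease with a randomized, growing wait between attempts. A toy sketch reproducing that shape; the base interval and 1.5x growth factor are assumptions, not retry.go's actual parameters:

// Print a sequence of jittered, growing retry intervals.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	backoff := 250 * time.Millisecond
	for attempt := 1; attempt <= 6; attempt++ {
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
		fmt.Printf("attempt %d: will retry after %v\n", attempt, wait)
		backoff = backoff * 3 / 2 // grow roughly 1.5x per attempt
	}
}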
	I0829 20:26:40.709553   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:42.712799   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:41.679441   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:41.682807   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:41.683205   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:41.683236   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:41.683489   68084 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:41.688766   68084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:41.705764   68084 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:41.705918   68084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:26:41.705977   68084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:41.752884   68084 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:26:41.752955   68084 ssh_runner.go:195] Run: which lz4
	I0829 20:26:41.757600   68084 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:41.762158   68084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:41.762188   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 20:26:43.201094   68084 crio.go:462] duration metric: took 1.443534343s to copy over tarball
	I0829 20:26:43.201176   68084 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:45.400911   68084 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199703125s)
	I0829 20:26:45.400942   68084 crio.go:469] duration metric: took 2.199820098s to extract the tarball
	I0829 20:26:45.400948   68084 ssh_runner.go:146] rm: /preloaded.tar.lz4
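The timed copy-and-extract above is a plain lz4-compressed tar unpack of the preload image bundle into /var, bracketed by duration metrics. A local sketch of that step (paths as in the log; requires the lz4 binary and root):

// Extract the preloaded image tarball and report how long it took.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v: %s\n", err, out)
		return
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}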
	I0829 20:26:45.439120   68084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:45.482658   68084 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:26:45.482679   68084 cache_images.go:84] Images are preloaded, skipping loading
	I0829 20:26:45.482687   68084 kubeadm.go:934] updating node { 192.168.72.140 8444 v1.31.0 crio true true} ...
	I0829 20:26:45.482801   68084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-145096 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:45.482873   68084 ssh_runner.go:195] Run: crio config
	I0829 20:26:45.532108   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:26:45.532132   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:45.532146   68084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:45.532169   68084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.140 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-145096 NodeName:default-k8s-diff-port-145096 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:26:45.532310   68084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.140
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-145096"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:45.532367   68084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:26:45.542670   68084 binaries.go:44] Found k8s binaries, skipping transfer
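The kubeadm.yaml above is rendered from the options struct logged at kubeadm.go:181 before being scp'd to /var/tmp/minikube/kubeadm.yaml.new. A trimmed text/template sketch of that rendering; the field names and template are illustrative and cover only a fragment of the real document:

// Render a kubeadm config fragment from an options struct.
package main

import (
	"os"
	"text/template"
)

type opts struct {
	AdvertiseAddress string
	BindPort         int
	PodSubnet        string
	ServiceCIDR      string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.72.140",
		BindPort:         8444,
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
	}); err != nil {
		panic(err)
	}
}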
	I0829 20:26:45.542744   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:45.552622   68084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0829 20:26:45.569765   68084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:45.590972   68084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0829 20:26:45.611421   68084 ssh_runner.go:195] Run: grep 192.168.72.140	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:45.615585   68084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:45.627911   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:45.757504   68084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:45.776103   68084 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096 for IP: 192.168.72.140
	I0829 20:26:45.776128   68084 certs.go:194] generating shared ca certs ...
	I0829 20:26:45.776159   68084 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:45.776337   68084 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:45.776388   68084 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:45.776400   68084 certs.go:256] generating profile certs ...
	I0829 20:26:45.776511   68084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/client.key
	I0829 20:26:45.776600   68084 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.key.5a49b6b2
	I0829 20:26:45.776650   68084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.key
	I0829 20:26:45.776788   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:45.776827   68084 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:45.776840   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:45.776869   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:45.776940   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:45.776977   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:45.777035   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:45.777916   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:45.823419   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:45.868291   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:45.905178   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:45.934956   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 20:26:45.967570   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 20:26:45.994332   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:46.019268   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 20:26:46.044075   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:46.067906   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:46.092513   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:46.117686   68084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:46.137048   68084 ssh_runner.go:195] Run: openssl version
	I0829 20:26:46.143203   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:46.156407   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.161397   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.161461   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.167587   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:46.179034   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:46.190204   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.194953   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.195010   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.203121   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:46.218606   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:46.233586   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.240100   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.240155   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.247473   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:46.259417   68084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:46.264875   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:46.270914   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:46.277211   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:46.283138   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:46.289137   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:46.295044   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
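Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours. The equivalent check with crypto/x509 (path copied from the log; assumes one PEM block per file):

// certExpiresWithin reports whether the cert at path expires within d.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func certExpiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}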
	I0829 20:26:46.301027   68084 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:46.301120   68084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:46.301177   68084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:46.342913   68084 cri.go:89] found id: ""
	I0829 20:26:46.342988   68084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:46.354198   68084 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:46.354221   68084 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:46.354269   68084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:46.364173   68084 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:46.365182   68084 kubeconfig.go:125] found "default-k8s-diff-port-145096" server: "https://192.168.72.140:8444"
	I0829 20:26:46.367560   68084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:46.377550   68084 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.140
	I0829 20:26:46.377584   68084 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:46.377596   68084 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:46.377647   68084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:46.419141   68084 cri.go:89] found id: ""
	I0829 20:26:46.419215   68084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:46.438037   68084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:46.449021   68084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:46.449041   68084 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:46.449093   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 20:26:46.459396   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:46.459445   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:46.469964   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 20:26:46.479604   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:46.479655   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:46.492672   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 20:26:46.504656   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:46.504714   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:46.520206   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 20:26:46.532067   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:46.532137   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
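
The four grep/rm cycles above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so `kubeadm init phase kubeconfig` can regenerate it. Here the files do not exist at all (grep exits with status 2), so every `rm -f` is a no-op. The same pattern condensed (endpoint taken from this cluster's log):

    endpoint="https://control-plane.minikube.internal:8444"
    for conf in admin kubelet controller-manager scheduler; do
      f="/etc/kubernetes/${conf}.conf"
      # keep the kubeconfig only if it already targets the expected endpoint
      if ! sudo grep -q "$endpoint" "$f" 2>/dev/null; then
        sudo rm -f "$f"
      fi
    done
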
	I0829 20:26:46.541931   68084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:46.551973   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:44.248615   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:44.748528   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.748453   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.248927   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.748628   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.248556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.748332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.248373   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.749111   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
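
Process 67607 above is in the same apiserver wait loop that 68084 enters later: it re-runs pgrep roughly every 500ms until a minikube-started kube-apiserver process exists (`-f` matches against the full command line, `-x` requires the whole line to match the pattern, `-n` picks the newest match). A sketch of that loop with an explicit timeout (60s is illustrative):

    deadline=$(( $(date +%s) + 60 ))   # illustrative 60s budget
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "kube-apiserver process never appeared" >&2
        exit 1
      fi
      sleep 0.5   # matches the ~500ms cadence of the timestamps above
    done
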
	I0829 20:26:44.507808   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:44.508340   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:44.508375   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:44.508288   68926 retry.go:31] will retry after 754.614285ms: waiting for machine to come up
	I0829 20:26:45.264587   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:45.265039   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:45.265065   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:45.265003   68926 retry.go:31] will retry after 1.3758308s: waiting for machine to come up
	I0829 20:26:46.642139   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:46.642666   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:46.642690   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:46.642612   68926 retry.go:31] will retry after 1.255043608s: waiting for machine to come up
	I0829 20:26:47.899849   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:47.900330   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:47.900360   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:47.900291   68926 retry.go:31] will retry after 1.517293529s: waiting for machine to come up
	I0829 20:26:45.208067   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:48.177040   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:46.668397   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.497182   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.725573   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.785427   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
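
Because existing configuration files were found (restartPrimaryControlPlane), minikube does not run a full `kubeadm init`; it replays the individual init phases against the regenerated kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, and etcd, with `addon all` following once the API server reports healthy further down. The same sequence condensed into one loop:

    KUBEADM_BIN_PATH="/var/lib/minikube/binaries/v1.31.0"
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is deliberately unquoted so "certs all" splits into two args
      sudo env PATH="${KUBEADM_BIN_PATH}:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
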
	I0829 20:26:47.850878   68084 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:47.850972   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.351404   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.852023   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.351402   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.367249   68084 api_server.go:72] duration metric: took 1.516370766s to wait for apiserver process to appear ...
	I0829 20:26:49.367283   68084 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:26:49.367312   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.595653   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:51.595683   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:51.595698   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.609883   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:51.609989   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:51.867454   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.872297   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:51.872328   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:52.367462   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:52.375300   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:52.375333   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:52.867827   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:52.872814   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 200:
	ok
	I0829 20:26:52.881061   68084 api_server.go:141] control plane version: v1.31.0
	I0829 20:26:52.881092   68084 api_server.go:131] duration metric: took 3.513801329s to wait for apiserver health ...
	I0829 20:26:52.881102   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:26:52.881111   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:52.882993   68084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
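
The healthz progression above is the normal restart sequence: 403 while anonymous requests are rejected before the RBAC bootstrap roles exist, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending, then 200 once bootstrap completes. An equivalent external probe (`-k` because, like minikube's anonymous check, it presents no client certificate):

    until [ "$(curl -sk -o /dev/null -w '%{http_code}' \
              https://192.168.72.140:8444/healthz)" = "200" ]; do
      sleep 0.5
    done
    echo "apiserver healthy"
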
	I0829 20:26:49.248291   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.748360   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.248427   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.749087   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.248381   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.748488   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.249250   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.748715   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.748915   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.419781   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:49.420286   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:49.420314   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:49.420244   68926 retry.go:31] will retry after 2.638145598s: waiting for machine to come up
	I0829 20:26:52.059935   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:52.060367   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:52.060411   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:52.060341   68926 retry.go:31] will retry after 2.696474949s: waiting for machine to come up
	I0829 20:26:50.207945   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:52.709407   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:52.884310   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:26:52.901134   68084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
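
The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration minikube just recommended for the kvm2+crio combination. The log does not show its contents; a typical bridge conflist of this shape looks like the following (name, flags, and subnet are illustrative, not the report's actual bytes):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
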
	I0829 20:26:52.931390   68084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:26:52.952109   68084 system_pods.go:59] 8 kube-system pods found
	I0829 20:26:52.952154   68084 system_pods.go:61] "coredns-6f6b679f8f-5mkxp" [1d3c3a01-1fa6-4d1d-8750-deef4475ba96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:26:52.952166   68084 system_pods.go:61] "etcd-default-k8s-diff-port-145096" [03096d69-48af-4372-9fa0-5a45dcb9603c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:26:52.952177   68084 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-145096" [4be8793a-7934-4c89-a840-49e769673f5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:26:52.952188   68084 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-145096" [a3bec7f8-8163-4afa-af53-282ad755b788] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:26:52.952202   68084 system_pods.go:61] "kube-proxy-b4ffx" [d97e74d5-21d4-4c96-9d94-77767fc4e609] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:26:52.952210   68084 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-145096" [c416b52b-ebf4-4714-bed6-3d25bfaa373c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:26:52.952217   68084 system_pods.go:61] "metrics-server-6867b74b74-5kk6q" [e74224b1-8242-4f7f-b8d6-7d9d4839be53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:26:52.952224   68084 system_pods.go:61] "storage-provisioner" [4e97da7c-af4b-40b3-83fb-82b6c2a2adef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:26:52.952236   68084 system_pods.go:74] duration metric: took 20.81979ms to wait for pod list to return data ...
	I0829 20:26:52.952245   68084 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:26:52.961169   68084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:26:52.961202   68084 node_conditions.go:123] node cpu capacity is 2
	I0829 20:26:52.961214   68084 node_conditions.go:105] duration metric: took 8.963546ms to run NodePressure ...
	I0829 20:26:52.961234   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:53.425201   68084 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:26:53.429605   68084 kubeadm.go:739] kubelet initialised
	I0829 20:26:53.429625   68084 kubeadm.go:740] duration metric: took 4.401784ms waiting for restarted kubelet to initialise ...
	I0829 20:26:53.429632   68084 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:53.434501   68084 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:55.442290   68084 pod_ready.go:103] pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace has status "Ready":"False"
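
The pod_ready loop is minikube polling each system-critical pod's Ready condition through the API. The same wait for the coredns pod, expressed as a kubectl one-liner (context name and the 4m budget come from the log; the k8s-app=kube-dns label is one of those listed above):

    kubectl --context default-k8s-diff-port-145096 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
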
	I0829 20:26:54.248998   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:54.748438   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.249066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.749293   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.248457   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.748509   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.248949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.748228   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.248717   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.748412   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:54.760175   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:54.760689   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:54.760736   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:54.760667   68926 retry.go:31] will retry after 3.651969786s: waiting for machine to come up
	I0829 20:26:58.415601   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.416019   66841 main.go:141] libmachine: (no-preload-397724) Found IP for machine: 192.168.50.214
	I0829 20:26:58.416045   66841 main.go:141] libmachine: (no-preload-397724) Reserving static IP address...
	I0829 20:26:58.416063   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has current primary IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.416507   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "no-preload-397724", mac: "52:54:00:e9:bf:ac", ip: "192.168.50.214"} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.416533   66841 main.go:141] libmachine: (no-preload-397724) DBG | skip adding static IP to network mk-no-preload-397724 - found existing host DHCP lease matching {name: "no-preload-397724", mac: "52:54:00:e9:bf:ac", ip: "192.168.50.214"}
	I0829 20:26:58.416543   66841 main.go:141] libmachine: (no-preload-397724) Reserved static IP address: 192.168.50.214
	I0829 20:26:58.416552   66841 main.go:141] libmachine: (no-preload-397724) Waiting for SSH to be available...
	I0829 20:26:58.416562   66841 main.go:141] libmachine: (no-preload-397724) DBG | Getting to WaitForSSH function...
	I0829 20:26:58.418849   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.419170   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.419199   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.419312   66841 main.go:141] libmachine: (no-preload-397724) DBG | Using SSH client type: external
	I0829 20:26:58.419351   66841 main.go:141] libmachine: (no-preload-397724) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa (-rw-------)
	I0829 20:26:58.419397   66841 main.go:141] libmachine: (no-preload-397724) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:58.419414   66841 main.go:141] libmachine: (no-preload-397724) DBG | About to run SSH command:
	I0829 20:26:58.419444   66841 main.go:141] libmachine: (no-preload-397724) DBG | exit 0
	I0829 20:26:58.542594   66841 main.go:141] libmachine: (no-preload-397724) DBG | SSH cmd err, output: <nil>: 
	I0829 20:26:58.542925   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetConfigRaw
	I0829 20:26:58.543582   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:58.546057   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.546384   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.546422   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.546691   66841 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/config.json ...
	I0829 20:26:58.546871   66841 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:58.546890   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:58.547113   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.549493   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.549816   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.549854   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.549972   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.550140   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.550260   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.550388   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.550581   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.550805   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.550822   66841 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:58.658784   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:58.658827   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.659063   66841 buildroot.go:166] provisioning hostname "no-preload-397724"
	I0829 20:26:58.659083   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.659220   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.661932   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.662294   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.662320   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.662485   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.662695   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.662880   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.663011   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.663168   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.663343   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.663356   66841 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-397724 && echo "no-preload-397724" | sudo tee /etc/hostname
	I0829 20:26:58.790591   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-397724
	
	I0829 20:26:58.790618   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.793294   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.793612   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.793639   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.793849   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.794035   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.794192   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.794289   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.794430   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.794656   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.794678   66841 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-397724' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-397724/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-397724' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:58.915925   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:58.915958   66841 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:58.915981   66841 buildroot.go:174] setting up certificates
	I0829 20:26:58.915991   66841 provision.go:84] configureAuth start
	I0829 20:26:58.916000   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.916279   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:58.919034   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.919385   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.919415   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.919523   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.921483   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.921805   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.921831   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.922015   66841 provision.go:143] copyHostCerts
	I0829 20:26:58.922062   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:58.922079   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:58.922135   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:58.922242   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:58.922256   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:58.922288   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:58.922365   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:58.922375   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:58.922400   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:58.922491   66841 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.no-preload-397724 san=[127.0.0.1 192.168.50.214 localhost minikube no-preload-397724]
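
provision.go generates the machine's server certificate signed by the minikube CA, carrying the SAN list shown (loopback, machine IP, localhost, minikube, hostname). A hedged two-step openssl equivalent, assuming ca.pem/ca-key.pem are the CA material named above and the validity period is illustrative:

    # 1. fresh key + CSR for the machine (org matches the log)
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.no-preload-397724"
    # 2. CA-signed cert with the SANs from the log
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.214,DNS:localhost,DNS:minikube,DNS:no-preload-397724')
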
	I0829 20:26:55.206462   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:57.207175   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.207454   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.264390   66841 provision.go:177] copyRemoteCerts
	I0829 20:26:59.264446   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:59.264467   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.267259   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.267603   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.267626   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.267794   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.268014   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.268190   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.268367   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.353746   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:59.378289   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0829 20:26:59.402330   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:59.425412   66841 provision.go:87] duration metric: took 509.408381ms to configureAuth
	I0829 20:26:59.425442   66841 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:59.425616   66841 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:59.425679   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.428148   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.428503   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.428545   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.428698   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.428906   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.429077   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.429227   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.429365   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:59.429511   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:59.429524   66841 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:59.666382   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:59.666408   66841 machine.go:96] duration metric: took 1.11952301s to provisionDockerMachine
	I0829 20:26:59.666422   66841 start.go:293] postStartSetup for "no-preload-397724" (driver="kvm2")
	I0829 20:26:59.666436   66841 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:59.666458   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.666833   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:59.666881   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.669407   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.669725   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.669751   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.669888   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.670073   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.670214   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.670316   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.753440   66841 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:59.758408   66841 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:59.758431   66841 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:59.758509   66841 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:59.758632   66841 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:59.758753   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:59.768355   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:59.792742   66841 start.go:296] duration metric: took 126.308201ms for postStartSetup
	I0829 20:26:59.792782   66841 fix.go:56] duration metric: took 19.617155195s for fixHost
	I0829 20:26:59.792806   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.795380   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.795744   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.795781   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.795917   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.796124   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.796237   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.796376   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.796488   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:59.796668   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:59.796680   66841 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:59.903539   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963219.868600963
	
	I0829 20:26:59.903564   66841 fix.go:216] guest clock: 1724963219.868600963
	I0829 20:26:59.903574   66841 fix.go:229] Guest: 2024-08-29 20:26:59.868600963 +0000 UTC Remote: 2024-08-29 20:26:59.792787483 +0000 UTC m=+355.719318860 (delta=75.81348ms)
	I0829 20:26:59.903623   66841 fix.go:200] guest clock delta is within tolerance: 75.81348ms
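
fix.go derives that 75.81348ms delta by running `date +%s.%N` in the guest and subtracting the host's wall clock at the time of the SSH command; the drift is accepted because it sits inside tolerance. A rough sketch of the same comparison (SSH target illustrative):

    host_ts=$(date +%s.%N)
    guest_ts=$(ssh docker@192.168.50.214 'date +%s.%N')   # illustrative target
    delta=$(echo "$guest_ts - $host_ts" | bc)
    echo "guest clock delta: ${delta}s"
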
	I0829 20:26:59.903632   66841 start.go:83] releasing machines lock for "no-preload-397724", held for 19.728042303s
	I0829 20:26:59.903676   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.903967   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:59.906798   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.907183   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.907212   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.907378   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.907804   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.907970   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.908038   66841 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:59.908072   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.908324   66841 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:59.908346   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.910843   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911025   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911187   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.911215   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911325   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.911415   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.911437   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911485   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.911640   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.911649   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.911847   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.911848   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.911978   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.912119   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:27:00.023116   66841 ssh_runner.go:195] Run: systemctl --version
	I0829 20:27:00.029346   66841 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:27:00.169122   66841 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:27:00.176823   66841 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:27:00.176913   66841 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:27:00.194795   66841 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
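
The find/-exec step above renames any bridge or podman CNI configs so cri-o will ignore them; the .mk_disabled suffix keeps the change reversible. A minimal standalone sketch of the same idea (paths assumed, not taken from this run):

    # move conflicting CNI configs out of the way, reversibly
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
    # to undo: for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done
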
	I0829 20:27:00.194836   66841 start.go:495] detecting cgroup driver to use...
	I0829 20:27:00.194906   66841 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:27:00.212145   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:27:00.226584   66841 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:27:00.226656   66841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:27:00.240525   66841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:27:00.256847   66841 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:27:00.371938   66841 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:27:00.516891   66841 docker.go:233] disabling docker service ...
	I0829 20:27:00.516964   66841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:27:00.531127   66841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:27:00.543483   66841 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:27:00.672033   66841 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:27:00.794828   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:27:00.809204   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:27:00.828484   66841 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:27:00.828547   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.839273   66841 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:27:00.839344   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.850336   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.860980   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.871661   66841 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:27:00.884343   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.895190   66841 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.912700   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.923383   66841 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:27:00.934168   66841 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:27:00.934231   66841 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:27:00.948181   66841 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 20:27:00.959121   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:27:01.072055   66841 ssh_runner.go:195] Run: sudo systemctl restart crio
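
Taken together, the sed edits above leave the cri-o drop-in in roughly this state before the restart (a reconstruction from the commands shown, not a dump of the actual file):

    # /etc/crio/crio.conf.d/02-crio.conf (approximate resulting values)
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
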
	I0829 20:27:01.163024   66841 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:27:01.163104   66841 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:27:01.167949   66841 start.go:563] Will wait 60s for crictl version
	I0829 20:27:01.168011   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.171707   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:27:01.212950   66841 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:27:01.213031   66841 ssh_runner.go:195] Run: crio --version
	I0829 20:27:01.242181   66841 ssh_runner.go:195] Run: crio --version
	I0829 20:27:01.276389   66841 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:26:57.441729   68084 pod_ready.go:93] pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:57.441753   68084 pod_ready.go:82] duration metric: took 4.007206558s for pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:57.441762   68084 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:59.448210   68084 pod_ready.go:103] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
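
pod_ready.go is polling pod conditions through the API server; the equivalent one-off check from a shell would be kubectl wait (context and pod names taken from this run, timeout matching the 4m0s above):

    kubectl --context default-k8s-diff-port-145096 -n kube-system \
      wait --for=condition=Ready pod/etcd-default-k8s-diff-port-145096 --timeout=4m
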
	I0829 20:26:59.248692   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:59.748815   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.748264   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.249241   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.748894   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.249045   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.748765   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.248902   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.748333   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.277829   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:27:01.280762   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:27:01.281144   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:27:01.281171   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:27:01.281367   66841 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 20:27:01.285714   66841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
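
The /etc/hosts rewrite above is a filter-then-append idiom: strip any stale host.minikube.internal entry, append the current mapping, and install the result with a single sudo cp so the file is never left half-written. The same pattern generalized (variable names are illustrative):

    NAME=host.minikube.internal IP=192.168.50.1   # values from this run
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
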
	I0829 20:27:01.297903   66841 kubeadm.go:883] updating cluster {Name:no-preload-397724 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:27:01.298010   66841 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:27:01.298041   66841 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:27:01.331474   66841 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:27:01.331498   66841 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:27:01.331566   66841 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.331572   66841 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.331609   66841 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.331632   66841 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.331643   66841 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.331615   66841 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0829 20:27:01.331737   66841 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.331758   66841 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.333182   66841 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.333233   66841 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.333206   66841 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.333195   66841 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.333191   66841 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.333278   66841 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.333191   66841 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.333333   66841 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0829 20:27:01.507028   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.514096   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.526653   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.530292   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.531828   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.534432   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.550465   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0829 20:27:01.613161   66841 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0829 20:27:01.613209   66841 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.613287   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.631193   66841 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0829 20:27:01.631236   66841 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.631285   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.687868   66841 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0829 20:27:01.687911   66841 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.687967   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.700369   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.713036   66841 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0829 20:27:01.713102   66841 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.713159   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.722934   66841 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0829 20:27:01.722991   66841 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.723042   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.722941   66841 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0829 20:27:01.723130   66841 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.723159   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.785242   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.785246   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.785342   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.785391   66841 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0829 20:27:01.785438   66841 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.785450   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.785474   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.785479   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.785534   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.925322   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.925371   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.925374   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.925474   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.925518   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.925569   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.925593   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.072628   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:02.072690   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:02.072744   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:02.072822   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:02.072867   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.176999   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0829 20:27:02.177031   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:02.177503   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:02.177507   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.177572   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0829 20:27:02.177581   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0829 20:27:02.177678   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:02.177682   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:02.185515   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0829 20:27:02.185585   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.185624   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:02.259015   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0829 20:27:02.259076   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0829 20:27:02.259087   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0829 20:27:02.259106   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.259113   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0829 20:27:02.259138   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0829 20:27:02.259147   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.259155   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:02.259152   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 20:27:02.259139   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0829 20:27:02.259157   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:02.259240   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:01.208076   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:03.208339   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:01.954153   68084 pod_ready.go:103] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:03.454991   68084 pod_ready.go:93] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:03.455023   68084 pod_ready.go:82] duration metric: took 6.013253793s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:03.455036   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:05.461938   68084 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:04.249082   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:04.748738   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.248398   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.749056   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.248693   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.748904   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.249145   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.749131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.248774   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.748444   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:04.630344   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.371149915s)
	I0829 20:27:04.630373   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.371188324s)
	I0829 20:27:04.630410   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.371191825s)
	I0829 20:27:04.630432   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0829 20:27:04.630413   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0829 20:27:04.630379   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0829 20:27:04.630465   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.371187188s)
	I0829 20:27:04.630478   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:04.630481   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0829 20:27:04.630561   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:06.684986   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.054398317s)
	I0829 20:27:06.685019   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0829 20:27:06.685047   66841 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:06.685098   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
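
Each image follows the same pipeline: stat the tarball under /var/lib/minikube/images, copy it over only when missing, then stream it into the runtime's store with podman load. A condensed sketch for one image (the scp destination is illustrative):

    IMG=/var/lib/minikube/images/etcd_3.5.15-0
    stat -c "%s %y" "$IMG" 2>/dev/null \
      || scp ~/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 node:"$IMG"
    sudo podman load -i "$IMG"
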
	I0829 20:27:05.707657   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:07.708034   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:06.965873   68084 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.965904   68084 pod_ready.go:82] duration metric: took 3.51085868s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.965918   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.976464   68084 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.976489   68084 pod_ready.go:82] duration metric: took 10.562771ms for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.976502   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b4ffx" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.982178   68084 pod_ready.go:93] pod "kube-proxy-b4ffx" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.982197   68084 pod_ready.go:82] duration metric: took 5.687889ms for pod "kube-proxy-b4ffx" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.982205   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.987316   68084 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.987333   68084 pod_ready.go:82] duration metric: took 5.122275ms for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.987342   68084 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:08.994794   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:11.493940   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:09.248746   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:09.748722   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.249074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.748647   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.248236   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.749057   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.249227   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.748688   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.749298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.365120   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.679993065s)
	I0829 20:27:10.365150   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0829 20:27:10.365182   66841 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:10.365256   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:12.122371   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.757087653s)
	I0829 20:27:12.122409   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0829 20:27:12.122434   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:12.122564   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:13.575108   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.45251018s)
	I0829 20:27:13.575137   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0829 20:27:13.575165   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:13.575210   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:09.708364   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:11.708491   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:14.207383   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:13.494124   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:15.993564   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:14.249254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:14.748957   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.249229   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.749137   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.248967   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.748254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.248929   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.748339   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.248666   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.748712   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.742286   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.16705417s)
	I0829 20:27:15.742320   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0829 20:27:15.742348   66841 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:15.742398   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:16.391977   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0829 20:27:16.392017   66841 cache_images.go:123] Successfully loaded all cached images
	I0829 20:27:16.392022   66841 cache_images.go:92] duration metric: took 15.060512795s to LoadCachedImages
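
With all seven images loaded, the preload check that failed at the start of this sequence would now pass; what the runtime actually holds can be confirmed with the same crictl query used there:

    sudo crictl images --output json | grep -o '"registry.k8s.io/[^"]*"' | sort -u
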
	I0829 20:27:16.392034   66841 kubeadm.go:934] updating node { 192.168.50.214 8443 v1.31.0 crio true true} ...
	I0829 20:27:16.392139   66841 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-397724 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:27:16.392203   66841 ssh_runner.go:195] Run: crio config
	I0829 20:27:16.445382   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:27:16.445406   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:27:16.445420   66841 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:27:16.445448   66841 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.214 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-397724 NodeName:no-preload-397724 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:27:16.445612   66841 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-397724"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
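
Before a generated file like this is fed to the kubeadm init phases, it can be sanity-checked in place; recent kubeadm releases (including v1.31) include a validate subcommand for this. A sketch using the binary path from this run:

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml
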
	I0829 20:27:16.445671   66841 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:27:16.456505   66841 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:27:16.456560   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:27:16.467361   66841 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0829 20:27:16.484700   66841 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:27:16.503026   66841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0829 20:27:16.519867   66841 ssh_runner.go:195] Run: grep 192.168.50.214	control-plane.minikube.internal$ /etc/hosts
	I0829 20:27:16.523648   66841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:27:16.535642   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:27:16.671027   66841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:27:16.688692   66841 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724 for IP: 192.168.50.214
	I0829 20:27:16.688712   66841 certs.go:194] generating shared ca certs ...
	I0829 20:27:16.688727   66841 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:27:16.688883   66841 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:27:16.688944   66841 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:27:16.688957   66841 certs.go:256] generating profile certs ...
	I0829 20:27:16.689053   66841 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.key
	I0829 20:27:16.689132   66841 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.key.1f535ae9
	I0829 20:27:16.689182   66841 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.key
	I0829 20:27:16.689360   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:27:16.689400   66841 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:27:16.689415   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:27:16.689450   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:27:16.689504   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:27:16.689540   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:27:16.689596   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:27:16.690277   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:27:16.747582   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:27:16.782064   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:27:16.816382   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:27:16.851548   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 20:27:16.882919   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:27:16.907439   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:27:16.932392   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:27:16.957451   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:27:16.982482   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:27:17.006032   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:27:17.030052   66841 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:27:17.047792   66841 ssh_runner.go:195] Run: openssl version
	I0829 20:27:17.053922   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:27:17.065219   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.069592   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.069647   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.075853   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:27:17.086727   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:27:17.097935   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.102198   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.102252   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.108031   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:27:17.119868   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:27:17.131513   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.136434   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.136497   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.142219   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
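
The repeated ls/openssl/ln sequence above implements OpenSSL's hashed-directory convention: every CA in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0 so verification can locate it. In general form:

    CERT=/usr/share/ca-certificates/minikubeCA.pem    # example from this run
    HASH=$(openssl x509 -hash -noout -in "$CERT")     # yields b5213941 here
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"
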
	I0829 20:27:17.153448   66841 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:27:17.158375   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:27:17.165156   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:27:17.170927   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:27:17.176669   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:27:17.182293   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:27:17.187936   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
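
Each of these checks uses -checkend 86400, which makes openssl exit non-zero when the certificate expires within the next 86400 seconds (24 hours), the cue to regenerate it. Standalone:

    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "apiserver cert expires within 24h"
    fi
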
	I0829 20:27:17.193572   66841 kubeadm.go:392] StartCluster: {Name:no-preload-397724 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:27:17.193682   66841 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:27:17.193754   66841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:27:17.238327   66841 cri.go:89] found id: ""
	I0829 20:27:17.238392   66841 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:27:17.248923   66841 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:27:17.248943   66841 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:27:17.248984   66841 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:27:17.263143   66841 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:27:17.264260   66841 kubeconfig.go:125] found "no-preload-397724" server: "https://192.168.50.214:8443"
	I0829 20:27:17.266448   66841 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:27:17.276347   66841 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.214
	I0829 20:27:17.276378   66841 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:27:17.276389   66841 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:27:17.276440   66841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:27:17.311409   66841 cri.go:89] found id: ""
	I0829 20:27:17.311476   66841 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:27:17.329204   66841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:27:17.339063   66841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:27:17.339079   66841 kubeadm.go:157] found existing configuration files:
	
	I0829 20:27:17.339118   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:27:17.348268   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:27:17.348324   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:27:17.357596   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:27:17.366504   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:27:17.366575   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:27:17.376068   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:27:17.385156   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:27:17.385220   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:27:17.394890   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:27:17.404213   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:27:17.404283   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:27:17.413669   66841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:27:17.423307   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:17.536003   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:17.990605   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.217809   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.297100   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
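
Instead of a monolithic kubeadm init, the restart path replays individual init phases against the rendered config. The five phases in the order the log runs them, reproducible by hand (the config path and pinned binary directory are verbatim from the log; the loop is illustrative):

cfg=/var/tmp/minikube/kubeadm.yaml
bin=/var/lib/minikube/binaries/v1.31.0
for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
  # $phase is left unquoted on purpose so "certs all" splits into two arguments
  sudo env PATH="$bin:$PATH" kubeadm init phase $phase --config "$cfg"
done

The env PATH=... wrapper is how the log makes the pinned kubeadm win over any system copy without touching the shell's own PATH.
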
	I0829 20:27:18.421185   66841 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:27:18.421283   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.922043   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
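
api_server.go first waits for the process itself, polling pgrep with an exact full-command-line match (-x exact, -n newest PID, -f match against the whole command line). The same wait as a shell loop; the interval is an assumption read off the log timestamps:

until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
  sleep 0.5  # attempts in the log are roughly 500ms apart
done
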
	I0829 20:27:16.209618   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:18.707544   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:17.993609   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:19.994469   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:19.248924   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.248851   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.748547   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.248298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.748802   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.248680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.748271   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.248491   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.748803   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.422030   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.442023   66841 api_server.go:72] duration metric: took 1.020839747s to wait for apiserver process to appear ...
	I0829 20:27:19.442047   66841 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:27:19.442070   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.444156   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:27:22.444192   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:27:22.444211   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.466228   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:27:22.466258   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:27:22.942835   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.949338   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:27:22.949360   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:27:23.443069   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:23.447845   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:27:23.447876   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:27:23.942372   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:23.946517   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I0829 20:27:23.953497   66841 api_server.go:141] control plane version: v1.31.0
	I0829 20:27:23.953522   66841 api_server.go:131] duration metric: took 4.511467637s to wait for apiserver health ...
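
The progression above (403, then 500, then 200) is the normal boot sequence for an unauthenticated probe: under stock RBAC, anonymous access to /healthz is granted by the bootstrap roles, so requests are Forbidden until the rbac/bootstrap-roles post-start hook finishes (visible as [-]poststarthook/rbac/bootstrap-roles failed in the 500 bodies), after which the endpoint returns a plain "ok". A hand-run equivalent of the probe, assuming curl is available on the host (-k because the wait does not verify the in-cluster serving certificate):

until [ "$(curl -ks https://192.168.50.214:8443/healthz)" = "ok" ]; do
  sleep 0.5  # 403 before the bootstrap roles exist, 500 while post-start hooks run
done
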
	I0829 20:27:23.953530   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:27:23.953536   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:27:23.955180   66841 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:27:23.956396   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:27:23.969429   66841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
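
With the apiserver healthy, minikube pushes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The log does not include the file body; the sketch below is a representative bridge-plus-portmap conflist of that general shape, with an illustrative name and subnet rather than minikube's actual values:

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
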
	I0829 20:27:24.000989   66841 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:27:24.014200   66841 system_pods.go:59] 8 kube-system pods found
	I0829 20:27:24.014233   66841 system_pods.go:61] "coredns-6f6b679f8f-g7xxs" [f0148527-2146-4153-aa20-5ac97b664027] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:27:24.014240   66841 system_pods.go:61] "etcd-no-preload-397724" [f04b5ee4-f439-470a-b298-1a9ed569db70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:27:24.014248   66841 system_pods.go:61] "kube-apiserver-no-preload-397724" [2328f327-1744-4785-9266-3f992b977ef8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:27:24.014254   66841 system_pods.go:61] "kube-controller-manager-no-preload-397724" [0e63f04d-8627-45e9-ac80-70a0fe63f5db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:27:24.014260   66841 system_pods.go:61] "kube-proxy-57kbt" [9f85ce17-85a0-4a52-bdaf-4e3aee4d1a98] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:27:24.014267   66841 system_pods.go:61] "kube-scheduler-no-preload-397724" [106821c6-2444-470a-bac1-78838c0b1982] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:27:24.014273   66841 system_pods.go:61] "metrics-server-6867b74b74-668dg" [e3f3ab24-7777-40b0-a54c-00a294e7e68e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:27:24.014280   66841 system_pods.go:61] "storage-provisioner" [146bd02a-8f50-4d19-a188-4adc2bcc0a43] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:27:24.014288   66841 system_pods.go:74] duration metric: took 13.275941ms to wait for pod list to return data ...
	I0829 20:27:24.014298   66841 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:27:24.018932   66841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:27:24.018956   66841 node_conditions.go:123] node cpu capacity is 2
	I0829 20:27:24.018966   66841 node_conditions.go:105] duration metric: took 4.661993ms to run NodePressure ...
	I0829 20:27:24.018981   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:21.207144   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:23.208728   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:22.493988   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:24.494152   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:24.248456   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:24.748347   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.248337   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.748905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.248912   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.749302   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.249058   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.749105   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.248548   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.748298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:24.305237   66841 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:27:24.310640   66841 kubeadm.go:739] kubelet initialised
	I0829 20:27:24.310666   66841 kubeadm.go:740] duration metric: took 5.402212ms waiting for restarted kubelet to initialise ...
	I0829 20:27:24.310679   66841 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:27:24.316568   66841 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:26.325035   66841 pod_ready.go:103] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:28.336627   66841 pod_ready.go:103] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:25.706496   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:27.708228   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:26.992949   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:28.993682   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:30.993877   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:29.248994   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:29.749020   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.248983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.748247   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:31.249052   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:31.249133   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:31.293442   67607 cri.go:89] found id: ""
	I0829 20:27:31.293466   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.293473   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:31.293479   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:31.293527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:31.333976   67607 cri.go:89] found id: ""
	I0829 20:27:31.333999   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.334006   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:31.334011   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:31.334055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:31.373680   67607 cri.go:89] found id: ""
	I0829 20:27:31.373707   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.373715   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:31.373720   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:31.373766   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:31.407798   67607 cri.go:89] found id: ""
	I0829 20:27:31.407824   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.407832   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:31.407837   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:31.407893   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:31.444409   67607 cri.go:89] found id: ""
	I0829 20:27:31.444437   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.444445   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:31.444451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:31.444512   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:31.479313   67607 cri.go:89] found id: ""
	I0829 20:27:31.479333   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.479341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:31.479347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:31.479403   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:31.516056   67607 cri.go:89] found id: ""
	I0829 20:27:31.516089   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.516100   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:31.516108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:31.516168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:31.555324   67607 cri.go:89] found id: ""
	I0829 20:27:31.555349   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.555357   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
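
With no apiserver answering on this profile, logs.go falls back to enumerating CRI containers directly, issuing one crictl query per expected component; every query above returns an empty ID list. The per-component query, runnable on the node (the crictl command line is verbatim from the log; the loop and the <none> marker are illustrative):

for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet kubernetes-dashboard; do
  ids=$(sudo crictl ps -a --quiet --name="$name")
  echo "$name: ${ids:-<none>}"
done
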
	I0829 20:27:31.555365   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:31.555375   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:31.626397   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:31.626434   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:31.672006   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:31.672038   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:31.724691   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:31.724727   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:31.740283   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:31.740324   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:31.874007   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
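
The gathering pass that follows pulls the same sources an operator would check by hand: kubelet and CRI-O unit logs, filtered dmesg, raw container status, and a describe-nodes call through the pinned kubectl, which fails with connection refused because nothing is listening on localhost:8443 yet. The commands, verbatim from the log (the `which crictl || echo crictl` guard simply retries the bare name if crictl is not on root's PATH):

sudo journalctl -u kubelet -n 400
sudo journalctl -u crio -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
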
	I0829 20:27:29.824509   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:29.824530   66841 pod_ready.go:82] duration metric: took 5.507939145s for pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:29.824547   66841 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:31.833646   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:30.207213   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:32.706352   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:32.993932   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:35.494511   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:34.374203   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:34.387817   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:34.387888   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:34.423254   67607 cri.go:89] found id: ""
	I0829 20:27:34.423279   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.423286   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:34.423296   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:34.423343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:34.457741   67607 cri.go:89] found id: ""
	I0829 20:27:34.457768   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.457775   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:34.457781   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:34.457827   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:34.498432   67607 cri.go:89] found id: ""
	I0829 20:27:34.498457   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.498464   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:34.498469   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:34.498523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:34.534290   67607 cri.go:89] found id: ""
	I0829 20:27:34.534317   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.534324   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:34.534330   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:34.534380   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:34.570878   67607 cri.go:89] found id: ""
	I0829 20:27:34.570909   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.570919   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:34.570928   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:34.570986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:34.615735   67607 cri.go:89] found id: ""
	I0829 20:27:34.615762   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.615769   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:34.615775   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:34.615824   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:34.656667   67607 cri.go:89] found id: ""
	I0829 20:27:34.656706   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.656721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:34.656730   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:34.656779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:34.708906   67607 cri.go:89] found id: ""
	I0829 20:27:34.708928   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.708937   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:34.708947   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:34.708962   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:34.767382   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:34.767417   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:34.786523   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:34.786574   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:34.872832   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:34.872857   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:34.872871   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:34.954581   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:34.954620   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:37.497810   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:37.511479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:37.511539   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:37.547930   67607 cri.go:89] found id: ""
	I0829 20:27:37.547962   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.547972   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:37.547980   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:37.548035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:37.585281   67607 cri.go:89] found id: ""
	I0829 20:27:37.585304   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.585312   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:37.585318   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:37.585365   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:37.622201   67607 cri.go:89] found id: ""
	I0829 20:27:37.622229   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.622241   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:37.622246   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:37.622295   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:37.657248   67607 cri.go:89] found id: ""
	I0829 20:27:37.657274   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.657281   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:37.657289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:37.657335   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:37.691674   67607 cri.go:89] found id: ""
	I0829 20:27:37.691703   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.691711   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:37.691716   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:37.691764   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:37.729523   67607 cri.go:89] found id: ""
	I0829 20:27:37.729548   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.729557   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:37.729562   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:37.729609   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:37.764601   67607 cri.go:89] found id: ""
	I0829 20:27:37.764629   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.764637   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:37.764643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:37.764705   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:37.799228   67607 cri.go:89] found id: ""
	I0829 20:27:37.799259   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.799270   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:37.799281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:37.799301   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:37.848128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:37.848158   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:37.862610   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:37.862640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:37.936859   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:37.936888   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:37.936903   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:38.013647   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:38.013681   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:34.331889   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:36.332334   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.329545   66841 pod_ready.go:93] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.329566   66841 pod_ready.go:82] duration metric: took 7.50501178s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.329576   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.333442   66841 pod_ready.go:93] pod "kube-apiserver-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.333458   66841 pod_ready.go:82] duration metric: took 3.876755ms for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.333467   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.336952   66841 pod_ready.go:93] pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.336968   66841 pod_ready.go:82] duration metric: took 3.49531ms for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.336976   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-57kbt" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.340368   66841 pod_ready.go:93] pod "kube-proxy-57kbt" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.340383   66841 pod_ready.go:82] duration metric: took 3.401844ms for pod "kube-proxy-57kbt" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.340396   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.344111   66841 pod_ready.go:93] pod "kube-scheduler-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.344125   66841 pod_ready.go:82] duration metric: took 3.723924ms for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.344132   66841 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" ...
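
pod_ready walks each system-critical pod and polls its Ready condition: coredns took 5.5s, etcd 7.5s, the remaining static pods were already Ready within milliseconds, and the loop now blocks on metrics-server, which never reports Ready (the recurring "Ready":"False" lines interleaved throughout this section). A one-shot equivalent for a single pod with kubectl, assuming the kubeconfig context carries the profile name:

kubectl --context no-preload-397724 -n kube-system wait pod/etcd-no-preload-397724 \
  --for=condition=Ready --timeout=4m
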
	I0829 20:27:34.708682   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.206876   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.997827   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:40.494840   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:40.551395   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:40.568100   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:40.568181   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:40.616582   67607 cri.go:89] found id: ""
	I0829 20:27:40.616611   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.616623   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:40.616631   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:40.616695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:40.690580   67607 cri.go:89] found id: ""
	I0829 20:27:40.690620   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.690631   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:40.690638   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:40.690695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:40.733624   67607 cri.go:89] found id: ""
	I0829 20:27:40.733653   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.733662   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:40.733670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:40.733733   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:40.767499   67607 cri.go:89] found id: ""
	I0829 20:27:40.767528   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.767538   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:40.767546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:40.767619   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:40.806973   67607 cri.go:89] found id: ""
	I0829 20:27:40.807002   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.807009   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:40.807015   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:40.807079   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:40.842311   67607 cri.go:89] found id: ""
	I0829 20:27:40.842334   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.842341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:40.842347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:40.842401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:40.880208   67607 cri.go:89] found id: ""
	I0829 20:27:40.880238   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.880248   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:40.880255   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:40.880309   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:40.918395   67607 cri.go:89] found id: ""
	I0829 20:27:40.918424   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.918435   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:40.918445   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:40.918459   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:40.972396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:40.972437   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:40.986136   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:40.986169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:41.064600   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:41.064623   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:41.064634   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:41.146653   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:41.146687   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:43.687773   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:43.701576   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:43.701645   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:43.737259   67607 cri.go:89] found id: ""
	I0829 20:27:43.737282   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.737289   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:43.737299   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:43.737346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:43.772678   67607 cri.go:89] found id: ""
	I0829 20:27:43.772702   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.772709   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:43.772714   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:43.772776   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:43.806788   67607 cri.go:89] found id: ""
	I0829 20:27:43.806821   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.806831   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:43.806839   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:43.806900   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:39.350484   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:41.352279   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:43.850564   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:39.707977   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:42.207630   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:42.993571   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:44.994696   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:43.841738   67607 cri.go:89] found id: ""
	I0829 20:27:43.841759   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.841767   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:43.841772   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:43.841829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:43.878420   67607 cri.go:89] found id: ""
	I0829 20:27:43.878449   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.878459   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:43.878466   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:43.878527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:43.914307   67607 cri.go:89] found id: ""
	I0829 20:27:43.914335   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.914345   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:43.914352   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:43.914413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:43.958827   67607 cri.go:89] found id: ""
	I0829 20:27:43.958853   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.958865   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:43.958871   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:43.958935   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:43.997397   67607 cri.go:89] found id: ""
	I0829 20:27:43.997423   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.997432   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:43.997442   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:43.997455   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:44.049245   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:44.049280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:44.063473   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:44.063511   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:44.131628   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:44.131651   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:44.131666   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:44.210826   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:44.210854   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:46.754905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:46.769531   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:46.769588   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:46.805245   67607 cri.go:89] found id: ""
	I0829 20:27:46.805272   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.805280   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:46.805285   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:46.805338   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:46.843606   67607 cri.go:89] found id: ""
	I0829 20:27:46.843637   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.843646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:46.843654   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:46.843710   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:46.880300   67607 cri.go:89] found id: ""
	I0829 20:27:46.880326   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.880333   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:46.880338   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:46.880387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:46.923537   67607 cri.go:89] found id: ""
	I0829 20:27:46.923562   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.923569   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:46.923574   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:46.923620   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:46.957774   67607 cri.go:89] found id: ""
	I0829 20:27:46.957806   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.957817   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:46.957826   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:46.957887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:46.996972   67607 cri.go:89] found id: ""
	I0829 20:27:46.996995   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.997005   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:46.997013   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:46.997056   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:47.030560   67607 cri.go:89] found id: ""
	I0829 20:27:47.030588   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.030606   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:47.030612   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:47.030665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:47.068654   67607 cri.go:89] found id: ""
	I0829 20:27:47.068678   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.068686   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:47.068694   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:47.068706   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:47.082335   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:47.082367   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:47.162792   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:47.162817   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:47.162829   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:47.241456   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:47.241491   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:47.282249   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:47.282274   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:45.850673   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:47.850836   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:44.707198   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:46.707222   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.207556   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:46.995302   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.498812   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.836268   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:49.850415   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:49.850491   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:49.887816   67607 cri.go:89] found id: ""
	I0829 20:27:49.887843   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.887851   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:49.887856   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:49.887916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:49.923701   67607 cri.go:89] found id: ""
	I0829 20:27:49.923735   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.923745   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:49.923755   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:49.923818   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:49.958197   67607 cri.go:89] found id: ""
	I0829 20:27:49.958225   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.958236   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:49.958244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:49.958313   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:49.995333   67607 cri.go:89] found id: ""
	I0829 20:27:49.995361   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.995373   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:49.995380   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:49.995439   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:50.034345   67607 cri.go:89] found id: ""
	I0829 20:27:50.034375   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.034382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:50.034387   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:50.034438   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:50.070324   67607 cri.go:89] found id: ""
	I0829 20:27:50.070355   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.070365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:50.070374   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:50.070434   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:50.107301   67607 cri.go:89] found id: ""
	I0829 20:27:50.107326   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.107334   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:50.107340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:50.107400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:50.144748   67607 cri.go:89] found id: ""
	I0829 20:27:50.144778   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.144788   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:50.144800   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:50.144816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:50.183576   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:50.183606   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:50.236716   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:50.236750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:50.251589   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:50.251612   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:50.317816   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:50.317840   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:50.317855   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
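	Each cycle above is the same diagnostic sweep: pgrep for an apiserver process, a crictl listing for every expected control-plane container (all of which come back empty), then the standard log gathers. Collected into one runnable sketch using the exact commands from the log; the kubectl binary path is specific to this v1.20.0 run.

	    #!/usr/bin/env bash
	    # One pass of the sweep, commands taken verbatim from the log above.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # List containers in any state for each expected component; every
	    # listing in this log returns no IDs.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"
	    done

	    # Log gathers: kubelet and CRI-O units, recent kernel warnings,
	    # node description (fails while the apiserver is down), and a
	    # container listing with a docker fallback.
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a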
	I0829 20:27:52.894572   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:52.908081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:52.908149   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:52.945272   67607 cri.go:89] found id: ""
	I0829 20:27:52.945299   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.945309   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:52.945317   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:52.945377   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:52.980237   67607 cri.go:89] found id: ""
	I0829 20:27:52.980262   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.980270   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:52.980275   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:52.980325   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:53.017894   67607 cri.go:89] found id: ""
	I0829 20:27:53.017922   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.017929   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:53.017935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:53.017991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:53.052577   67607 cri.go:89] found id: ""
	I0829 20:27:53.052603   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.052611   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:53.052616   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:53.052667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:53.093414   67607 cri.go:89] found id: ""
	I0829 20:27:53.093444   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.093455   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:53.093462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:53.093523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:53.130794   67607 cri.go:89] found id: ""
	I0829 20:27:53.130825   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.130837   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:53.130845   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:53.130902   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:53.163793   67607 cri.go:89] found id: ""
	I0829 20:27:53.163819   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.163827   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:53.163832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:53.163882   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:53.204824   67607 cri.go:89] found id: ""
	I0829 20:27:53.204852   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.204862   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:53.204872   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:53.204885   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:53.243411   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:53.243440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:53.296611   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:53.296642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:53.310909   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:53.310943   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:53.385768   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:53.385790   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:53.385801   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:49.851712   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:52.350295   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:51.711115   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:54.207340   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:51.993943   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:53.996334   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.494226   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:55.966801   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:55.980852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:55.980933   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:56.017682   67607 cri.go:89] found id: ""
	I0829 20:27:56.017707   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.017716   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:56.017722   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:56.017767   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:56.051556   67607 cri.go:89] found id: ""
	I0829 20:27:56.051584   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.051594   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:56.051600   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:56.051665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:56.095301   67607 cri.go:89] found id: ""
	I0829 20:27:56.095330   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.095340   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:56.095348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:56.095408   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:56.131161   67607 cri.go:89] found id: ""
	I0829 20:27:56.131195   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.131205   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:56.131213   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:56.131269   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:56.166611   67607 cri.go:89] found id: ""
	I0829 20:27:56.166637   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.166645   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:56.166651   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:56.166713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:56.202818   67607 cri.go:89] found id: ""
	I0829 20:27:56.202846   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.202856   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:56.202864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:56.202923   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:56.237855   67607 cri.go:89] found id: ""
	I0829 20:27:56.237883   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.237891   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:56.237897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:56.237955   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:56.272402   67607 cri.go:89] found id: ""
	I0829 20:27:56.272426   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.272433   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:56.272441   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:56.272452   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:56.351628   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:56.351653   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:56.389525   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:56.389559   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:56.444952   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:56.444989   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:56.459731   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:56.459759   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:56.536888   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:54.350358   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.350727   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.352884   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.208050   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.706897   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.993153   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:00.993544   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:59.037744   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:59.051868   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:59.051938   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:59.087436   67607 cri.go:89] found id: ""
	I0829 20:27:59.087461   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.087467   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:59.087474   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:59.087531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:59.123729   67607 cri.go:89] found id: ""
	I0829 20:27:59.123757   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.123765   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:59.123771   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:59.123825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:59.168649   67607 cri.go:89] found id: ""
	I0829 20:27:59.168682   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.168690   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:59.168696   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:59.168753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:59.209770   67607 cri.go:89] found id: ""
	I0829 20:27:59.209791   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.209803   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:59.209808   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:59.209854   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:59.248358   67607 cri.go:89] found id: ""
	I0829 20:27:59.248384   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.248392   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:59.248398   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:59.248445   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:59.281770   67607 cri.go:89] found id: ""
	I0829 20:27:59.281797   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.281805   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:59.281811   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:59.281870   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:59.317255   67607 cri.go:89] found id: ""
	I0829 20:27:59.317285   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.317295   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:59.317302   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:59.317363   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:59.354301   67607 cri.go:89] found id: ""
	I0829 20:27:59.354324   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.354332   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:59.354339   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:59.354352   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:59.438346   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:59.438382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:59.482482   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:59.482513   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:59.540926   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:59.540961   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:59.555221   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:59.555258   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:59.622114   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
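	Every describe-nodes attempt fails the same way: connection refused on localhost:8443, which is consistent with the empty crictl listings (there is no apiserver container to accept the connection). Two quick probes that separate "apiserver not running" from "kubeconfig pointing at the wrong endpoint"; these are ordinary host tools offered as a sketch, not commands minikube itself runs here.

	    # Is anything listening on the apiserver port inside the guest?
	    sudo ss -ltn 'sport = :8443'

	    # If a listener exists, is it a healthy apiserver? (-k skips TLS
	    # verification; /healthz is anonymously readable under default RBAC.)
	    curl -k --connect-timeout 2 https://localhost:8443/healthz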
	I0829 20:28:02.123276   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:02.137435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:02.137502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:02.176310   67607 cri.go:89] found id: ""
	I0829 20:28:02.176340   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.176347   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:02.176355   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:02.176414   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:02.216511   67607 cri.go:89] found id: ""
	I0829 20:28:02.216555   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.216562   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:02.216574   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:02.216625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:02.260116   67607 cri.go:89] found id: ""
	I0829 20:28:02.260149   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.260158   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:02.260164   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:02.260225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:02.301550   67607 cri.go:89] found id: ""
	I0829 20:28:02.301584   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.301600   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:02.301608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:02.301692   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:02.335916   67607 cri.go:89] found id: ""
	I0829 20:28:02.335948   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.335959   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:02.335967   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:02.336033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:02.372479   67607 cri.go:89] found id: ""
	I0829 20:28:02.372507   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.372515   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:02.372522   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:02.372584   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:02.406683   67607 cri.go:89] found id: ""
	I0829 20:28:02.406713   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.406721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:02.406727   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:02.406774   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:02.443130   67607 cri.go:89] found id: ""
	I0829 20:28:02.443156   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.443164   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:02.443173   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:02.443185   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:02.485747   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:02.485777   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:02.540106   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:02.540143   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:02.556158   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:02.556188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:02.637870   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:02.637900   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:02.637915   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:00.851416   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:03.351248   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:00.707716   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:02.708204   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:02.994108   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:04.994988   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:05.220330   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:05.233932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:05.233994   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:05.269046   67607 cri.go:89] found id: ""
	I0829 20:28:05.269072   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.269081   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:05.269087   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:05.269134   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:05.303963   67607 cri.go:89] found id: ""
	I0829 20:28:05.303989   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.303999   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:05.304006   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:05.304065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:05.340943   67607 cri.go:89] found id: ""
	I0829 20:28:05.340975   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.340985   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:05.340992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:05.341061   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:05.379551   67607 cri.go:89] found id: ""
	I0829 20:28:05.379582   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.379593   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:05.379601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:05.379659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:05.414229   67607 cri.go:89] found id: ""
	I0829 20:28:05.414256   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.414267   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:05.414274   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:05.414339   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:05.450212   67607 cri.go:89] found id: ""
	I0829 20:28:05.450241   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.450251   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:05.450258   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:05.450318   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:05.487415   67607 cri.go:89] found id: ""
	I0829 20:28:05.487451   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.487463   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:05.487470   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:05.487529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:05.521347   67607 cri.go:89] found id: ""
	I0829 20:28:05.521370   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.521383   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:05.521390   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:05.521402   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:05.572317   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:05.572350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:05.585651   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:05.585680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:05.653929   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:05.653950   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:05.653969   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:05.732843   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:05.732873   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:08.281983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:08.295104   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:08.295166   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:08.328570   67607 cri.go:89] found id: ""
	I0829 20:28:08.328596   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.328605   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:08.328613   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:08.328684   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:08.363567   67607 cri.go:89] found id: ""
	I0829 20:28:08.363595   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.363605   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:08.363613   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:08.363672   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:08.399619   67607 cri.go:89] found id: ""
	I0829 20:28:08.399645   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.399653   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:08.399659   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:08.399707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:08.439252   67607 cri.go:89] found id: ""
	I0829 20:28:08.439283   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.439294   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:08.439301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:08.439357   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:08.477730   67607 cri.go:89] found id: ""
	I0829 20:28:08.477754   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.477762   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:08.477768   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:08.477834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:08.522045   67607 cri.go:89] found id: ""
	I0829 20:28:08.522066   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.522073   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:08.522079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:08.522137   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:08.560400   67607 cri.go:89] found id: ""
	I0829 20:28:08.560427   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.560434   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:08.560441   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:08.560504   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:08.599111   67607 cri.go:89] found id: ""
	I0829 20:28:08.599140   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.599150   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:08.599161   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:08.599175   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:08.681451   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:08.681487   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:08.722800   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:08.722835   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:08.779058   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:08.779089   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:08.796940   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:08.796963   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:28:05.852245   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:08.351402   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:04.708669   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:07.207124   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:07.493431   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:09.493794   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	W0829 20:28:08.868296   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:11.369316   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:11.384150   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:11.384225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:11.418452   67607 cri.go:89] found id: ""
	I0829 20:28:11.418480   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.418488   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:11.418494   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:11.418555   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:11.451359   67607 cri.go:89] found id: ""
	I0829 20:28:11.451389   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.451400   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:11.451408   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:11.451481   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:11.488408   67607 cri.go:89] found id: ""
	I0829 20:28:11.488436   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.488446   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:11.488453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:11.488510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:11.528311   67607 cri.go:89] found id: ""
	I0829 20:28:11.528340   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.528351   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:11.528359   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:11.528412   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:11.571345   67607 cri.go:89] found id: ""
	I0829 20:28:11.571372   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.571382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:11.571389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:11.571454   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:11.606812   67607 cri.go:89] found id: ""
	I0829 20:28:11.606839   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.606850   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:11.606857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:11.606918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:11.652687   67607 cri.go:89] found id: ""
	I0829 20:28:11.652710   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.652717   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:11.652722   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:11.652781   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:11.687583   67607 cri.go:89] found id: ""
	I0829 20:28:11.687628   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.687645   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:11.687655   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:11.687673   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:11.727052   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:11.727086   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:11.779116   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:11.779155   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:11.792911   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:11.792949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:11.868415   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:11.868443   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:11.868461   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:10.850225   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:13.351638   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:09.707347   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:11.709556   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.206996   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:11.994187   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.494457   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.447886   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:14.462144   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:14.462221   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:14.499160   67607 cri.go:89] found id: ""
	I0829 20:28:14.499185   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.499193   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:14.499200   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:14.499258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:14.545736   67607 cri.go:89] found id: ""
	I0829 20:28:14.545764   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.545774   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:14.545780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:14.545844   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:14.583626   67607 cri.go:89] found id: ""
	I0829 20:28:14.583664   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.583674   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:14.583682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:14.583744   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:14.619876   67607 cri.go:89] found id: ""
	I0829 20:28:14.619909   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.619917   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:14.619923   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:14.619975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:14.655750   67607 cri.go:89] found id: ""
	I0829 20:28:14.655778   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.655786   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:14.655791   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:14.655848   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:14.690759   67607 cri.go:89] found id: ""
	I0829 20:28:14.690785   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.690795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:14.690800   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:14.690850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:14.727238   67607 cri.go:89] found id: ""
	I0829 20:28:14.727269   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.727282   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:14.727289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:14.727344   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:14.765962   67607 cri.go:89] found id: ""
	I0829 20:28:14.765996   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.766006   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:14.766017   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:14.766033   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:14.835749   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:14.835779   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:14.835797   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:14.914075   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:14.914112   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:14.952684   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:14.952712   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:15.004598   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:15.004635   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:17.518949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:17.532175   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:17.532250   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:17.569943   67607 cri.go:89] found id: ""
	I0829 20:28:17.569971   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.569979   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:17.569985   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:17.570044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:17.605472   67607 cri.go:89] found id: ""
	I0829 20:28:17.605502   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.605510   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:17.605515   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:17.605566   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:17.641568   67607 cri.go:89] found id: ""
	I0829 20:28:17.641593   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.641603   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:17.641610   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:17.641669   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:17.680870   67607 cri.go:89] found id: ""
	I0829 20:28:17.680895   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.680905   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:17.680916   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:17.680981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:17.723546   67607 cri.go:89] found id: ""
	I0829 20:28:17.723576   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.723587   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:17.723594   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:17.723659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:17.757934   67607 cri.go:89] found id: ""
	I0829 20:28:17.757962   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.757973   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:17.757980   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:17.758028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:17.792641   67607 cri.go:89] found id: ""
	I0829 20:28:17.792670   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.792679   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:17.792685   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:17.792738   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:17.830776   67607 cri.go:89] found id: ""
	I0829 20:28:17.830800   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.830807   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:17.830815   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:17.830825   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:17.886331   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:17.886377   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:17.900111   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:17.900135   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:17.969538   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:17.969563   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:17.969577   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:18.050609   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:18.050649   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:15.850497   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:17.851663   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:16.707415   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:19.207313   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:16.994325   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:19.494247   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:20.590686   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:20.605066   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:20.605121   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:20.646028   67607 cri.go:89] found id: ""
	I0829 20:28:20.646058   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.646074   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:20.646082   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:20.646143   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:20.683433   67607 cri.go:89] found id: ""
	I0829 20:28:20.683469   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.683479   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:20.683487   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:20.683567   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:20.722737   67607 cri.go:89] found id: ""
	I0829 20:28:20.722765   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.722775   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:20.722782   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:20.722841   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:20.759777   67607 cri.go:89] found id: ""
	I0829 20:28:20.759800   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.759807   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:20.759812   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:20.759864   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:20.799142   67607 cri.go:89] found id: ""
	I0829 20:28:20.799164   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.799170   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:20.799176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:20.799223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:20.838331   67607 cri.go:89] found id: ""
	I0829 20:28:20.838357   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.838365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:20.838371   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:20.838427   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:20.878066   67607 cri.go:89] found id: ""
	I0829 20:28:20.878099   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.878110   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:20.878117   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:20.878175   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:20.928940   67607 cri.go:89] found id: ""
	I0829 20:28:20.928966   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.928975   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:20.928982   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:20.928993   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:20.984435   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:20.984471   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:21.005860   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:21.005900   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:21.084092   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:21.084123   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:21.084138   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:21.165971   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:21.166009   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
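	
	Every "describe nodes" attempt in these passes fails the same way: the connection to localhost:8443 is refused, which is consistent with the empty kube-apiserver probe just before it. A quick manual check on the guest (not something the test itself runs; ss and curl are assumed to be available) might look like:
	
	    # Is anything listening on the apiserver port at all?
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"
	    # If something is listening, does it answer health checks?
	    # (-k because the cluster CA is not in the host trust store)
	    curl -ks https://localhost:8443/healthz || echo "apiserver not responding"
	
	"Connection refused" (rather than a TLS or HTTP error) suggests the first command would come back empty: there is no listener, not a misbehaving one.
	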
	I0829 20:28:23.705033   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:23.718332   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:23.718390   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:23.753594   67607 cri.go:89] found id: ""
	I0829 20:28:23.753625   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.753635   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:23.753650   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:23.753715   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:23.791840   67607 cri.go:89] found id: ""
	I0829 20:28:23.791864   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.791872   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:23.791878   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:23.791930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:20.350028   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:22.350487   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:21.207839   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.707197   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:21.993965   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.994879   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.493735   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.837815   67607 cri.go:89] found id: ""
	I0829 20:28:23.837839   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.837846   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:23.837851   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:23.837908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:23.873155   67607 cri.go:89] found id: ""
	I0829 20:28:23.873184   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.873194   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:23.873201   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:23.873265   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:23.908728   67607 cri.go:89] found id: ""
	I0829 20:28:23.908757   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.908768   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:23.908774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:23.908834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:23.946286   67607 cri.go:89] found id: ""
	I0829 20:28:23.946310   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.946320   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:23.946328   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:23.946392   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:23.983078   67607 cri.go:89] found id: ""
	I0829 20:28:23.983105   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.983115   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:23.983129   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:23.983190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:24.020601   67607 cri.go:89] found id: ""
	I0829 20:28:24.020634   67607 logs.go:276] 0 containers: []
	W0829 20:28:24.020644   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:24.020654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:24.020669   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:24.034438   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:24.034463   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:24.103209   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:24.103230   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:24.103243   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:24.182977   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:24.183016   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:24.224743   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:24.224834   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:26.781507   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:26.794301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:26.794387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:26.827218   67607 cri.go:89] found id: ""
	I0829 20:28:26.827243   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.827250   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:26.827257   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:26.827303   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:26.862643   67607 cri.go:89] found id: ""
	I0829 20:28:26.862673   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.862685   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:26.862693   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:26.862743   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:26.898127   67607 cri.go:89] found id: ""
	I0829 20:28:26.898159   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.898169   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:26.898177   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:26.898237   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:26.932119   67607 cri.go:89] found id: ""
	I0829 20:28:26.932146   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.932167   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:26.932174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:26.932241   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:26.966380   67607 cri.go:89] found id: ""
	I0829 20:28:26.966413   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.966421   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:26.966427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:26.966478   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:27.004350   67607 cri.go:89] found id: ""
	I0829 20:28:27.004372   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.004379   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:27.004386   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:27.004436   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:27.041171   67607 cri.go:89] found id: ""
	I0829 20:28:27.041199   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.041206   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:27.041212   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:27.041257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:27.073993   67607 cri.go:89] found id: ""
	I0829 20:28:27.074031   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.074041   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:27.074053   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:27.074066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:27.148169   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:27.148199   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:27.148214   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:27.227174   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:27.227212   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:27.267180   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:27.267230   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:27.319034   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:27.319066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
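	
	With no containers to inspect, each pass falls back to host-level evidence, as the "Gathering logs for ..." lines show. The same data can be pulled by hand with the commands taken verbatim from the log (systemd units kubelet and crio, as on the minikube guest):
	
	    # Last 400 lines of the kubelet and CRI-O journals
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    # Kernel warnings and errors, human-readable, without a pager
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # Container status, falling back to docker if crictl is absent
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	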
	I0829 20:28:24.350754   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.850582   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.207974   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:28.707820   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:28.494090   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:30.994157   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:29.833497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:29.846883   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:29.846951   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:29.884133   67607 cri.go:89] found id: ""
	I0829 20:28:29.884163   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.884175   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:29.884182   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:29.884247   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:29.917594   67607 cri.go:89] found id: ""
	I0829 20:28:29.917618   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.917628   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:29.917636   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:29.917696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:29.952537   67607 cri.go:89] found id: ""
	I0829 20:28:29.952568   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.952576   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:29.952582   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:29.952630   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:29.988410   67607 cri.go:89] found id: ""
	I0829 20:28:29.988441   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.988448   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:29.988454   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:29.988511   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:30.026761   67607 cri.go:89] found id: ""
	I0829 20:28:30.026788   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.026796   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:30.026802   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:30.026861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:30.063010   67607 cri.go:89] found id: ""
	I0829 20:28:30.063037   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.063046   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:30.063054   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:30.063109   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:30.098067   67607 cri.go:89] found id: ""
	I0829 20:28:30.098093   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.098101   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:30.098107   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:30.098161   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:30.132887   67607 cri.go:89] found id: ""
	I0829 20:28:30.132914   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.132921   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:30.132928   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:30.132940   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:30.184955   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:30.184990   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:30.198966   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:30.199004   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:30.268950   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:30.268977   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:30.268991   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:30.354222   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:30.354260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:32.896554   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:32.911188   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:32.911271   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:32.945726   67607 cri.go:89] found id: ""
	I0829 20:28:32.945750   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.945758   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:32.945773   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:32.945829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:32.980234   67607 cri.go:89] found id: ""
	I0829 20:28:32.980267   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.980275   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:32.980281   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:32.980329   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:33.019031   67607 cri.go:89] found id: ""
	I0829 20:28:33.019063   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.019071   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:33.019076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:33.019126   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:33.056290   67607 cri.go:89] found id: ""
	I0829 20:28:33.056314   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.056322   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:33.056327   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:33.056391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:33.090038   67607 cri.go:89] found id: ""
	I0829 20:28:33.090068   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.090078   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:33.090086   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:33.090152   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:33.125742   67607 cri.go:89] found id: ""
	I0829 20:28:33.125774   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.125782   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:33.125787   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:33.125849   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:33.159019   67607 cri.go:89] found id: ""
	I0829 20:28:33.159047   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.159058   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:33.159065   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:33.159125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:33.197900   67607 cri.go:89] found id: ""
	I0829 20:28:33.197925   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.197933   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:33.197941   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:33.197955   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:33.250010   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:33.250040   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:33.263348   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:33.263374   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:33.342037   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:33.342065   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:33.342082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:33.423324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:33.423361   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:29.350275   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:31.350994   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:33.850866   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:30.713472   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:33.207271   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:32.995169   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.493980   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.963734   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:35.978648   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:35.978713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:36.015326   67607 cri.go:89] found id: ""
	I0829 20:28:36.015350   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.015358   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:36.015364   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:36.015411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:36.050840   67607 cri.go:89] found id: ""
	I0829 20:28:36.050869   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.050879   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:36.050886   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:36.050947   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:36.084048   67607 cri.go:89] found id: ""
	I0829 20:28:36.084076   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.084084   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:36.084090   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:36.084138   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:36.118655   67607 cri.go:89] found id: ""
	I0829 20:28:36.118682   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.118693   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:36.118702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:36.118762   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:36.153879   67607 cri.go:89] found id: ""
	I0829 20:28:36.153908   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.153918   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:36.153926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:36.153988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:36.199834   67607 cri.go:89] found id: ""
	I0829 20:28:36.199858   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.199866   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:36.199872   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:36.199927   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:36.238098   67607 cri.go:89] found id: ""
	I0829 20:28:36.238129   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.238139   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:36.238146   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:36.238208   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:36.272091   67607 cri.go:89] found id: ""
	I0829 20:28:36.272124   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.272135   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:36.272146   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:36.272162   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:36.338478   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:36.338498   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:36.338510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:36.418637   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:36.418671   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:36.458167   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:36.458194   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:36.508592   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:36.508630   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
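	
	Interleaved with the apiserver probes from process 67607, three other test processes (66841, 66989, 68084) keep polling metrics-server pods whose Ready condition stays False. A rough kubectl equivalent of that poll, as a sketch only: the context name is a placeholder (the excerpt does not name the contexts), and the upstream k8s-app=metrics-server label is assumed:
	
	    # Poll the Ready condition of the metrics-server pod, as pod_ready.go does.
	    CTX=my-cluster   # placeholder context name
	    POD=$(kubectl --context "$CTX" -n kube-system get pods \
	            -l k8s-app=metrics-server -o jsonpath='{.items[0].metadata.name}')
	    kubectl --context "$CTX" -n kube-system get pod "$POD" \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	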
	I0829 20:28:36.351066   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:38.849684   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.706813   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:37.708058   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:38.003178   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:40.493065   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:39.022668   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:39.035897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:39.035971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:39.071155   67607 cri.go:89] found id: ""
	I0829 20:28:39.071185   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.071196   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:39.071203   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:39.071258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:39.104135   67607 cri.go:89] found id: ""
	I0829 20:28:39.104177   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.104188   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:39.104206   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:39.104266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:39.138301   67607 cri.go:89] found id: ""
	I0829 20:28:39.138329   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.138339   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:39.138346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:39.138404   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:39.172674   67607 cri.go:89] found id: ""
	I0829 20:28:39.172700   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.172708   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:39.172719   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:39.172779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:39.209810   67607 cri.go:89] found id: ""
	I0829 20:28:39.209836   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.209845   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:39.209852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:39.209915   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:39.248692   67607 cri.go:89] found id: ""
	I0829 20:28:39.248715   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.248722   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:39.248728   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:39.248798   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:39.284303   67607 cri.go:89] found id: ""
	I0829 20:28:39.284333   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.284343   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:39.284351   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:39.284401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:39.321346   67607 cri.go:89] found id: ""
	I0829 20:28:39.321375   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.321386   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:39.321396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:39.321410   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:39.334678   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:39.334710   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:39.421992   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:39.422014   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:39.422027   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:39.503250   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:39.503280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:39.540623   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:39.540654   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.092131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:42.105440   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:42.105498   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:42.140994   67607 cri.go:89] found id: ""
	I0829 20:28:42.141024   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.141034   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:42.141042   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:42.141102   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:42.175182   67607 cri.go:89] found id: ""
	I0829 20:28:42.175217   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.175228   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:42.175248   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:42.175319   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:42.209251   67607 cri.go:89] found id: ""
	I0829 20:28:42.209281   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.209291   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:42.209299   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:42.209362   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:42.247944   67607 cri.go:89] found id: ""
	I0829 20:28:42.247970   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.247977   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:42.247983   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:42.248028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:42.285613   67607 cri.go:89] found id: ""
	I0829 20:28:42.285644   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.285651   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:42.285657   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:42.285722   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:42.319826   67607 cri.go:89] found id: ""
	I0829 20:28:42.319851   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.319858   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:42.319864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:42.319928   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:42.357150   67607 cri.go:89] found id: ""
	I0829 20:28:42.357173   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.357182   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:42.357189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:42.357243   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:42.392150   67607 cri.go:89] found id: ""
	I0829 20:28:42.392170   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.392178   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:42.392185   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:42.392197   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:42.469240   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:42.469271   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:42.469286   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:42.549165   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:42.549198   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:42.591900   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:42.591930   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.642593   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:42.642625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:40.851544   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:43.350420   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:39.708341   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:42.206888   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:44.207934   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:42.494791   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:44.992992   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:45.157092   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:45.170832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:45.170916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:45.207210   67607 cri.go:89] found id: ""
	I0829 20:28:45.207235   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.207244   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:45.207251   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:45.207308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:45.245321   67607 cri.go:89] found id: ""
	I0829 20:28:45.245352   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.245362   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:45.245379   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:45.245448   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:45.280326   67607 cri.go:89] found id: ""
	I0829 20:28:45.280369   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.280381   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:45.280389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:45.280451   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:45.318294   67607 cri.go:89] found id: ""
	I0829 20:28:45.318322   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.318333   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:45.318340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:45.318411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:45.352903   67607 cri.go:89] found id: ""
	I0829 20:28:45.352925   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.352932   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:45.352938   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:45.352990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:45.389251   67607 cri.go:89] found id: ""
	I0829 20:28:45.389273   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.389280   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:45.389286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:45.389340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:45.424348   67607 cri.go:89] found id: ""
	I0829 20:28:45.424385   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.424397   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:45.424404   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:45.424453   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:45.459058   67607 cri.go:89] found id: ""
	I0829 20:28:45.459087   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.459098   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:45.459109   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:45.459124   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:45.510386   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:45.510423   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:45.524896   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:45.524923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:45.593987   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:45.594064   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:45.594082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:45.668738   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:45.668771   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
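	
	Each pass opens with a process check before the CRI sweep: pgrep looks for the newest process whose full command line matches both kube-apiserver and minikube. Reproduced on its own (the pattern is exactly the one in the log):
	
	    # -f: match against the full command line, -x: require a whole-line match,
	    # -n: report only the newest matching process
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
	      && echo "apiserver process found" \
	      || echo "no apiserver process"
	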
	I0829 20:28:48.206497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:48.219625   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:48.219696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:48.254936   67607 cri.go:89] found id: ""
	I0829 20:28:48.254959   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.254966   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:48.254971   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:48.255018   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:48.290826   67607 cri.go:89] found id: ""
	I0829 20:28:48.290851   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.290859   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:48.290864   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:48.290910   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:48.327508   67607 cri.go:89] found id: ""
	I0829 20:28:48.327533   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.327540   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:48.327546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:48.327593   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:48.364492   67607 cri.go:89] found id: ""
	I0829 20:28:48.364517   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.364525   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:48.364530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:48.364580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:48.400035   67607 cri.go:89] found id: ""
	I0829 20:28:48.400062   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.400072   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:48.400079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:48.400144   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:48.433999   67607 cri.go:89] found id: ""
	I0829 20:28:48.434026   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.434035   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:48.434043   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:48.434104   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:48.468841   67607 cri.go:89] found id: ""
	I0829 20:28:48.468873   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.468889   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:48.468903   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:48.468971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:48.506557   67607 cri.go:89] found id: ""
	I0829 20:28:48.506589   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.506598   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:48.506609   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:48.506624   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:48.577023   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:48.577044   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:48.577056   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:48.654372   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:48.654407   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:48.691125   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:48.691152   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:48.746383   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:48.746414   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:45.350581   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:47.351437   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:46.705575   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:48.707018   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:46.993532   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:48.994284   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.494177   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.260591   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:51.273911   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:51.273974   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:51.311517   67607 cri.go:89] found id: ""
	I0829 20:28:51.311545   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.311553   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:51.311567   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:51.311616   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:51.348220   67607 cri.go:89] found id: ""
	I0829 20:28:51.348247   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.348256   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:51.348264   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:51.348321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:51.383560   67607 cri.go:89] found id: ""
	I0829 20:28:51.383599   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.383611   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:51.383619   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:51.383680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:51.419241   67607 cri.go:89] found id: ""
	I0829 20:28:51.419268   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.419278   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:51.419286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:51.419343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:51.453954   67607 cri.go:89] found id: ""
	I0829 20:28:51.453979   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.453986   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:51.453992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:51.454047   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:51.489457   67607 cri.go:89] found id: ""
	I0829 20:28:51.489480   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.489488   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:51.489493   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:51.489544   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:51.524072   67607 cri.go:89] found id: ""
	I0829 20:28:51.524100   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.524107   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:51.524113   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:51.524160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:51.561238   67607 cri.go:89] found id: ""
	I0829 20:28:51.561263   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.561271   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:51.561279   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:51.561290   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:51.615422   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:51.615462   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:51.632180   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:51.632216   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:51.704335   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:51.704363   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:51.704378   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:51.794219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:51.794260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:49.852140   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:52.351142   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.205903   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:53.207651   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:53.495412   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:55.993489   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:54.342556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:54.356325   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:54.356400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:54.390928   67607 cri.go:89] found id: ""
	I0829 20:28:54.390952   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.390959   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:54.390965   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:54.391011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:54.426970   67607 cri.go:89] found id: ""
	I0829 20:28:54.427002   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.427013   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:54.427020   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:54.427074   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:54.464121   67607 cri.go:89] found id: ""
	I0829 20:28:54.464155   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.464166   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:54.464174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:54.464236   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:54.499790   67607 cri.go:89] found id: ""
	I0829 20:28:54.499816   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.499827   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:54.499840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:54.499889   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:54.537212   67607 cri.go:89] found id: ""
	I0829 20:28:54.537239   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.537249   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:54.537256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:54.537314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:54.575370   67607 cri.go:89] found id: ""
	I0829 20:28:54.575399   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.575410   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:54.575417   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:54.575469   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:54.608403   67607 cri.go:89] found id: ""
	I0829 20:28:54.608432   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.608443   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:54.608453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:54.608514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:54.645259   67607 cri.go:89] found id: ""
	I0829 20:28:54.645285   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.645292   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:54.645300   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:54.645311   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:54.697022   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:54.697063   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:54.712873   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:54.712914   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:54.814253   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:54.814278   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:54.814295   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:54.896473   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:54.896507   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
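[Editor's note] Every "describe nodes" attempt in these cycles fails with "The connection to the server localhost:8443 was refused", i.e. nothing is listening on the apiserver's default secure port inside the VM. A quick standalone Go probe (a sketch, not part of minikube) that reproduces the same check:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Matches the failure mode above: TCP connect to the apiserver port.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // e.g. connection refused
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
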
	I0829 20:28:57.441648   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:57.455245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:57.455321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:57.495365   67607 cri.go:89] found id: ""
	I0829 20:28:57.495397   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.495405   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:57.495411   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:57.495472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:57.529555   67607 cri.go:89] found id: ""
	I0829 20:28:57.529582   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.529590   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:57.529597   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:57.529667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:57.564168   67607 cri.go:89] found id: ""
	I0829 20:28:57.564196   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.564208   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:57.564215   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:57.564277   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:57.602057   67607 cri.go:89] found id: ""
	I0829 20:28:57.602089   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.602100   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:57.602108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:57.602194   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:57.638195   67607 cri.go:89] found id: ""
	I0829 20:28:57.638226   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.638235   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:57.638244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:57.638307   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:57.674556   67607 cri.go:89] found id: ""
	I0829 20:28:57.674605   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.674615   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:57.674623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:57.674680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:57.709256   67607 cri.go:89] found id: ""
	I0829 20:28:57.709282   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.709291   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:57.709298   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:57.709358   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:57.743629   67607 cri.go:89] found id: ""
	I0829 20:28:57.743652   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.743659   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:57.743668   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:57.743679   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:57.789067   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:57.789098   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:57.843372   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:57.843403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:57.858630   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:57.858661   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:57.927776   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:57.927798   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:57.927814   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:54.850906   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:56.851300   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:55.208638   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:57.707756   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:57.994287   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.493343   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.508180   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:00.521451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:00.521529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:00.557912   67607 cri.go:89] found id: ""
	I0829 20:29:00.557938   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.557945   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:00.557951   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:00.557997   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:00.595186   67607 cri.go:89] found id: ""
	I0829 20:29:00.595215   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.595226   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:00.595237   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:00.595299   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:00.631553   67607 cri.go:89] found id: ""
	I0829 20:29:00.631581   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.631592   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:00.631600   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:00.631660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:00.666502   67607 cri.go:89] found id: ""
	I0829 20:29:00.666525   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.666551   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:00.666560   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:00.666621   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:00.700797   67607 cri.go:89] found id: ""
	I0829 20:29:00.700824   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.700835   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:00.700842   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:00.700908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:00.739957   67607 cri.go:89] found id: ""
	I0829 20:29:00.739976   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.739989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:00.739994   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:00.740035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:00.800704   67607 cri.go:89] found id: ""
	I0829 20:29:00.800740   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.800750   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:00.800757   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:00.800820   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:00.837678   67607 cri.go:89] found id: ""
	I0829 20:29:00.837704   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.837712   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:00.837720   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:00.837731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:00.888359   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:00.888391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:00.903074   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:00.903103   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:00.964865   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:00.964885   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:00.964898   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:01.049351   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:01.049387   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
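[Editor's note] Each cycle gathers the same five log sources; the table below, rendered as a small Go program, maps each source name from the "Gathering logs for ..." lines to the exact command the log shows being run over SSH. It is a summary of the commands already visible above, not minikube's actual dispatch table.

	package main

	import "fmt"

	func main() {
		sources := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"describe nodes":   "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
			"CRI-O":            "sudo journalctl -u crio -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range sources {
			fmt.Printf("%-16s %s\n", name, cmd)
		}
	}

Note the container-status fallback chain: it prefers crictl if present and falls back to `docker ps -a` otherwise.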
	I0829 20:29:03.589829   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:03.603120   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:03.603192   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:03.637647   67607 cri.go:89] found id: ""
	I0829 20:29:03.637672   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.637678   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:03.637684   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:03.637732   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:03.673807   67607 cri.go:89] found id: ""
	I0829 20:29:03.673842   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.673852   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:03.673860   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:03.673918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:03.709490   67607 cri.go:89] found id: ""
	I0829 20:29:03.709516   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.709527   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:03.709533   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:03.709595   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:03.751662   67607 cri.go:89] found id: ""
	I0829 20:29:03.751688   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.751696   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:03.751702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:03.751751   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:03.787861   67607 cri.go:89] found id: ""
	I0829 20:29:03.787896   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.787908   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:03.787917   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:03.787977   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:59.350888   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:01.850615   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:03.851438   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.207912   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:02.707309   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:02.493506   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:04.494305   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:03.824383   67607 cri.go:89] found id: ""
	I0829 20:29:03.824413   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.824431   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:03.824438   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:03.824499   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:03.863904   67607 cri.go:89] found id: ""
	I0829 20:29:03.863929   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.863937   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:03.863943   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:03.863990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:03.902336   67607 cri.go:89] found id: ""
	I0829 20:29:03.902360   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.902368   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:03.902375   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:03.902386   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:03.951468   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:03.951499   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:03.965789   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:03.965816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:04.035096   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:04.035119   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:04.035193   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:04.115842   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:04.115876   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:06.662652   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:06.676508   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:06.676583   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:06.713058   67607 cri.go:89] found id: ""
	I0829 20:29:06.713084   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.713093   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:06.713101   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:06.713171   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:06.747513   67607 cri.go:89] found id: ""
	I0829 20:29:06.747544   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.747552   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:06.747557   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:06.747617   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:06.782662   67607 cri.go:89] found id: ""
	I0829 20:29:06.782689   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.782695   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:06.782701   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:06.782758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:06.818472   67607 cri.go:89] found id: ""
	I0829 20:29:06.818500   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.818510   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:06.818516   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:06.818586   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:06.852928   67607 cri.go:89] found id: ""
	I0829 20:29:06.852954   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.852964   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:06.852974   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:06.853032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:06.893859   67607 cri.go:89] found id: ""
	I0829 20:29:06.893889   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.893899   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:06.893907   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:06.893969   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:06.931552   67607 cri.go:89] found id: ""
	I0829 20:29:06.931584   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.931594   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:06.931601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:06.931662   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:06.967210   67607 cri.go:89] found id: ""
	I0829 20:29:06.967243   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.967254   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:06.967266   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:06.967279   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:07.020595   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:07.020631   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:07.034738   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:07.034764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:07.103726   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:07.103747   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:07.103760   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:07.184727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:07.184764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:06.350610   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:08.351571   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:05.207055   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:07.207650   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:06.994653   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.493932   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.746639   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:09.761228   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:09.761308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:09.802071   67607 cri.go:89] found id: ""
	I0829 20:29:09.802102   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.802113   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:09.802122   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:09.802180   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:09.837352   67607 cri.go:89] found id: ""
	I0829 20:29:09.837385   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.837395   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:09.837402   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:09.837464   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:09.874951   67607 cri.go:89] found id: ""
	I0829 20:29:09.874980   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.874992   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:09.874999   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:09.875055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:09.909660   67607 cri.go:89] found id: ""
	I0829 20:29:09.909696   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.909706   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:09.909713   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:09.909777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:09.949727   67607 cri.go:89] found id: ""
	I0829 20:29:09.949751   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.949759   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:09.949765   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:09.949825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:09.984576   67607 cri.go:89] found id: ""
	I0829 20:29:09.984609   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.984617   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:09.984623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:09.984675   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:10.022499   67607 cri.go:89] found id: ""
	I0829 20:29:10.022523   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.022530   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:10.022553   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:10.022624   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:10.064308   67607 cri.go:89] found id: ""
	I0829 20:29:10.064346   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.064356   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:10.064367   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:10.064382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:10.113505   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:10.113537   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:10.127614   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:10.127640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:10.200558   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:10.200579   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:10.200592   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:10.292984   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:10.293020   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:12.833100   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:12.846645   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:12.846712   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:12.885396   67607 cri.go:89] found id: ""
	I0829 20:29:12.885423   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.885430   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:12.885436   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:12.885486   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:12.922556   67607 cri.go:89] found id: ""
	I0829 20:29:12.922584   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.922595   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:12.922602   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:12.922688   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:12.965294   67607 cri.go:89] found id: ""
	I0829 20:29:12.965324   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.965335   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:12.965342   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:12.965401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:13.022911   67607 cri.go:89] found id: ""
	I0829 20:29:13.022934   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.022942   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:13.022948   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:13.023009   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:13.077009   67607 cri.go:89] found id: ""
	I0829 20:29:13.077035   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.077043   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:13.077048   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:13.077095   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:13.114202   67607 cri.go:89] found id: ""
	I0829 20:29:13.114233   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.114243   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:13.114251   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:13.114315   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:13.147025   67607 cri.go:89] found id: ""
	I0829 20:29:13.147049   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.147057   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:13.147063   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:13.147110   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:13.183112   67607 cri.go:89] found id: ""
	I0829 20:29:13.183138   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.183148   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:13.183159   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:13.183173   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:13.240558   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:13.240595   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:13.255563   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:13.255589   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:13.322826   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:13.322846   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:13.322857   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:13.399330   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:13.399365   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:10.850650   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:12.852188   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.706791   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:11.707397   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:13.708663   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:11.993311   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:13.994310   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:16.494854   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:15.938467   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:15.951742   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:15.951812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:15.987492   67607 cri.go:89] found id: ""
	I0829 20:29:15.987517   67607 logs.go:276] 0 containers: []
	W0829 20:29:15.987524   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:15.987530   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:15.987575   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:16.024187   67607 cri.go:89] found id: ""
	I0829 20:29:16.024214   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.024223   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:16.024231   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:16.024291   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:16.058141   67607 cri.go:89] found id: ""
	I0829 20:29:16.058164   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.058171   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:16.058176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:16.058225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:16.092390   67607 cri.go:89] found id: ""
	I0829 20:29:16.092414   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.092421   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:16.092427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:16.092472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:16.130178   67607 cri.go:89] found id: ""
	I0829 20:29:16.130209   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.130219   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:16.130227   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:16.130289   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:16.163867   67607 cri.go:89] found id: ""
	I0829 20:29:16.163900   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.163907   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:16.163913   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:16.163964   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:16.197764   67607 cri.go:89] found id: ""
	I0829 20:29:16.197792   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.197798   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:16.197804   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:16.197850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:16.233357   67607 cri.go:89] found id: ""
	I0829 20:29:16.233383   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.233393   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:16.233403   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:16.233418   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:16.285154   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:16.285188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:16.299057   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:16.299085   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:16.377021   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:16.377041   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:16.377062   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:16.457750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:16.457796   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:15.350415   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:17.850927   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:16.206841   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.207273   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.993478   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:21.493806   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.999133   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:19.016143   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:19.016223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:19.049225   67607 cri.go:89] found id: ""
	I0829 20:29:19.049252   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.049259   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:19.049265   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:19.049317   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:19.085237   67607 cri.go:89] found id: ""
	I0829 20:29:19.085297   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.085314   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:19.085325   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:19.085389   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:19.123476   67607 cri.go:89] found id: ""
	I0829 20:29:19.123501   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.123509   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:19.123514   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:19.123571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:19.159958   67607 cri.go:89] found id: ""
	I0829 20:29:19.159984   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.159993   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:19.160001   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:19.160055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:19.192385   67607 cri.go:89] found id: ""
	I0829 20:29:19.192410   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.192418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:19.192423   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:19.192483   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:19.230781   67607 cri.go:89] found id: ""
	I0829 20:29:19.230804   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.230811   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:19.230816   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:19.230868   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:19.264925   67607 cri.go:89] found id: ""
	I0829 20:29:19.264954   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.264964   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:19.264972   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:19.265032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:19.302461   67607 cri.go:89] found id: ""
	I0829 20:29:19.302484   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.302491   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:19.302499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:19.302510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:19.384799   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:19.384833   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:19.425281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:19.425313   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:19.477380   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:19.477412   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:19.492315   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:19.492350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:19.563428   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
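[Editor's note] Between cycles, process 67607 re-checks for a running apiserver with `pgrep -xnf kube-apiserver.*minikube.*` roughly every three seconds, then re-gathers logs when the process is absent. A bounded Go sketch of that cadence (the retry count and structure are assumptions for illustration; minikube retries until its own timeout):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for i := 0; i < 5; i++ { // bounded here; the real loop runs until a timeout
			// Mirrors the logged probe: sudo pgrep -xnf kube-apiserver.*minikube.*
			// pgrep exits nonzero when no process matches, so Run() returns an error.
			err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
			if err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			fmt.Println("kube-apiserver not running; gathering logs and retrying")
			time.Sleep(3 * time.Second)
		}
	}
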
	I0829 20:29:22.064407   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:22.078609   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:22.078670   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:22.112630   67607 cri.go:89] found id: ""
	I0829 20:29:22.112662   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.112672   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:22.112680   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:22.112741   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:22.149078   67607 cri.go:89] found id: ""
	I0829 20:29:22.149108   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.149117   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:22.149124   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:22.149186   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:22.184568   67607 cri.go:89] found id: ""
	I0829 20:29:22.184596   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.184605   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:22.184613   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:22.184682   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:22.220881   67607 cri.go:89] found id: ""
	I0829 20:29:22.220908   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.220919   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:22.220926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:22.220987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:22.256280   67607 cri.go:89] found id: ""
	I0829 20:29:22.256305   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.256314   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:22.256321   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:22.256386   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:22.294546   67607 cri.go:89] found id: ""
	I0829 20:29:22.294580   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.294590   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:22.294597   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:22.294660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:22.332178   67607 cri.go:89] found id: ""
	I0829 20:29:22.332207   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.332215   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:22.332220   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:22.332266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:22.368283   67607 cri.go:89] found id: ""
	I0829 20:29:22.368309   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.368317   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:22.368325   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:22.368336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:22.421800   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:22.421836   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:22.435539   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:22.435565   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:22.504402   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:22.504427   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:22.504441   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:22.588293   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:22.588326   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:19.851801   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:22.351929   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:20.207342   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:22.707546   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:23.493994   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:25.993337   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:25.130766   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:25.144479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:25.144554   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:25.181606   67607 cri.go:89] found id: ""
	I0829 20:29:25.181636   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.181643   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:25.181649   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:25.181697   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:25.220291   67607 cri.go:89] found id: ""
	I0829 20:29:25.220320   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.220328   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:25.220335   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:25.220447   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:25.260947   67607 cri.go:89] found id: ""
	I0829 20:29:25.260975   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.260983   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:25.260988   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:25.261035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:25.298200   67607 cri.go:89] found id: ""
	I0829 20:29:25.298232   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.298243   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:25.298256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:25.298314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:25.333128   67607 cri.go:89] found id: ""
	I0829 20:29:25.333162   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.333174   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:25.333181   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:25.333232   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:25.368951   67607 cri.go:89] found id: ""
	I0829 20:29:25.368979   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.368989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:25.368997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:25.369052   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:25.403687   67607 cri.go:89] found id: ""
	I0829 20:29:25.403715   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.403726   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:25.403734   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:25.403799   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:25.442338   67607 cri.go:89] found id: ""
	I0829 20:29:25.442365   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.442372   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:25.442381   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:25.442395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:25.456313   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:25.456335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:25.528709   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:25.528730   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:25.528744   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:25.609976   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:25.610011   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:25.650044   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:25.650071   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
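	(Annotation: each polling round gathers the same four diagnostics. A sketch of the equivalent manual commands, copied from the Run: lines above; the which-crictl fallback is simplified here.)
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo crictl ps -a || sudo docker ps -a    # container status, falling back to docker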
	I0829 20:29:28.202683   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:28.216971   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:28.217046   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:28.256297   67607 cri.go:89] found id: ""
	I0829 20:29:28.256321   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.256329   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:28.256335   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:28.256379   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:28.289396   67607 cri.go:89] found id: ""
	I0829 20:29:28.289420   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.289427   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:28.289433   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:28.289484   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:28.323589   67607 cri.go:89] found id: ""
	I0829 20:29:28.323616   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.323623   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:28.323630   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:28.323676   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:28.362423   67607 cri.go:89] found id: ""
	I0829 20:29:28.362453   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.362463   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:28.362471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:28.362531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:28.396967   67607 cri.go:89] found id: ""
	I0829 20:29:28.396990   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.396998   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:28.397003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:28.397053   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:28.430714   67607 cri.go:89] found id: ""
	I0829 20:29:28.430744   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.430755   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:28.430762   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:28.430831   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:28.468668   67607 cri.go:89] found id: ""
	I0829 20:29:28.468696   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.468707   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:28.468714   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:28.468777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:28.506678   67607 cri.go:89] found id: ""
	I0829 20:29:28.506705   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.506716   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:28.506727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:28.506741   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:28.545259   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:28.545287   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:28.598249   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:28.598285   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:28.612385   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:28.612429   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:28.685765   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:28.685792   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:28.685806   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:24.851688   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.350456   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:24.708523   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.206094   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:29.207859   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.995492   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:30.494340   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.270074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:31.284357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:31.284417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:31.319530   67607 cri.go:89] found id: ""
	I0829 20:29:31.319558   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.319566   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:31.319571   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:31.319640   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:31.356826   67607 cri.go:89] found id: ""
	I0829 20:29:31.356856   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.356867   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:31.356880   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:31.356934   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:31.390137   67607 cri.go:89] found id: ""
	I0829 20:29:31.390160   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.390167   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:31.390173   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:31.390219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:31.424939   67607 cri.go:89] found id: ""
	I0829 20:29:31.424972   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.424989   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:31.424997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:31.425054   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:31.460896   67607 cri.go:89] found id: ""
	I0829 20:29:31.460921   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.460928   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:31.460935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:31.460985   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:31.498933   67607 cri.go:89] found id: ""
	I0829 20:29:31.498957   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.498967   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:31.498975   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:31.499044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:31.534953   67607 cri.go:89] found id: ""
	I0829 20:29:31.534985   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.534996   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:31.535003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:31.535065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:31.576248   67607 cri.go:89] found id: ""
	I0829 20:29:31.576273   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.576281   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:31.576291   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:31.576307   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:31.628157   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:31.628196   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:31.641564   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:31.641591   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:31.719949   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:31.719973   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:31.719996   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:31.795682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:31.795716   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
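	(Annotation: the recurring "connection to the server localhost:8443 was refused" error means nothing is listening on the apiserver port, consistent with no kube-apiserver container being found. A quick manual check; these probe commands are an assumption for illustration, not part of the test run.)
	curl -k https://localhost:8443/healthz    # expect a connection-refused failure while no apiserver runs
	sudo ss -tlnp | grep 8443                 # confirms no process is bound to the port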
	I0829 20:29:29.351248   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.351424   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:33.851397   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.707552   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.207468   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:32.993432   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.993634   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.333468   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:34.347294   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:34.347370   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:34.384885   67607 cri.go:89] found id: ""
	I0829 20:29:34.384910   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.384921   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:34.384928   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:34.384991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:34.422309   67607 cri.go:89] found id: ""
	I0829 20:29:34.422341   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.422351   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:34.422358   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:34.422417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:34.459800   67607 cri.go:89] found id: ""
	I0829 20:29:34.459826   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.459834   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:34.459840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:34.459905   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:34.495600   67607 cri.go:89] found id: ""
	I0829 20:29:34.495624   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.495633   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:34.495647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:34.495708   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:34.531749   67607 cri.go:89] found id: ""
	I0829 20:29:34.531777   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.531788   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:34.531795   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:34.531856   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:34.571057   67607 cri.go:89] found id: ""
	I0829 20:29:34.571088   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.571098   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:34.571105   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:34.571168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:34.609645   67607 cri.go:89] found id: ""
	I0829 20:29:34.609676   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.609687   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:34.609695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:34.609753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:34.647199   67607 cri.go:89] found id: ""
	I0829 20:29:34.647233   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.647244   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:34.647255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:34.647269   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:34.661390   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:34.661420   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:34.737590   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:34.737613   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:34.737625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:34.820682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:34.820721   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:34.861697   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:34.861723   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:37.412384   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:37.426081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:37.426162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:37.461302   67607 cri.go:89] found id: ""
	I0829 20:29:37.461332   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.461342   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:37.461349   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:37.461416   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:37.500869   67607 cri.go:89] found id: ""
	I0829 20:29:37.500898   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.500908   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:37.500915   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:37.500970   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:37.536908   67607 cri.go:89] found id: ""
	I0829 20:29:37.536932   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.536942   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:37.536949   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:37.537010   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:37.571939   67607 cri.go:89] found id: ""
	I0829 20:29:37.571969   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.571979   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:37.571987   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:37.572048   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:37.607834   67607 cri.go:89] found id: ""
	I0829 20:29:37.607864   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.607883   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:37.607891   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:37.607952   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:37.643932   67607 cri.go:89] found id: ""
	I0829 20:29:37.643963   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.643971   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:37.643978   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:37.644037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:37.678148   67607 cri.go:89] found id: ""
	I0829 20:29:37.678177   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.678188   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:37.678195   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:37.678257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:37.713170   67607 cri.go:89] found id: ""
	I0829 20:29:37.713195   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.713209   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:37.713219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:37.713233   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:37.752538   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:37.752567   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:37.802888   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:37.802923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:37.816546   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:37.816585   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:37.891647   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:37.891667   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:37.891680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:35.851668   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:38.351371   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:36.208220   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:38.708523   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:36.994441   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:39.493291   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:40.472354   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:40.486186   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:40.486252   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:40.520935   67607 cri.go:89] found id: ""
	I0829 20:29:40.520963   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.520971   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:40.520977   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:40.521037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:40.561399   67607 cri.go:89] found id: ""
	I0829 20:29:40.561428   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.561440   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:40.561447   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:40.561514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:40.601821   67607 cri.go:89] found id: ""
	I0829 20:29:40.601846   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.601855   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:40.601862   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:40.601918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:40.636429   67607 cri.go:89] found id: ""
	I0829 20:29:40.636454   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.636462   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:40.636468   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:40.636525   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:40.670781   67607 cri.go:89] found id: ""
	I0829 20:29:40.670816   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.670828   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:40.670836   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:40.670912   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:40.706635   67607 cri.go:89] found id: ""
	I0829 20:29:40.706663   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.706674   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:40.706682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:40.706739   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:40.741657   67607 cri.go:89] found id: ""
	I0829 20:29:40.741687   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.741695   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:40.741707   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:40.741770   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:40.777028   67607 cri.go:89] found id: ""
	I0829 20:29:40.777057   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.777066   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:40.777077   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:40.777093   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:40.829387   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:40.829424   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:40.843928   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:40.843956   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:40.917965   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:40.917992   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:40.918008   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:41.001880   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:41.001925   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:43.549007   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:43.563446   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:43.563502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:43.598503   67607 cri.go:89] found id: ""
	I0829 20:29:43.598548   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.598557   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:43.598564   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:43.598614   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:43.634169   67607 cri.go:89] found id: ""
	I0829 20:29:43.634200   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.634210   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:43.634218   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:43.634280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:43.670467   67607 cri.go:89] found id: ""
	I0829 20:29:43.670492   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.670500   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:43.670506   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:43.670580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:43.706812   67607 cri.go:89] found id: ""
	I0829 20:29:43.706839   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.706849   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:43.706857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:43.706922   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:43.741577   67607 cri.go:89] found id: ""
	I0829 20:29:43.741606   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.741612   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:43.741620   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:43.741700   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:43.776552   67607 cri.go:89] found id: ""
	I0829 20:29:43.776595   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.776625   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:43.776635   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:43.776701   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:43.816229   67607 cri.go:89] found id: ""
	I0829 20:29:43.816264   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.816274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:43.816281   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:43.816346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:40.850705   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:42.850904   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:40.709080   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:43.207700   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:41.994216   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:44.492986   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:46.494171   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:43.860726   67607 cri.go:89] found id: ""
	I0829 20:29:43.860753   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.860761   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:43.860768   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:43.860783   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:43.874311   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:43.874340   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:43.952243   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:43.952272   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:43.952288   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:44.032276   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:44.032312   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:44.075537   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:44.075571   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:46.632798   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:46.645878   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:46.645948   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:46.683682   67607 cri.go:89] found id: ""
	I0829 20:29:46.683711   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.683720   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:46.683726   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:46.683775   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:46.727985   67607 cri.go:89] found id: ""
	I0829 20:29:46.728012   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.728024   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:46.728031   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:46.728090   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:46.762142   67607 cri.go:89] found id: ""
	I0829 20:29:46.762166   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.762174   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:46.762180   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:46.762226   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:46.802423   67607 cri.go:89] found id: ""
	I0829 20:29:46.802453   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.802464   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:46.802471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:46.802515   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:46.840382   67607 cri.go:89] found id: ""
	I0829 20:29:46.840411   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.840418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:46.840425   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:46.840473   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:46.878438   67607 cri.go:89] found id: ""
	I0829 20:29:46.878466   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.878476   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:46.878483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:46.878562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:46.913589   67607 cri.go:89] found id: ""
	I0829 20:29:46.913618   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.913625   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:46.913631   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:46.913678   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:46.948894   67607 cri.go:89] found id: ""
	I0829 20:29:46.948922   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.948929   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:46.948938   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:46.948949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:47.005709   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:47.005745   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:47.030316   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:47.030343   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:47.105899   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:47.105920   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:47.105932   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:47.189405   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:47.189442   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:45.352639   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:47.850647   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:45.709140   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:48.207411   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:48.994239   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:51.493287   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
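	(Annotation: the interleaved pod_ready lines come from three parallel test runs, log prefixes 66841, 66989 and 68084, each polling its metrics-server pod for the Ready condition. A hedged manual equivalent; the k8s-app=metrics-server label selector is an assumption about the addon's pod labels.)
	kubectl -n kube-system get pods -l k8s-app=metrics-server
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=120s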
	I0829 20:29:49.727745   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:49.742061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:49.742131   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:49.777428   67607 cri.go:89] found id: ""
	I0829 20:29:49.777456   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.777464   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:49.777471   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:49.777531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:49.811611   67607 cri.go:89] found id: ""
	I0829 20:29:49.811639   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.811646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:49.811653   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:49.811709   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:49.844962   67607 cri.go:89] found id: ""
	I0829 20:29:49.844987   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.844995   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:49.845006   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:49.845062   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:49.880259   67607 cri.go:89] found id: ""
	I0829 20:29:49.880286   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.880297   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:49.880305   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:49.880366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:49.915889   67607 cri.go:89] found id: ""
	I0829 20:29:49.915918   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.915926   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:49.915932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:49.915988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:49.953146   67607 cri.go:89] found id: ""
	I0829 20:29:49.953174   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.953182   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:49.953189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:49.953240   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:49.990689   67607 cri.go:89] found id: ""
	I0829 20:29:49.990721   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.990730   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:49.990738   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:49.990792   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:50.024775   67607 cri.go:89] found id: ""
	I0829 20:29:50.024806   67607 logs.go:276] 0 containers: []
	W0829 20:29:50.024817   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:50.024827   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:50.024842   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:50.079030   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:50.079064   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:50.093178   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:50.093205   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:50.171476   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:50.171499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:50.171512   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:50.252913   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:50.252946   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
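
Each retry cycle above enumerates the expected control-plane containers one at a time with `sudo crictl ps -a --quiet --name=<component>`; an empty ID list is what produces the paired `found id: ""` / `No container was found matching` lines. Below is a minimal Go sketch of that enumeration loop. It shells out locally with os/exec as a stand-in for minikube's SSH-based ssh_runner, so the local execution (and the hard-coded component list) are illustrative assumptions, not minikube's actual code path.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Component names the retry loop scans for, as seen in the log above.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// minikube issues this over SSH via ssh_runner; running it locally here
    		// is an assumption for the sketch only.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		ids := strings.Fields(string(out))
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
    	}
    }

In the failing run every component comes back empty, which is why the loop then falls through to gathering host-level logs.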
	I0829 20:29:52.799818   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:52.812857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:52.812930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:52.850736   67607 cri.go:89] found id: ""
	I0829 20:29:52.850761   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.850770   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:52.850777   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:52.850834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:52.888892   67607 cri.go:89] found id: ""
	I0829 20:29:52.888916   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.888923   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:52.888929   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:52.888975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:52.925390   67607 cri.go:89] found id: ""
	I0829 20:29:52.925418   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.925428   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:52.925435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:52.925501   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:52.960329   67607 cri.go:89] found id: ""
	I0829 20:29:52.960352   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.960360   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:52.960366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:52.960413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:52.994899   67607 cri.go:89] found id: ""
	I0829 20:29:52.994927   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.994935   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:52.994941   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:52.994995   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:53.033028   67607 cri.go:89] found id: ""
	I0829 20:29:53.033057   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.033068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:53.033076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:53.033136   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:53.068353   67607 cri.go:89] found id: ""
	I0829 20:29:53.068381   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.068389   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:53.068394   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:53.068441   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:53.104496   67607 cri.go:89] found id: ""
	I0829 20:29:53.104524   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.104534   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:53.104545   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:53.104560   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:53.175777   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:53.175810   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:53.175827   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:53.257362   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:53.257396   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:53.295822   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:53.295850   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:53.351237   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:53.351263   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
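
The `Gathering logs for ...` lines map to a fixed set of host commands: journalctl for kubelet and CRI-O, a severity-filtered dmesg, `kubectl describe nodes`, and a crictl-with-docker-fallback for container status. A minimal sketch of that command table, again assuming local execution rather than minikube's ssh_runner (the table itself is reconstructed from the log, not taken from minikube source):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Command table implied by the "Gathering logs for ..." lines above.
    	gatherers := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"CRI-O", "sudo journalctl -u crio -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, g := range gatherers {
    		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("gathering %s logs failed: %v\n", g.name, err)
    		}
    		fmt.Printf("=== %s ===\n%s\n", g.name, out)
    	}
    }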
	I0829 20:29:49.851324   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:52.350768   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:50.707986   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:53.206918   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:53.494828   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.994443   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
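
Interleaved with the 67607 retry loop, three other test runs (pids 66841, 66989, 68084) are polling their metrics-server pods; pod_ready.go:103 fires each time the pod's Ready condition is still False. A minimal client-go sketch of such a readiness check follows; the kubeconfig path and pod name are placeholders lifted from the log, and the 2-second interval is an assumption (the log only shows checks a few seconds apart).

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"metrics-server-6867b74b74-668dg", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		fmt.Println(`pod has status "Ready":"False" (or is unreachable); retrying`)
    		time.Sleep(2 * time.Second)
    	}
    }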
	I0829 20:29:55.864680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:55.879324   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:55.879391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:55.914454   67607 cri.go:89] found id: ""
	I0829 20:29:55.914479   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.914490   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:55.914498   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:55.914592   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:55.953778   67607 cri.go:89] found id: ""
	I0829 20:29:55.953804   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.953814   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:55.953821   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:55.953883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:55.994659   67607 cri.go:89] found id: ""
	I0829 20:29:55.994681   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.994689   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:55.994697   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:55.994768   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:56.031262   67607 cri.go:89] found id: ""
	I0829 20:29:56.031288   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.031299   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:56.031306   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:56.031366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:56.063748   67607 cri.go:89] found id: ""
	I0829 20:29:56.063776   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.063785   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:56.063793   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:56.063883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:56.098024   67607 cri.go:89] found id: ""
	I0829 20:29:56.098060   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.098068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:56.098074   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:56.098127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:56.141340   67607 cri.go:89] found id: ""
	I0829 20:29:56.141364   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.141374   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:56.141381   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:56.141440   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:56.176668   67607 cri.go:89] found id: ""
	I0829 20:29:56.176696   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.176707   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:56.176717   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:56.176731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:56.216294   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:56.216322   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:56.269404   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:56.269440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:56.283134   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:56.283160   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:56.355005   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:56.355023   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:56.355035   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:54.851658   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:57.350247   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.207477   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:57.708007   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:58.493689   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:00.998990   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:58.937406   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:58.950924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:58.950981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:58.986748   67607 cri.go:89] found id: ""
	I0829 20:29:58.986778   67607 logs.go:276] 0 containers: []
	W0829 20:29:58.986788   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:58.986795   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:58.986861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:59.023737   67607 cri.go:89] found id: ""
	I0829 20:29:59.023763   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.023773   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:59.023780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:59.023840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:59.060245   67607 cri.go:89] found id: ""
	I0829 20:29:59.060274   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.060284   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:59.060291   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:59.060352   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:59.102467   67607 cri.go:89] found id: ""
	I0829 20:29:59.102493   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.102501   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:59.102507   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:59.102581   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:59.142601   67607 cri.go:89] found id: ""
	I0829 20:29:59.142625   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.142634   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:59.142647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:59.142717   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:59.186683   67607 cri.go:89] found id: ""
	I0829 20:29:59.186707   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.186715   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:59.186723   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:59.186783   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:59.232104   67607 cri.go:89] found id: ""
	I0829 20:29:59.232136   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.232154   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:59.232162   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:59.232227   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:59.276416   67607 cri.go:89] found id: ""
	I0829 20:29:59.276442   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.276452   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:59.276462   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:59.276479   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:59.341741   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:59.341779   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:59.357312   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:59.357336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:59.425653   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:59.425674   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:59.425689   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:59.505365   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:59.505403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:02.049195   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:02.064558   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:02.064641   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:02.102141   67607 cri.go:89] found id: ""
	I0829 20:30:02.102188   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.102209   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:02.102217   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:02.102282   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:02.138610   67607 cri.go:89] found id: ""
	I0829 20:30:02.138640   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.138650   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:02.138658   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:02.138724   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:02.175391   67607 cri.go:89] found id: ""
	I0829 20:30:02.175423   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.175435   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:02.175442   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:02.175505   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:02.212956   67607 cri.go:89] found id: ""
	I0829 20:30:02.212981   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.212991   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:02.212998   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:02.213059   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:02.254444   67607 cri.go:89] found id: ""
	I0829 20:30:02.254467   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.254475   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:02.254481   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:02.254568   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:02.293232   67607 cri.go:89] found id: ""
	I0829 20:30:02.293260   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.293270   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:02.293277   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:02.293348   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:02.328300   67607 cri.go:89] found id: ""
	I0829 20:30:02.328329   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.328339   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:02.328346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:02.328407   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:02.363467   67607 cri.go:89] found id: ""
	I0829 20:30:02.363495   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.363505   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:02.363514   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:02.363528   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:02.414357   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:02.414394   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:02.428229   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:02.428259   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:02.503640   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:02.503661   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:02.503674   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:02.584052   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:02.584087   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:59.352485   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:01.850334   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:59.717029   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:02.208354   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:03.494326   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:05.494833   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:05.124345   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:05.143530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:05.143594   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:05.195985   67607 cri.go:89] found id: ""
	I0829 20:30:05.196014   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.196024   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:05.196032   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:05.196092   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:05.254315   67607 cri.go:89] found id: ""
	I0829 20:30:05.254343   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.254354   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:05.254362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:05.254432   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:05.306756   67607 cri.go:89] found id: ""
	I0829 20:30:05.306781   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.306788   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:05.306794   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:05.306852   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:05.345200   67607 cri.go:89] found id: ""
	I0829 20:30:05.345225   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.345235   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:05.345242   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:05.345297   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:05.384038   67607 cri.go:89] found id: ""
	I0829 20:30:05.384064   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.384074   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:05.384081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:05.384140   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:05.420177   67607 cri.go:89] found id: ""
	I0829 20:30:05.420201   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.420208   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:05.420214   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:05.420260   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:05.453492   67607 cri.go:89] found id: ""
	I0829 20:30:05.453513   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.453521   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:05.453526   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:05.453573   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:05.491591   67607 cri.go:89] found id: ""
	I0829 20:30:05.491618   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.491628   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:05.491638   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:05.491701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:05.580458   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:05.580503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:05.620137   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:05.620169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:05.672137   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:05.672177   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:05.685946   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:05.685973   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:05.755176   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
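
Every `describe nodes` attempt fails identically: the kubeconfig under /var/lib/minikube points kubectl at localhost:8443, and since no kube-apiserver container is running, nothing is listening on that port and the TCP connect is refused. The probe below reproduces that check directly, as a hedged sketch of what the error message implies rather than anything minikube itself runs.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The kubeconfig targets localhost:8443; with no apiserver container up,
    	// the dial fails immediately with "connection refused".
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }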
	I0829 20:30:08.256255   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:08.269099   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:08.269160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:08.302552   67607 cri.go:89] found id: ""
	I0829 20:30:08.302578   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.302585   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:08.302591   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:08.302639   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:08.340683   67607 cri.go:89] found id: ""
	I0829 20:30:08.340711   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.340718   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:08.340726   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:08.340778   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:08.387389   67607 cri.go:89] found id: ""
	I0829 20:30:08.387416   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.387424   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:08.387430   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:08.387477   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:08.421303   67607 cri.go:89] found id: ""
	I0829 20:30:08.421330   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.421340   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:08.421348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:08.421409   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:08.458648   67607 cri.go:89] found id: ""
	I0829 20:30:08.458677   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.458688   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:08.458695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:08.458758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:08.498748   67607 cri.go:89] found id: ""
	I0829 20:30:08.498776   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.498784   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:08.498790   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:08.498845   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:08.536859   67607 cri.go:89] found id: ""
	I0829 20:30:08.536889   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.536896   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:08.536902   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:08.536963   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:08.570685   67607 cri.go:89] found id: ""
	I0829 20:30:08.570713   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.570723   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:08.570734   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:08.570748   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:08.621904   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:08.621938   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:08.636367   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:08.636391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:08.703796   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:08.703824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:08.703838   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:08.785084   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:08.785120   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
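
Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*` (the ssh_runner line at the top of every block): an exact, full-command-line process match that checks for a host-run apiserver before falling back to the container-by-container scan. A sketch of that probe, once more assuming local execution:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// -x: exact match, -n: newest matching process, -f: match the full command line.
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		// pgrep exits non-zero when nothing matches, which is the case
    		// throughout this log: the apiserver never comes up.
    		fmt.Println("no kube-apiserver process found")
    		return
    	}
    	fmt.Printf("kube-apiserver pid(s): %s", out)
    }

Since the probe never succeeds here, the run keeps looping through the same scan-and-gather cycle until the test's timeout expires.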
	I0829 20:30:04.350230   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:06.849598   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:08.850961   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:04.708012   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:07.206604   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:09.207368   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:07.993015   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:09.994043   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:11.326633   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:11.339570   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:11.339637   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:11.374132   67607 cri.go:89] found id: ""
	I0829 20:30:11.374155   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.374163   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:11.374169   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:11.374234   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:11.409004   67607 cri.go:89] found id: ""
	I0829 20:30:11.409036   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.409047   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:11.409054   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:11.409119   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:11.444598   67607 cri.go:89] found id: ""
	I0829 20:30:11.444625   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.444635   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:11.444643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:11.444704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:11.481912   67607 cri.go:89] found id: ""
	I0829 20:30:11.481942   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.481953   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:11.481961   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:11.482025   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:11.516436   67607 cri.go:89] found id: ""
	I0829 20:30:11.516466   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.516477   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:11.516483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:11.516536   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:11.554762   67607 cri.go:89] found id: ""
	I0829 20:30:11.554787   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.554795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:11.554801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:11.554857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:11.588902   67607 cri.go:89] found id: ""
	I0829 20:30:11.588931   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.588942   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:11.588950   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:11.589011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:11.621346   67607 cri.go:89] found id: ""
	I0829 20:30:11.621368   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.621376   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:11.621383   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:11.621395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:11.659671   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:11.659703   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:11.711288   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:11.711315   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:11.725285   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:11.725310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:11.801713   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:11.801735   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:11.801750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:10.851075   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:13.349510   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:11.208203   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:13.706599   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:12.494548   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:14.993188   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:14.382313   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:14.395852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:14.395926   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:14.438735   67607 cri.go:89] found id: ""
	I0829 20:30:14.438762   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.438772   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:14.438778   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:14.438840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:14.477886   67607 cri.go:89] found id: ""
	I0829 20:30:14.477928   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.477937   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:14.477943   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:14.478000   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:14.517627   67607 cri.go:89] found id: ""
	I0829 20:30:14.517654   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.517664   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:14.517670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:14.517734   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:14.557247   67607 cri.go:89] found id: ""
	I0829 20:30:14.557272   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.557280   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:14.557286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:14.557345   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:14.591364   67607 cri.go:89] found id: ""
	I0829 20:30:14.591388   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.591398   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:14.591406   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:14.591468   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:14.627517   67607 cri.go:89] found id: ""
	I0829 20:30:14.627539   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.627546   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:14.627551   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:14.627604   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:14.662388   67607 cri.go:89] found id: ""
	I0829 20:30:14.662409   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.662419   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:14.662432   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:14.662488   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:14.695277   67607 cri.go:89] found id: ""
	I0829 20:30:14.695307   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.695316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:14.695324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:14.695335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:14.735824   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:14.735852   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:14.792607   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:14.792642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:14.808881   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:14.808910   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:14.879804   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:14.879824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:14.879837   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:17.459817   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:17.474813   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:17.474887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:17.509885   67607 cri.go:89] found id: ""
	I0829 20:30:17.509913   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.509923   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:17.509930   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:17.509987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:17.543931   67607 cri.go:89] found id: ""
	I0829 20:30:17.543959   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.543968   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:17.543973   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:17.544021   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:17.580944   67607 cri.go:89] found id: ""
	I0829 20:30:17.580972   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.580980   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:17.580986   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:17.581033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:17.620061   67607 cri.go:89] found id: ""
	I0829 20:30:17.620088   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.620097   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:17.620103   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:17.620148   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:17.658675   67607 cri.go:89] found id: ""
	I0829 20:30:17.658706   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.658717   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:17.658724   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:17.658788   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:17.694424   67607 cri.go:89] found id: ""
	I0829 20:30:17.694453   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.694462   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:17.694467   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:17.694571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:17.727425   67607 cri.go:89] found id: ""
	I0829 20:30:17.727450   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.727456   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:17.727462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:17.727510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:17.767915   67607 cri.go:89] found id: ""
	I0829 20:30:17.767946   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.767956   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:17.767965   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:17.767977   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:17.837556   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:17.837580   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:17.837593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:17.921601   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:17.921638   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:17.960999   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:17.961026   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:18.013654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:18.013691   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:15.351372   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:17.850896   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:16.206810   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:18.207702   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:16.993566   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:18.997786   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:21.493705   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:20.528244   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:20.542116   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:20.542190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:20.578905   67607 cri.go:89] found id: ""
	I0829 20:30:20.578936   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.578947   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:20.578954   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:20.579003   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:20.613543   67607 cri.go:89] found id: ""
	I0829 20:30:20.613567   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.613574   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:20.613579   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:20.613627   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:20.649322   67607 cri.go:89] found id: ""
	I0829 20:30:20.649344   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.649352   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:20.649366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:20.649429   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:20.684851   67607 cri.go:89] found id: ""
	I0829 20:30:20.684878   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.684886   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:20.684892   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:20.684950   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:20.722016   67607 cri.go:89] found id: ""
	I0829 20:30:20.722045   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.722054   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:20.722062   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:20.722125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:20.757594   67607 cri.go:89] found id: ""
	I0829 20:30:20.757626   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.757637   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:20.757644   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:20.757707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:20.793694   67607 cri.go:89] found id: ""
	I0829 20:30:20.793728   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.793738   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:20.793746   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:20.793812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:20.829709   67607 cri.go:89] found id: ""
	I0829 20:30:20.829736   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.829747   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:20.829758   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:20.829782   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:20.888838   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:20.888888   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:20.903530   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:20.903556   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:20.972460   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:20.972488   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:20.972503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:21.055556   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:21.055593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
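	(The cycle above repeats for each expected control-plane component: minikube asks the CRI for containers whose name matches, and an empty ID list — "0 containers: []" — means that component never came up. Below is a minimal local sketch of that query, assuming crictl is on the PATH and sudo is available; the helper is illustrative, not minikube's actual cri.go code, which runs the same command over SSH.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers returns the IDs of all containers (running or not) whose
// name matches the filter, mirroring the repeated
// "sudo crictl ps -a --quiet --name=<component>" calls in the log above.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line) // --quiet prints one container ID per line
		}
	}
	return ids, nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listCRIContainers(component)
		fmt.Printf("%s: %d containers: %v (err: %v)\n", component, len(ids), ids, err)
	}
}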
	I0829 20:30:23.597355   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:23.611091   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:23.611162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:23.649469   67607 cri.go:89] found id: ""
	I0829 20:30:23.649493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.649501   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:23.649510   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:23.649562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:23.684530   67607 cri.go:89] found id: ""
	I0829 20:30:23.684554   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.684561   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:23.684571   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:23.684625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:23.720466   67607 cri.go:89] found id: ""
	I0829 20:30:23.720493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.720503   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:23.720510   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:23.720563   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:23.755013   67607 cri.go:89] found id: ""
	I0829 20:30:23.755042   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.755053   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:23.755061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:23.755127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:23.795212   67607 cri.go:89] found id: ""
	I0829 20:30:23.795243   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.795254   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:23.795263   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:23.795320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:20.349781   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:22.350157   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:20.707723   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.206214   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.994457   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:26.493771   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.832912   67607 cri.go:89] found id: ""
	I0829 20:30:23.832941   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.832951   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:23.832959   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:23.833015   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:23.869896   67607 cri.go:89] found id: ""
	I0829 20:30:23.869930   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.869939   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:23.869947   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:23.870011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:23.908111   67607 cri.go:89] found id: ""
	I0829 20:30:23.908136   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.908145   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:23.908155   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:23.908170   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:23.988489   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:23.988510   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:23.988525   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:24.063246   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:24.063280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:24.102943   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:24.102974   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:24.157255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:24.157294   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:26.671966   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:26.684755   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:26.684830   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:26.721125   67607 cri.go:89] found id: ""
	I0829 20:30:26.721150   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.721158   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:26.721164   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:26.721219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:26.756328   67607 cri.go:89] found id: ""
	I0829 20:30:26.756349   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.756356   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:26.756362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:26.756420   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:26.791711   67607 cri.go:89] found id: ""
	I0829 20:30:26.791751   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.791763   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:26.791774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:26.791857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:26.827215   67607 cri.go:89] found id: ""
	I0829 20:30:26.827244   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.827254   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:26.827261   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:26.827321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:26.863461   67607 cri.go:89] found id: ""
	I0829 20:30:26.863486   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.863497   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:26.863505   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:26.863569   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:26.900037   67607 cri.go:89] found id: ""
	I0829 20:30:26.900065   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.900075   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:26.900083   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:26.900139   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:26.937236   67607 cri.go:89] found id: ""
	I0829 20:30:26.937263   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.937274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:26.937282   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:26.937340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:26.970281   67607 cri.go:89] found id: ""
	I0829 20:30:26.970312   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.970322   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:26.970332   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:26.970345   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:27.041485   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:27.041511   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:27.041526   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:27.120774   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:27.120807   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:27.159656   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:27.159685   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:27.213322   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:27.213356   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:24.350464   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:26.351419   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:28.850079   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:25.207838   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:27.708107   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:28.993552   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:31.494259   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:29.729066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:29.742044   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:29.742099   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:29.777426   67607 cri.go:89] found id: ""
	I0829 20:30:29.777454   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.777462   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:29.777468   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:29.777529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:29.814353   67607 cri.go:89] found id: ""
	I0829 20:30:29.814381   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.814392   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:29.814401   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:29.814462   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:29.853754   67607 cri.go:89] found id: ""
	I0829 20:30:29.853783   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.853793   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:29.853801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:29.853869   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:29.893966   67607 cri.go:89] found id: ""
	I0829 20:30:29.893991   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.893998   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:29.894003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:29.894057   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:29.929452   67607 cri.go:89] found id: ""
	I0829 20:30:29.929483   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.929492   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:29.929502   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:29.929561   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:29.965880   67607 cri.go:89] found id: ""
	I0829 20:30:29.965906   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.965916   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:29.965924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:29.965986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:30.002192   67607 cri.go:89] found id: ""
	I0829 20:30:30.002226   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.002237   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:30.002245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:30.002320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:30.037603   67607 cri.go:89] found id: ""
	I0829 20:30:30.037640   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.037651   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:30.037662   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:30.037677   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:30.094128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:30.094168   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:30.110667   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:30.110701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:30.188355   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:30.188375   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:30.188388   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:30.270750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:30.270785   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:32.809472   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:32.823099   67607 kubeadm.go:597] duration metric: took 4m3.15684598s to restartPrimaryControlPlane
	W0829 20:30:32.823188   67607 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:30:32.823224   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:30:33.322987   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:30:33.338134   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:30:33.348586   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:30:33.358672   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:30:33.358692   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:30:33.358748   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:30:33.367955   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:30:33.368000   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:30:33.377565   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:30:33.386317   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:30:33.386377   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:30:33.396356   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.406228   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:30:33.406281   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.418323   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:30:33.427595   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:30:33.427657   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
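	(The grep/rm sequence above is minikube's stale-config check before re-running kubeadm init: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so init can regenerate it. A hedged sketch of the same logic follows; the file paths and endpoint are taken from the log, while the helper name is illustrative.)

package main

import "os/exec"

func cleanupStaleConfigs(endpoint string) {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits 1 when the endpoint is absent and 2 when the file itself
		// is missing (the "Process exited with status 2" seen above).
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			exec.Command("sudo", "rm", "-f", f).Run() // rm -f is a no-op when f is absent
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443")
}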
	I0829 20:30:33.437520   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:30:33.511159   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:30:33.511279   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:30:33.669988   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:30:33.670133   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:30:33.670267   67607 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:30:33.859908   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:30:30.850893   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:32.851574   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:30.207012   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:32.206405   66989 pod_ready.go:82] duration metric: took 4m0.005864609s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	E0829 20:30:32.206426   66989 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0829 20:30:32.206433   66989 pod_ready.go:39] duration metric: took 4m5.570928284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
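	(The "context deadline exceeded" above is the expected shape of a bounded readiness wait: poll until the pod reports Ready or the 4m0s budget runs out. A minimal sketch of such a loop, where the isReady predicate stands in for the real check of the pod's Ready condition via the Kubernetes API:)

package main

import (
	"context"
	"fmt"
	"time"
)

// waitReady polls a readiness predicate until it succeeds or ctx expires;
// on timeout ctx.Err() is the "context deadline exceeded" seen above.
func waitReady(ctx context.Context, isReady func() bool) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			if isReady() {
				return nil
			}
		}
	}
}

func main() {
	// 4m0s matches the WaitExtra budget in the log; shorten it to experiment.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	err := waitReady(ctx, func() bool { return false }) // never Ready, like metrics-server here
	fmt.Println(err) // prints: context deadline exceeded
}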
	I0829 20:30:32.206448   66989 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:30:32.206482   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:32.206528   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:32.260213   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:32.260242   66989 cri.go:89] found id: ""
	I0829 20:30:32.260252   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:32.260314   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.265201   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:32.265276   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:32.307620   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:32.307648   66989 cri.go:89] found id: ""
	I0829 20:30:32.307656   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:32.307701   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.312372   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:32.312430   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:32.350059   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:32.350092   66989 cri.go:89] found id: ""
	I0829 20:30:32.350102   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:32.350158   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.354624   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:32.354681   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:32.393968   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:32.393988   66989 cri.go:89] found id: ""
	I0829 20:30:32.393995   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:32.394039   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.398674   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:32.398745   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:32.433038   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:32.433064   66989 cri.go:89] found id: ""
	I0829 20:30:32.433074   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:32.433118   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.436969   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:32.437028   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:32.472768   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:32.472786   66989 cri.go:89] found id: ""
	I0829 20:30:32.472793   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:32.472842   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.477466   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:32.477536   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:32.514464   66989 cri.go:89] found id: ""
	I0829 20:30:32.514492   66989 logs.go:276] 0 containers: []
	W0829 20:30:32.514502   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:32.514509   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:32.514591   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:32.551429   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:32.551452   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:32.551456   66989 cri.go:89] found id: ""
	I0829 20:30:32.551463   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:32.551508   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.555697   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.559864   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:32.559883   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:32.609776   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:32.609803   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:32.648419   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:32.648446   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:32.685938   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:32.685969   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:32.728665   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:32.728693   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:32.770030   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:32.770068   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:32.907821   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:32.907850   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:32.923119   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:32.923149   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:32.979819   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:32.979853   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:33.020472   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:33.020496   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:33.074802   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:33.074838   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:33.112043   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:33.112072   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:33.624274   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:33.624316   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
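	(Each "Gathering logs for <component> [<id>] ..." step above tails that container's output through crictl once its ID is known. A simplified, hypothetical equivalent of those "sudo /usr/bin/crictl logs --tail 400 <id>" invocations:)

package main

import (
	"fmt"
	"os/exec"
	"strconv"
)

// tailContainerLog fetches the last n lines of a container's log.
// CombinedOutput is used because container logs may arrive on stderr.
func tailContainerLog(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail",
		strconv.Itoa(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	// Container ID of the kube-apiserver found in the log above.
	log, err := tailContainerLog("f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313", 400)
	fmt.Println(log, err)
}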
	I0829 20:30:33.861742   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:30:33.861849   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:30:33.861946   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:30:33.862075   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:30:33.862174   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:30:33.862276   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:30:33.862366   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:30:33.862467   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:30:33.862573   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:30:33.862794   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:30:33.863226   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:30:33.863323   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:30:33.863417   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:30:34.065914   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:30:34.235581   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:30:34.660452   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:30:34.724718   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:30:34.743897   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:30:34.746263   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:30:34.746369   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:30:34.893824   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:30:33.494825   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:35.994300   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:34.895805   67607 out.go:235]   - Booting up control plane ...
	I0829 20:30:34.895941   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:30:34.904294   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:30:34.915103   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:30:34.915744   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:30:34.917923   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:30:35.351975   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:37.352013   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:36.202184   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:36.218838   66989 api_server.go:72] duration metric: took 4m17.334186395s to wait for apiserver process to appear ...
	I0829 20:30:36.218870   66989 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:30:36.218910   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:36.218963   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:36.263205   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:36.263233   66989 cri.go:89] found id: ""
	I0829 20:30:36.263243   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:36.263292   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.267466   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:36.267522   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:36.303894   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:36.303930   66989 cri.go:89] found id: ""
	I0829 20:30:36.303938   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:36.303996   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.308089   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:36.308170   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:36.347320   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:36.347392   66989 cri.go:89] found id: ""
	I0829 20:30:36.347414   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:36.347485   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.352121   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:36.352174   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:36.389760   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:36.389784   66989 cri.go:89] found id: ""
	I0829 20:30:36.389793   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:36.389853   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.394860   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:36.394919   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:36.430562   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:36.430587   66989 cri.go:89] found id: ""
	I0829 20:30:36.430597   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:36.430655   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.435151   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:36.435226   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:36.470714   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:36.470742   66989 cri.go:89] found id: ""
	I0829 20:30:36.470750   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:36.470816   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.475382   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:36.475446   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:36.514853   66989 cri.go:89] found id: ""
	I0829 20:30:36.514888   66989 logs.go:276] 0 containers: []
	W0829 20:30:36.514898   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:36.514910   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:36.514971   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:36.548229   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:36.548252   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:36.548256   66989 cri.go:89] found id: ""
	I0829 20:30:36.548263   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:36.548314   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.552484   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.556661   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:36.556681   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:36.622985   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:36.623019   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:36.678770   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:36.678799   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:36.731822   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:36.731849   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:36.768451   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:36.768482   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:36.803818   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:36.803846   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:37.225805   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:37.225849   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:37.245421   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:37.245458   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:37.358238   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:37.358266   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:37.401876   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:37.401913   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:37.438189   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:37.438223   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:37.475404   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:37.475433   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:37.511876   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:37.511903   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:38.493604   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:40.494396   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:40.054097   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:30:40.058474   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0829 20:30:40.059830   66989 api_server.go:141] control plane version: v1.31.0
	I0829 20:30:40.059850   66989 api_server.go:131] duration metric: took 3.840972907s to wait for apiserver health ...
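	(The healthz probe above is a plain HTTPS GET against the apiserver: HTTP 200 with body "ok" counts as healthy. A minimal sketch follows; the InsecureSkipVerify transport is an assumption made for brevity, since minikube itself trusts the cluster's CA certificate.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

// healthz reports whether the apiserver's /healthz endpoint returns 200 "ok".
func healthz(url string) (bool, error) {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // brevity only
	}}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	// Endpoint as checked in the log above.
	ok, err := healthz("https://192.168.61.202:8443/healthz")
	fmt.Println(ok, err)
}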
	I0829 20:30:40.059857   66989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:30:40.059877   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:40.059924   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:40.101978   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:40.102003   66989 cri.go:89] found id: ""
	I0829 20:30:40.102013   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:40.102073   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.107429   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:40.107496   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:40.145052   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:40.145078   66989 cri.go:89] found id: ""
	I0829 20:30:40.145086   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:40.145133   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.149329   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:40.149394   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:40.187740   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:40.187769   66989 cri.go:89] found id: ""
	I0829 20:30:40.187778   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:40.187838   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.192085   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:40.192156   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:40.231992   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:40.232010   66989 cri.go:89] found id: ""
	I0829 20:30:40.232017   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:40.232060   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.236275   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:40.236333   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:40.279637   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:40.279660   66989 cri.go:89] found id: ""
	I0829 20:30:40.279669   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:40.279727   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.288800   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:40.288876   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:40.341222   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:40.341248   66989 cri.go:89] found id: ""
	I0829 20:30:40.341258   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:40.341322   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.346013   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:40.346088   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:40.383801   66989 cri.go:89] found id: ""
	I0829 20:30:40.383828   66989 logs.go:276] 0 containers: []
	W0829 20:30:40.383836   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:40.383842   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:40.383896   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:40.421847   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:40.421874   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:40.421879   66989 cri.go:89] found id: ""
	I0829 20:30:40.421889   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:40.421950   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.426229   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.429902   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:40.429931   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:40.471015   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:40.471039   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:40.831575   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:40.831612   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:40.846195   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:40.846230   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:40.905469   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:40.905507   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:40.952303   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:40.952337   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:41.001278   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:41.001309   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:41.071045   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:41.071089   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:41.120024   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:41.120050   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:41.191412   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:41.191445   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:41.321848   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:41.321874   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:41.370807   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:41.370833   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:41.405913   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:41.405939   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:43.948957   66989 system_pods.go:59] 8 kube-system pods found
	I0829 20:30:43.948987   66989 system_pods.go:61] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running
	I0829 20:30:43.948992   66989 system_pods.go:61] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running
	I0829 20:30:43.948996   66989 system_pods.go:61] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running
	I0829 20:30:43.948999   66989 system_pods.go:61] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running
	I0829 20:30:43.949003   66989 system_pods.go:61] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running
	I0829 20:30:43.949006   66989 system_pods.go:61] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running
	I0829 20:30:43.949011   66989 system_pods.go:61] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:30:43.949015   66989 system_pods.go:61] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running
	I0829 20:30:43.949022   66989 system_pods.go:74] duration metric: took 3.889159839s to wait for pod list to return data ...
	I0829 20:30:43.949028   66989 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:30:43.951906   66989 default_sa.go:45] found service account: "default"
	I0829 20:30:43.951932   66989 default_sa.go:55] duration metric: took 2.897769ms for default service account to be created ...
	I0829 20:30:43.951943   66989 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:30:43.959246   66989 system_pods.go:86] 8 kube-system pods found
	I0829 20:30:43.959269   66989 system_pods.go:89] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running
	I0829 20:30:43.959275   66989 system_pods.go:89] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running
	I0829 20:30:43.959279   66989 system_pods.go:89] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running
	I0829 20:30:43.959283   66989 system_pods.go:89] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running
	I0829 20:30:43.959286   66989 system_pods.go:89] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running
	I0829 20:30:43.959290   66989 system_pods.go:89] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running
	I0829 20:30:43.959296   66989 system_pods.go:89] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:30:43.959302   66989 system_pods.go:89] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running
	I0829 20:30:43.959309   66989 system_pods.go:126] duration metric: took 7.361244ms to wait for k8s-apps to be running ...
	I0829 20:30:43.959318   66989 system_svc.go:44] waiting for kubelet service to be running ...
	I0829 20:30:43.959356   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:30:43.976136   66989 system_svc.go:56] duration metric: took 16.811475ms (WaitForService) to wait for kubelet
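The kubelet service check above relies only on systemctl's exit status: "systemctl is-active --quiet <unit>" prints nothing and exits 0 exactly when the unit is active. A minimal sketch of the same probe in Go:

	// svc-active.go: probe a systemd unit the way the system_svc.go lines above do.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit status 0 means active; any non-zero status means inactive/failed.
		if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			fmt.Println("kubelet service is not active:", err)
			return
		}
		fmt.Println("kubelet service is active")
	}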
	I0829 20:30:43.976167   66989 kubeadm.go:582] duration metric: took 4m25.091518378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:30:43.976193   66989 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:30:43.979345   66989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:30:43.979376   66989 node_conditions.go:123] node cpu capacity is 2
	I0829 20:30:43.979386   66989 node_conditions.go:105] duration metric: took 3.187489ms to run NodePressure ...
	I0829 20:30:43.979396   66989 start.go:241] waiting for startup goroutines ...
	I0829 20:30:43.979402   66989 start.go:246] waiting for cluster config update ...
	I0829 20:30:43.979414   66989 start.go:255] writing updated cluster config ...
	I0829 20:30:43.979729   66989 ssh_runner.go:195] Run: rm -f paused
	I0829 20:30:44.028715   66989 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:30:44.030675   66989 out.go:177] * Done! kubectl is now configured to use "embed-certs-388383" cluster and "default" namespace by default
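The long runs of pod_ready.go:103 lines in this log come from a simple poll loop: fetch the pod, inspect its Ready condition, sleep, and repeat until the condition turns True or a 4m0s budget expires. Below is a minimal client-go sketch of that pattern; the kubeconfig path, namespace, and pod name are placeholders, and the exact helpers minikube uses differ.

	// pod-ready.go: poll a pod's Ready condition until it is True or a timeout,
	// the pattern behind the pod_ready.go:103 / pod_ready.go:82 lines.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "POD_NAME", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						fmt.Printf("pod has status %q:%q\n", c.Type, c.Status)
						if c.Status == corev1.ConditionTrue {
							return // Ready
						}
					}
				}
			}
			time.Sleep(2 * time.Second) // the log shows roughly 2.5s between checks
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}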
	I0829 20:30:39.850811   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:41.850941   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:42.993711   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:45.492729   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:44.351171   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:46.849842   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:48.851125   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:47.494031   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:49.993291   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:51.350926   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:53.850966   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:52.494604   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:54.994054   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:56.350237   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:58.856068   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:56.994483   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:59.494879   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:01.351293   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:03.850415   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:01.994470   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:04.493393   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:05.851663   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:08.350513   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:06.988349   68084 pod_ready.go:82] duration metric: took 4m0.000994859s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" ...
	E0829 20:31:06.988378   68084 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 20:31:06.988396   68084 pod_ready.go:39] duration metric: took 4m13.5587561s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:06.988421   68084 kubeadm.go:597] duration metric: took 4m20.63419422s to restartPrimaryControlPlane
	W0829 20:31:06.988470   68084 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:31:06.988492   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:31:10.350782   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:12.851120   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:14.919490   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:31:14.920124   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:14.920395   67607 kubeadm.go:310] [kubelet-check] The HTTP call to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
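The [kubelet-check] lines above describe a plain HTTP probe of the kubelet's local health endpoint; "connection refused" just means nothing is listening on port 10248 yet. A sketch of the equivalent probe in Go, using only the standard library:

	// kubelet-healthz.go: GET http://localhost:10248/healthz, the check behind
	// the [kubelet-check] messages. A healthy kubelet answers 200 "ok".
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// e.g. dial tcp 127.0.0.1:10248: connect: connection refused
			fmt.Println("kubelet not healthy yet:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
	}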
	I0829 20:31:15.350794   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:17.351675   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:19.920740   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:19.920993   67607 kubeadm.go:310] [kubelet-check] The HTTP call to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:19.858714   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:22.351208   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:24.851679   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:27.351087   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.177614   68084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.189095849s)
	I0829 20:31:33.177712   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:31:33.202840   68084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:31:33.220648   68084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:31:33.239458   68084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:31:33.239479   68084 kubeadm.go:157] found existing configuration files:
	
	I0829 20:31:33.239519   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 20:31:33.257831   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:31:33.257900   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:31:33.272621   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 20:31:33.287906   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:31:33.287975   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:31:33.302931   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 20:31:33.312359   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:31:33.312411   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:31:33.322850   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 20:31:33.332224   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:31:33.332280   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:31:33.342072   68084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:31:33.388790   68084 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 20:31:33.388844   68084 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:31:33.506108   68084 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:31:33.506263   68084 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:31:33.506403   68084 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:31:33.515467   68084 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:31:29.921355   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:29.921591   67607 kubeadm.go:310] [kubelet-check] The HTTP call to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:29.351212   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:31.351683   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.850337   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.517487   68084 out.go:235]   - Generating certificates and keys ...
	I0829 20:31:33.517590   68084 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:31:33.517697   68084 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:31:33.517809   68084 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:31:33.517907   68084 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:31:33.518009   68084 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:31:33.518086   68084 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:31:33.518174   68084 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:31:33.518266   68084 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:31:33.518379   68084 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:31:33.518495   68084 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:31:33.518567   68084 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:31:33.518656   68084 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:31:33.888310   68084 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:31:34.000803   68084 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 20:31:34.103016   68084 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:31:34.461677   68084 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:31:34.617814   68084 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:31:34.618316   68084 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:31:34.622440   68084 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:31:34.624324   68084 out.go:235]   - Booting up control plane ...
	I0829 20:31:34.624428   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:31:34.624527   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:31:34.624882   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:31:34.647388   68084 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:31:34.653776   68084 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:31:34.653864   68084 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:31:34.795338   68084 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 20:31:34.795463   68084 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 20:31:35.797126   68084 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001854627s
	I0829 20:31:35.797253   68084 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 20:31:35.852495   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:37.344608   66841 pod_ready.go:82] duration metric: took 4m0.000461851s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" ...
	E0829 20:31:37.344637   66841 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 20:31:37.344661   66841 pod_ready.go:39] duration metric: took 4m13.033970527s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:37.344693   66841 kubeadm.go:597] duration metric: took 4m20.095743839s to restartPrimaryControlPlane
	W0829 20:31:37.344752   66841 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:31:37.344780   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:31:40.799092   68084 kubeadm.go:310] [api-check] The API server is healthy after 5.002121632s
	I0829 20:31:40.813865   68084 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 20:31:40.829677   68084 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 20:31:40.870324   68084 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 20:31:40.870598   68084 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-145096 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 20:31:40.889024   68084 kubeadm.go:310] [bootstrap-token] Using token: gy9sl5.6oyya9sd2gbep67e
	I0829 20:31:40.890947   68084 out.go:235]   - Configuring RBAC rules ...
	I0829 20:31:40.891083   68084 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 20:31:40.898748   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 20:31:40.912914   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 20:31:40.916739   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 20:31:40.923995   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 20:31:40.930447   68084 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 20:31:41.206632   68084 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 20:31:41.679673   68084 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 20:31:42.206707   68084 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 20:31:42.206733   68084 kubeadm.go:310] 
	I0829 20:31:42.206819   68084 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 20:31:42.206830   68084 kubeadm.go:310] 
	I0829 20:31:42.206974   68084 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 20:31:42.206996   68084 kubeadm.go:310] 
	I0829 20:31:42.207018   68084 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 20:31:42.207073   68084 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 20:31:42.207120   68084 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 20:31:42.207127   68084 kubeadm.go:310] 
	I0829 20:31:42.207189   68084 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 20:31:42.207196   68084 kubeadm.go:310] 
	I0829 20:31:42.207234   68084 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 20:31:42.207238   68084 kubeadm.go:310] 
	I0829 20:31:42.207285   68084 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 20:31:42.207382   68084 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 20:31:42.207473   68084 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 20:31:42.207484   68084 kubeadm.go:310] 
	I0829 20:31:42.207611   68084 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 20:31:42.207727   68084 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 20:31:42.207736   68084 kubeadm.go:310] 
	I0829 20:31:42.207854   68084 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token gy9sl5.6oyya9sd2gbep67e \
	I0829 20:31:42.207962   68084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 20:31:42.207983   68084 kubeadm.go:310] 	--control-plane 
	I0829 20:31:42.207986   68084 kubeadm.go:310] 
	I0829 20:31:42.208087   68084 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 20:31:42.208106   68084 kubeadm.go:310] 
	I0829 20:31:42.208214   68084 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token gy9sl5.6oyya9sd2gbep67e \
	I0829 20:31:42.208342   68084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
	I0829 20:31:42.209248   68084 kubeadm.go:310] W0829 20:31:33.349141    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:31:42.209595   68084 kubeadm.go:310] W0829 20:31:33.349919    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:31:42.209769   68084 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
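The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key (its SubjectPublicKeyInfo). A sketch of how to recompute it in Go, assuming the CA sits at kubeadm's conventional /etc/kubernetes/pki/ca.crt:

	// ca-hash.go: derive the sha256:<hex> discovery hash from the cluster CA.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}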
	I0829 20:31:42.209803   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:31:42.209817   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:31:42.211545   68084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:31:42.212889   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:31:42.223984   68084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
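The 496-byte conflist written above is not reproduced in the log. For orientation, a bridge CNI config of the kind minikube generates looks roughly like the following; the subnet and plugin options are illustrative, not the file's actual contents:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}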
	I0829 20:31:42.242703   68084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:31:42.242779   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-145096 minikube.k8s.io/updated_at=2024_08_29T20_31_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=default-k8s-diff-port-145096 minikube.k8s.io/primary=true
	I0829 20:31:42.242779   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:42.448824   68084 ops.go:34] apiserver oom_adj: -16
	I0829 20:31:42.453004   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:42.953891   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:43.453922   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:43.953465   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:44.453647   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:44.954035   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:45.453660   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:45.953536   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:46.046900   68084 kubeadm.go:1113] duration metric: took 3.804195127s to wait for elevateKubeSystemPrivileges
	I0829 20:31:46.046927   68084 kubeadm.go:394] duration metric: took 4m59.74590678s to StartCluster
	I0829 20:31:46.046947   68084 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:31:46.047046   68084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:31:46.048617   68084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:31:46.048876   68084 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:31:46.048979   68084 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:31:46.049063   68084 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049090   68084 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049090   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:31:46.049099   68084 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-145096"
	I0829 20:31:46.049136   68084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-145096"
	W0829 20:31:46.049143   68084 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:31:46.049174   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.049104   68084 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049264   68084 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-145096"
	W0829 20:31:46.049280   68084 addons.go:243] addon metrics-server should already be in state true
	I0829 20:31:46.049335   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.049569   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049574   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049595   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.049599   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.049698   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049722   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.050441   68084 out.go:177] * Verifying Kubernetes components...
	I0829 20:31:46.052039   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:31:46.065735   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39367
	I0829 20:31:46.065909   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32931
	I0829 20:31:46.066241   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.066344   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.066900   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.066918   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.067024   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.067045   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.067438   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.067481   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.067665   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.067902   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.067931   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.069157   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0829 20:31:46.070637   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.070757   68084 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-145096"
	W0829 20:31:46.070771   68084 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:31:46.070803   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.071118   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.071124   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.071132   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.071155   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.071510   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.072052   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.072095   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.085524   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39387
	I0829 20:31:46.085987   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.086553   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.086576   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.086966   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.087138   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.087202   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0829 20:31:46.087621   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.088358   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.088381   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.088708   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.088806   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.089193   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.089363   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.090878   68084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:31:46.091571   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I0829 20:31:46.092208   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.092291   68084 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:31:46.092316   68084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:31:46.092337   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.092660   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.092687   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.093044   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.093230   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.095184   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.096265   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.096792   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.096821   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.097088   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.097274   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.097433   68084 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:31:46.097448   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.097645   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.098681   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:31:46.098697   68084 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:31:46.098715   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.101604   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.101993   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.102014   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.102328   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.102529   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.102687   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.102847   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.108154   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I0829 20:31:46.108627   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.109111   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.109129   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.109446   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.109675   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.111174   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.111440   68084 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:31:46.111452   68084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:31:46.111469   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.114302   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.114805   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.114832   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.114921   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.115102   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.115256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.115400   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.277748   68084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:31:46.297001   68084 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-145096" to be "Ready" ...
	I0829 20:31:46.317473   68084 node_ready.go:49] node "default-k8s-diff-port-145096" has status "Ready":"True"
	I0829 20:31:46.317498   68084 node_ready.go:38] duration metric: took 20.469679ms for node "default-k8s-diff-port-145096" to be "Ready" ...
	I0829 20:31:46.317509   68084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0829 20:31:46.332180   68084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:46.393588   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:31:46.399404   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:31:46.399428   68084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:31:46.453014   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:31:46.460100   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:31:46.460126   68084 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:31:46.541980   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:31:46.542002   68084 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:31:46.607148   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:31:47.296344   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296370   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.296445   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296471   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.296678   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.296722   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.296744   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296764   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.298376   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298379   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298404   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.298412   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298420   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298436   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.298453   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.298464   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.298700   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298726   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298729   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.318720   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.318745   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.319031   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.319053   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.319069   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.870171   68084 pod_ready.go:93] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:47.870198   68084 pod_ready.go:82] duration metric: took 1.537994965s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:47.870208   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.057308   68084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.450120563s)
	I0829 20:31:48.057358   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:48.057371   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:48.057667   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:48.057722   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:48.057734   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:48.057747   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:48.057759   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:48.057989   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:48.058005   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:48.058021   68084 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-145096"
	I0829 20:31:48.059886   68084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0829 20:31:48.061124   68084 addons.go:510] duration metric: took 2.012141801s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0829 20:31:48.875874   68084 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:48.875897   68084 pod_ready.go:82] duration metric: took 1.005682325s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.875912   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.879828   68084 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:48.879846   68084 pod_ready.go:82] duration metric: took 3.928263ms for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.879863   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:50.886764   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:49.922318   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:49.922554   67607 kubeadm.go:310] [kubelet-check] The HTTP call to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:52.887708   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:55.387571   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:55.886194   68084 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:55.886217   68084 pod_ready.go:82] duration metric: took 7.006347256s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:55.886225   68084 pod_ready.go:39] duration metric: took 9.568704494s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:55.886238   68084 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:31:55.886286   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:31:55.901604   68084 api_server.go:72] duration metric: took 9.852691692s to wait for apiserver process to appear ...
	I0829 20:31:55.901628   68084 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:31:55.901643   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:31:55.905564   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 200:
	ok
	I0829 20:31:55.906387   68084 api_server.go:141] control plane version: v1.31.0
	I0829 20:31:55.906406   68084 api_server.go:131] duration metric: took 4.772472ms to wait for apiserver health ...
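The healthz check above is the apiserver analogue of the kubelet probe earlier: an HTTPS GET against https://<node-ip>:8444/healthz that expects the body "ok". A sketch in Go; skipping TLS verification is an illustrative shortcut, whereas minikube itself trusts the cluster CA from its kubeconfig:

	// apiserver-healthz.go: probe the apiserver health endpoint on port 8444.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Demo only: a real client should verify the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.72.140:8444/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}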
	I0829 20:31:55.906413   68084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:31:55.911423   68084 system_pods.go:59] 9 kube-system pods found
	I0829 20:31:55.911444   68084 system_pods.go:61] "coredns-6f6b679f8f-l25kd" [86947930-0d47-407a-b876-b482596fbe8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.911451   68084 system_pods.go:61] "coredns-6f6b679f8f-lnm92" [a6caefe0-e883-4460-87de-25ee97191e1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.911458   68084 system_pods.go:61] "etcd-default-k8s-diff-port-145096" [caba3f17-6544-4fe0-8dd3-0dd95e8df8ce] Running
	I0829 20:31:55.911465   68084 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-145096" [9b1ca00a-613b-414f-81e9-601d53d43207] Running
	I0829 20:31:55.911470   68084 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-145096" [e7145779-85cf-458d-9870-6fda4853d29d] Running
	I0829 20:31:55.911479   68084 system_pods.go:61] "kube-proxy-ptswc" [96c01414-e8e8-4731-824b-11d636285fb3] Running
	I0829 20:31:55.911488   68084 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-145096" [0d2cc607-72ac-4417-8a7c-196bf3ec90d7] Running
	I0829 20:31:55.911495   68084 system_pods.go:61] "metrics-server-6867b74b74-6sdqg" [2c9efadb-89bb-4aa6-b0f0-ddcb3e931674] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:31:55.911503   68084 system_pods.go:61] "storage-provisioner" [81531989-d045-44fb-b1a1-0817af27c804] Running
	I0829 20:31:55.911512   68084 system_pods.go:74] duration metric: took 5.092824ms to wait for pod list to return data ...
	I0829 20:31:55.911523   68084 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:31:55.913794   68084 default_sa.go:45] found service account: "default"
	I0829 20:31:55.913820   68084 default_sa.go:55] duration metric: took 2.286925ms for default service account to be created ...
	I0829 20:31:55.913830   68084 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:31:55.919628   68084 system_pods.go:86] 9 kube-system pods found
	I0829 20:31:55.919666   68084 system_pods.go:89] "coredns-6f6b679f8f-l25kd" [86947930-0d47-407a-b876-b482596fbe8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.919677   68084 system_pods.go:89] "coredns-6f6b679f8f-lnm92" [a6caefe0-e883-4460-87de-25ee97191e1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.919686   68084 system_pods.go:89] "etcd-default-k8s-diff-port-145096" [caba3f17-6544-4fe0-8dd3-0dd95e8df8ce] Running
	I0829 20:31:55.919693   68084 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-145096" [9b1ca00a-613b-414f-81e9-601d53d43207] Running
	I0829 20:31:55.919699   68084 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-145096" [e7145779-85cf-458d-9870-6fda4853d29d] Running
	I0829 20:31:55.919704   68084 system_pods.go:89] "kube-proxy-ptswc" [96c01414-e8e8-4731-824b-11d636285fb3] Running
	I0829 20:31:55.919710   68084 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-145096" [0d2cc607-72ac-4417-8a7c-196bf3ec90d7] Running
	I0829 20:31:55.919718   68084 system_pods.go:89] "metrics-server-6867b74b74-6sdqg" [2c9efadb-89bb-4aa6-b0f0-ddcb3e931674] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:31:55.919725   68084 system_pods.go:89] "storage-provisioner" [81531989-d045-44fb-b1a1-0817af27c804] Running
	I0829 20:31:55.919734   68084 system_pods.go:126] duration metric: took 5.897752ms to wait for k8s-apps to be running ...
	I0829 20:31:55.919745   68084 system_svc.go:44] waiting for kubelet service to be running ...
	I0829 20:31:55.919800   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:31:55.935429   68084 system_svc.go:56] duration metric: took 15.676316ms (WaitForService) to wait for kubelet
	I0829 20:31:55.935460   68084 kubeadm.go:582] duration metric: took 9.886551311s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:31:55.935483   68084 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:31:55.938444   68084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:31:55.938466   68084 node_conditions.go:123] node cpu capacity is 2
	I0829 20:31:55.938476   68084 node_conditions.go:105] duration metric: took 2.988434ms to run NodePressure ...
	I0829 20:31:55.938486   68084 start.go:241] waiting for startup goroutines ...
	I0829 20:31:55.938493   68084 start.go:246] waiting for cluster config update ...
	I0829 20:31:55.938503   68084 start.go:255] writing updated cluster config ...
	I0829 20:31:55.938834   68084 ssh_runner.go:195] Run: rm -f paused
	I0829 20:31:55.987879   68084 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:31:55.989766   68084 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-145096" cluster and "default" namespace by default
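	[editor's note] The WaitForService step in the run above reduces to one shelled-out command, `systemctl is-active --quiet ... kubelet`, where a zero exit status means the unit is active. A minimal local Go sketch of that probe (using `os/exec` directly and the bare unit name `kubelet` are assumptions for illustration; the real check runs through minikube's ssh_runner inside the guest VM):

```go
package main

import (
	"fmt"
	"os/exec"
)

// isActive reports whether a systemd unit is currently active.
// `systemctl is-active --quiet <unit>` exits 0 iff the unit is active,
// which is exactly the signal system_svc.go waits for above.
func isActive(unit string) bool {
	err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
	return err == nil // any non-zero exit surfaces as *exec.ExitError
}

func main() {
	fmt.Println("kubelet active:", isActive("kubelet"))
}
```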
	I0829 20:32:03.506190   66841 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.161387814s)
	I0829 20:32:03.506268   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:03.530660   66841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:32:03.550784   66841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:32:03.565054   66841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:32:03.565085   66841 kubeadm.go:157] found existing configuration files:
	
	I0829 20:32:03.565131   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:32:03.586492   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:32:03.586577   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:32:03.605061   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:32:03.617990   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:32:03.618054   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:32:03.635587   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:32:03.645495   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:32:03.645559   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:32:03.655081   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:32:03.664640   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:32:03.664703   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
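	[editor's note] The grep/rm pairs above implement a simple rule: if a config file under /etc/kubernetes does not mention the expected endpoint `https://control-plane.minikube.internal:8443`, treat it as stale and delete it before re-running `kubeadm init`. A hedged local sketch of that rule (paths and endpoint copied from the log; error handling simplified, and the real commands run over SSH with sudo):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// removeIfStale deletes path unless it exists and references endpoint,
// mirroring the `sudo grep ... ; sudo rm -f ...` sequence in the log.
func removeIfStale(path string) error {
	data, err := os.ReadFile(path)
	if err == nil && bytes.Contains(data, []byte(endpoint)) {
		return nil // file already points at the right control plane
	}
	// Missing file or wrong endpoint: remove with rm -f semantics.
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f); err != nil {
			fmt.Fprintln(os.Stderr, "cleanup:", err)
		}
	}
}
```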
	I0829 20:32:03.674097   66841 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:32:03.721087   66841 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 20:32:03.721155   66841 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:32:03.839829   66841 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:32:03.839985   66841 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:32:03.840079   66841 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:32:03.849047   66841 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:32:03.850883   66841 out.go:235]   - Generating certificates and keys ...
	I0829 20:32:03.850970   66841 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:32:03.851045   66841 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:32:03.851129   66841 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:32:03.851222   66841 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:32:03.851292   66841 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:32:03.851340   66841 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:32:03.851399   66841 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:32:03.851450   66841 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:32:03.851515   66841 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:32:03.851620   66841 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:32:03.851687   66841 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:32:03.851755   66841 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:32:03.968189   66841 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:32:04.253016   66841 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 20:32:04.341190   66841 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:32:04.491607   66841 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:32:04.616753   66841 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:32:04.617354   66841 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:32:04.619961   66841 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:32:04.621690   66841 out.go:235]   - Booting up control plane ...
	I0829 20:32:04.621799   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:32:04.621910   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:32:04.622021   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:32:04.643758   66841 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:32:04.650541   66841 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:32:04.650612   66841 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:32:04.786596   66841 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 20:32:04.786755   66841 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 20:32:05.788381   66841 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001614523s
	I0829 20:32:05.788512   66841 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 20:32:10.789752   66841 kubeadm.go:310] [api-check] The API server is healthy after 5.001571241s
	I0829 20:32:10.803237   66841 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 20:32:10.822640   66841 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 20:32:10.845744   66841 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 20:32:10.846050   66841 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-397724 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 20:32:10.856315   66841 kubeadm.go:310] [bootstrap-token] Using token: 3k2s43.7gy6mzkt91kkied7
	I0829 20:32:10.857834   66841 out.go:235]   - Configuring RBAC rules ...
	I0829 20:32:10.857947   66841 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 20:32:10.867339   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 20:32:10.876522   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 20:32:10.879786   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 20:32:10.885043   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 20:32:10.892077   66841 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 20:32:11.196796   66841 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 20:32:11.630072   66841 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 20:32:12.200197   66841 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 20:32:12.200232   66841 kubeadm.go:310] 
	I0829 20:32:12.200314   66841 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 20:32:12.200326   66841 kubeadm.go:310] 
	I0829 20:32:12.200406   66841 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 20:32:12.200416   66841 kubeadm.go:310] 
	I0829 20:32:12.200450   66841 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 20:32:12.200536   66841 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 20:32:12.200606   66841 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 20:32:12.200616   66841 kubeadm.go:310] 
	I0829 20:32:12.200687   66841 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 20:32:12.200700   66841 kubeadm.go:310] 
	I0829 20:32:12.200744   66841 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 20:32:12.200750   66841 kubeadm.go:310] 
	I0829 20:32:12.200793   66841 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 20:32:12.200861   66841 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 20:32:12.200918   66841 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 20:32:12.200924   66841 kubeadm.go:310] 
	I0829 20:32:12.201048   66841 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 20:32:12.201144   66841 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 20:32:12.201152   66841 kubeadm.go:310] 
	I0829 20:32:12.201255   66841 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3k2s43.7gy6mzkt91kkied7 \
	I0829 20:32:12.201373   66841 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 20:32:12.201400   66841 kubeadm.go:310] 	--control-plane 
	I0829 20:32:12.201411   66841 kubeadm.go:310] 
	I0829 20:32:12.201487   66841 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 20:32:12.201495   66841 kubeadm.go:310] 
	I0829 20:32:12.201574   66841 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3k2s43.7gy6mzkt91kkied7 \
	I0829 20:32:12.201710   66841 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
	I0829 20:32:12.202900   66841 kubeadm.go:310] W0829 20:32:03.691334    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:32:12.203223   66841 kubeadm.go:310] W0829 20:32:03.692151    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:32:12.203339   66841 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:32:12.203366   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:32:12.203381   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:32:12.205733   66841 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:32:12.206905   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:32:12.218121   66841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
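	[editor's note] The 496-byte `1-k8s.conflist` payload itself is not reproduced in the log. The sketch below writes a representative bridge+portmap CNI conflist of the kind minikube generates for the crio runtime; the subnet, bridge name, and every field value here are illustrative assumptions, not the actual file contents (writing to /etc/cni/net.d requires root):

```go
package main

import "os"

// A representative CNI config chaining the bridge and portmap plugins.
// Field values are placeholders; the real file minikube copies is
// generated from the cluster's pod CIDR.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil { // mkdir -p
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```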
	I0829 20:32:12.237885   66841 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:32:12.237989   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:12.238006   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-397724 minikube.k8s.io/updated_at=2024_08_29T20_32_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=no-preload-397724 minikube.k8s.io/primary=true
	I0829 20:32:12.282191   66841 ops.go:34] apiserver oom_adj: -16
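	[editor's note] The `-16` logged above comes from reading `/proc/$(pgrep kube-apiserver)/oom_adj`; a strongly negative value tells the kernel's OOM killer to prefer other processes over the apiserver. A minimal Go equivalent (resolving the PID via `pgrep` through os/exec is an assumption; minikube runs the same shell pipeline over SSH):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the apiserver PID the same way the logged command does;
	// pgrep exits non-zero when nothing matches.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "pgrep:", err)
		os.Exit(1)
	}
	pid := strings.Fields(string(out))[0] // first match is enough here

	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // expect -16 for the apiserver
}
```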
	I0829 20:32:12.430006   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:12.930327   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:13.430210   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:13.930065   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:14.430163   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:14.930189   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:15.430677   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:15.930670   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:16.430943   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:16.549095   66841 kubeadm.go:1113] duration metric: took 4.311165714s to wait for elevateKubeSystemPrivileges
	I0829 20:32:16.549136   66841 kubeadm.go:394] duration metric: took 4m59.355577107s to StartCluster
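	[editor's note] elevateKubeSystemPrivileges is the `kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default` call issued above, retried (the repeated `get sa default` runs) until the default service account exists. The same binding expressed with client-go, as a sketch: the kubeconfig path is taken from the log, and minikube actually shells out to the bundled kubectl rather than using client-go here:

```go
package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Equivalent of: kubectl create clusterrolebinding minikube-rbac \
	//   --clusterrole=cluster-admin --serviceaccount=kube-system:default
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
	}
	if _, err := cs.RbacV1().ClusterRoleBindings().Create(context.Background(), crb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("minikube-rbac bound to cluster-admin")
}
```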
	I0829 20:32:16.549156   66841 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:32:16.549229   66841 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:32:16.550926   66841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:32:16.551141   66841 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:32:16.551202   66841 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:32:16.551291   66841 addons.go:69] Setting storage-provisioner=true in profile "no-preload-397724"
	I0829 20:32:16.551315   66841 addons.go:69] Setting default-storageclass=true in profile "no-preload-397724"
	I0829 20:32:16.551329   66841 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:32:16.551340   66841 addons.go:69] Setting metrics-server=true in profile "no-preload-397724"
	I0829 20:32:16.551389   66841 addons.go:234] Setting addon metrics-server=true in "no-preload-397724"
	W0829 20:32:16.551404   66841 addons.go:243] addon metrics-server should already be in state true
	I0829 20:32:16.551442   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.551360   66841 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-397724"
	I0829 20:32:16.551324   66841 addons.go:234] Setting addon storage-provisioner=true in "no-preload-397724"
	W0829 20:32:16.551673   66841 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:32:16.551705   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.551872   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.551873   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.551908   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.551929   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.552036   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.552065   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.552634   66841 out.go:177] * Verifying Kubernetes components...
	I0829 20:32:16.553973   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:32:16.567797   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43335
	I0829 20:32:16.568321   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.568884   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.568910   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.569328   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.569941   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.569978   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.573055   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0829 20:32:16.573642   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0829 20:32:16.573770   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.574303   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.574321   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.574394   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.574913   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.574933   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.574935   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.575471   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.575511   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.575724   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.575950   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.579912   66841 addons.go:234] Setting addon default-storageclass=true in "no-preload-397724"
	W0829 20:32:16.579932   66841 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:32:16.579960   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.580281   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.580298   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.591264   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
	I0829 20:32:16.591442   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0829 20:32:16.591777   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.591827   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.592275   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.592289   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.592289   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.592307   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.592702   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.592726   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.592881   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.592882   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.594494   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.594956   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.596431   66841 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:32:16.596433   66841 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:32:16.597503   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:32:16.597524   66841 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:32:16.597547   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.597607   66841 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:32:16.597625   66841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:32:16.597641   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.598780   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32841
	I0829 20:32:16.599272   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.599915   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.599937   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.601210   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.601613   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.601965   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.602159   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.602190   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.602328   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.602867   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.602998   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.603188   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.603234   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.603287   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.603434   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.603487   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.603691   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.603708   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.603857   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.603977   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.619336   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0829 20:32:16.619806   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.620269   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.620286   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.620604   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.620818   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.622348   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.622563   66841 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:32:16.622580   66841 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:32:16.622597   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.625203   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.625542   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.625570   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.625746   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.625934   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.626094   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.626266   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.787525   66841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:32:16.817674   66841 node_ready.go:35] waiting up to 6m0s for node "no-preload-397724" to be "Ready" ...
	I0829 20:32:16.833992   66841 node_ready.go:49] node "no-preload-397724" has status "Ready":"True"
	I0829 20:32:16.834030   66841 node_ready.go:38] duration metric: took 16.322874ms for node "no-preload-397724" to be "Ready" ...
	I0829 20:32:16.834042   66841 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:32:16.843147   66841 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:16.902589   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:32:16.902613   66841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:32:16.902859   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:32:16.903193   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:32:16.922497   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:32:16.922518   66841 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:32:16.966207   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:32:16.966240   66841 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:32:17.004882   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:32:17.204576   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.204613   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.204968   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.204987   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.204995   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.204994   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.205002   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.205261   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.205278   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.211789   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.211811   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.212074   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.212089   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.212119   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.902866   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.902897   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.903218   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.903266   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.903278   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.903286   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.903296   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.903556   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.903572   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.344211   66841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.33928059s)
	I0829 20:32:18.344259   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:18.344274   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:18.344571   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:18.344589   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.344611   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:18.344626   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:18.344948   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:18.344980   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:18.345010   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.345025   66841 addons.go:475] Verifying addon metrics-server=true in "no-preload-397724"
	I0829 20:32:18.346919   66841 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 20:32:18.348704   66841 addons.go:510] duration metric: took 1.797503952s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
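	[editor's note] "Verifying addon metrics-server" amounts to waiting for the addon's workload to become available. A hedged client-go sketch that polls the kube-system Deployment: the Deployment name `metrics-server` is inferred from the pod name above, the kubeconfig location and timeouts are illustrative, and `wait.PollUntilContextTimeout` assumes a recent apimachinery:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll until the Deployment reports an available replica, i.e. the
	// metrics-server pod has left the Pending state seen in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			d, err := cs.AppsV1().Deployments("kube-system").Get(ctx, "metrics-server", metav1.GetOptions{})
			if err != nil {
				return false, nil // not created yet; keep polling
			}
			return d.Status.AvailableReplicas > 0, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("metrics-server is available")
}
```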
	I0829 20:32:18.850832   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:18.850853   66841 pod_ready.go:82] duration metric: took 2.007683093s for pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:18.850862   66841 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.357679   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.357702   66841 pod_ready.go:82] duration metric: took 1.506832539s for pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.357710   66841 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.361830   66841 pod_ready.go:93] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.361854   66841 pod_ready.go:82] duration metric: took 4.136801ms for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.361865   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.365719   66841 pod_ready.go:93] pod "kube-apiserver-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.365733   66841 pod_ready.go:82] duration metric: took 3.861894ms for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.365741   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.369596   66841 pod_ready.go:93] pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.369611   66841 pod_ready.go:82] duration metric: took 3.864669ms for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.369619   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f4x4j" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.447788   66841 pod_ready.go:93] pod "kube-proxy-f4x4j" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.447812   66841 pod_ready.go:82] duration metric: took 78.187574ms for pod "kube-proxy-f4x4j" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.447823   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:22.049084   66841 pod_ready.go:93] pod "kube-scheduler-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:22.049105   66841 pod_ready.go:82] duration metric: took 1.601276793s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:22.049113   66841 pod_ready.go:39] duration metric: took 5.215058301s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
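	[editor's note] Each pod_ready wait above boils down to polling the pod's `Ready` condition until it reports `True`. A minimal client-go sketch of that loop (kubeconfig path, the example pod name taken from the log, and the plain sleep loop are illustrative; minikube's pod_ready.go also distinguishes transient API errors):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the check behind `has status "Ready":"True"`.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // the log's 6m0s budget
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
			"etcd-no-preload-397724", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(time.Second) // transient errors just mean "poll again"
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```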
	I0829 20:32:22.049125   66841 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:32:22.049172   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:32:22.066060   66841 api_server.go:72] duration metric: took 5.514888299s to wait for apiserver process to appear ...
	I0829 20:32:22.066086   66841 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:32:22.066109   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:32:22.072343   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I0829 20:32:22.073798   66841 api_server.go:141] control plane version: v1.31.0
	I0829 20:32:22.073821   66841 api_server.go:131] duration metric: took 7.728095ms to wait for apiserver health ...
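	[editor's note] After /healthz returns 200, the log resolves the control-plane version (v1.31.0) through the discovery API. The same lookup with client-go, as a sketch assuming the default kubeconfig location:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /version on the apiserver; this is where "control plane
	// version: v1.31.0" in the log ultimately comes from.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
```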
	I0829 20:32:22.073828   66841 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:32:22.252273   66841 system_pods.go:59] 9 kube-system pods found
	I0829 20:32:22.252302   66841 system_pods.go:61] "coredns-6f6b679f8f-crgtj" [c48571a8-18ae-4737-a05b-4a77736aee35] Running
	I0829 20:32:22.252309   66841 system_pods.go:61] "coredns-6f6b679f8f-dw2r7" [6edda799-e2d6-402b-b4cd-7e54b2b89ca5] Running
	I0829 20:32:22.252315   66841 system_pods.go:61] "etcd-no-preload-397724" [15473208-a76c-4bc5-810f-e78d59538493] Running
	I0829 20:32:22.252320   66841 system_pods.go:61] "kube-apiserver-no-preload-397724" [521c6041-888f-4145-aabb-54da7382953d] Running
	I0829 20:32:22.252325   66841 system_pods.go:61] "kube-controller-manager-no-preload-397724" [fd5afaf8-898d-4985-8efc-5628709a52cd] Running
	I0829 20:32:22.252329   66841 system_pods.go:61] "kube-proxy-f4x4j" [eb76dc5a-016a-416c-8880-f76fc2d2a9bb] Running
	I0829 20:32:22.252333   66841 system_pods.go:61] "kube-scheduler-no-preload-397724" [77d9e2de-ee8e-4cb2-a7f0-5d9b96bd9691] Running
	I0829 20:32:22.252342   66841 system_pods.go:61] "metrics-server-6867b74b74-nxdc5" [6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:32:22.252348   66841 system_pods.go:61] "storage-provisioner" [8b6c02d6-7a39-4fea-80b4-4ba02904232c] Running
	I0829 20:32:22.252358   66841 system_pods.go:74] duration metric: took 178.523887ms to wait for pod list to return data ...
	I0829 20:32:22.252370   66841 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:32:22.448475   66841 default_sa.go:45] found service account: "default"
	I0829 20:32:22.448499   66841 default_sa.go:55] duration metric: took 196.123693ms for default service account to be created ...
	I0829 20:32:22.448508   66841 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:32:22.650996   66841 system_pods.go:86] 9 kube-system pods found
	I0829 20:32:22.651023   66841 system_pods.go:89] "coredns-6f6b679f8f-crgtj" [c48571a8-18ae-4737-a05b-4a77736aee35] Running
	I0829 20:32:22.651029   66841 system_pods.go:89] "coredns-6f6b679f8f-dw2r7" [6edda799-e2d6-402b-b4cd-7e54b2b89ca5] Running
	I0829 20:32:22.651033   66841 system_pods.go:89] "etcd-no-preload-397724" [15473208-a76c-4bc5-810f-e78d59538493] Running
	I0829 20:32:22.651037   66841 system_pods.go:89] "kube-apiserver-no-preload-397724" [521c6041-888f-4145-aabb-54da7382953d] Running
	I0829 20:32:22.651042   66841 system_pods.go:89] "kube-controller-manager-no-preload-397724" [fd5afaf8-898d-4985-8efc-5628709a52cd] Running
	I0829 20:32:22.651045   66841 system_pods.go:89] "kube-proxy-f4x4j" [eb76dc5a-016a-416c-8880-f76fc2d2a9bb] Running
	I0829 20:32:22.651048   66841 system_pods.go:89] "kube-scheduler-no-preload-397724" [77d9e2de-ee8e-4cb2-a7f0-5d9b96bd9691] Running
	I0829 20:32:22.651054   66841 system_pods.go:89] "metrics-server-6867b74b74-nxdc5" [6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:32:22.651058   66841 system_pods.go:89] "storage-provisioner" [8b6c02d6-7a39-4fea-80b4-4ba02904232c] Running
	I0829 20:32:22.651065   66841 system_pods.go:126] duration metric: took 202.552304ms to wait for k8s-apps to be running ...
	I0829 20:32:22.651071   66841 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:32:22.651111   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:22.666831   66841 system_svc.go:56] duration metric: took 15.753046ms WaitForService to wait for kubelet
	I0829 20:32:22.666863   66841 kubeadm.go:582] duration metric: took 6.115692499s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:32:22.666888   66841 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:32:22.848742   66841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:32:22.848766   66841 node_conditions.go:123] node cpu capacity is 2
	I0829 20:32:22.848777   66841 node_conditions.go:105] duration metric: took 181.884368ms to run NodePressure ...
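	[editor's note] The NodePressure verification reads each node's capacity (17734596Ki ephemeral storage and 2 CPUs here) and its pressure conditions. A hedged client-go sketch of the same inspection, again assuming the default kubeconfig:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		// MemoryPressure/DiskPressure/PIDPressure should all be False on
		// a healthy node; this is the "NodePressure" check in the log.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
```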
	I0829 20:32:22.848787   66841 start.go:241] waiting for startup goroutines ...
	I0829 20:32:22.848794   66841 start.go:246] waiting for cluster config update ...
	I0829 20:32:22.848803   66841 start.go:255] writing updated cluster config ...
	I0829 20:32:22.849030   66841 ssh_runner.go:195] Run: rm -f paused
	I0829 20:32:22.897503   66841 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:32:22.899404   66841 out.go:177] * Done! kubectl is now configured to use "no-preload-397724" cluster and "default" namespace by default
	I0829 20:32:29.924469   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:32:29.924707   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:32:29.924729   67607 kubeadm.go:310] 
	I0829 20:32:29.924801   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:32:29.924855   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:32:29.924865   67607 kubeadm.go:310] 
	I0829 20:32:29.924912   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:32:29.924960   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:32:29.925080   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:32:29.925090   67607 kubeadm.go:310] 
	I0829 20:32:29.925207   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:32:29.925256   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:32:29.925316   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:32:29.925342   67607 kubeadm.go:310] 
	I0829 20:32:29.925493   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:32:29.925616   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0829 20:32:29.925627   67607 kubeadm.go:310] 
	I0829 20:32:29.925776   67607 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:32:29.925909   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:32:29.926016   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:32:29.926134   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:32:29.926154   67607 kubeadm.go:310] 
	I0829 20:32:29.926605   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:32:29.926723   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:32:29.926812   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0829 20:32:29.926935   67607 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
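	[editor's note] The repeated [kubelet-check] failures above are kubeadm probing `http://localhost:10248/healthz` and getting connection refused because the v1.20.0 kubelet never came up. The probe itself is trivial to reproduce locally (a sketch; the one-second timeout is an assumption):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	// Same endpoint kubeadm's kubelet-check hits every few seconds.
	resp, err := client.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		// A dead kubelet yields the error seen in the log:
		// dial tcp 127.0.0.1:10248: connect: connection refused
		fmt.Fprintln(os.Stderr, "kubelet not healthy:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy kubelet returns 200 "ok"
}
```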
	
	I0829 20:32:29.926979   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:32:30.389951   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:30.408455   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:32:30.418493   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:32:30.418513   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:32:30.418582   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:32:30.427909   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:32:30.427957   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:32:30.437122   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:32:30.446157   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:32:30.446203   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:32:30.455480   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.464781   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:32:30.464834   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.474607   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:32:30.484537   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:32:30.484601   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
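The grep/rm pairs above are minikube's stale-config cleanup: for each kubeconfig file it greps for the expected control-plane endpoint and, when the check fails (here because the files do not exist at all), removes the file before re-running kubeadm init. A condensed sketch of the same pattern (an illustrative one-liner, not minikube's actual code):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf   # drop configs that are missing or point at a stale endpoint
    done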
	I0829 20:32:30.494170   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:32:30.717349   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:34:26.784436   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:34:26.784518   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 20:34:26.786158   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:34:26.786196   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:34:26.786276   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:34:26.786353   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:34:26.786437   67607 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:34:26.786486   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:34:26.788271   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:34:26.788380   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:34:26.788453   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:34:26.788523   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:34:26.788593   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:34:26.788665   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:34:26.788714   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:34:26.788769   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:34:26.788826   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:34:26.788894   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:34:26.788961   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:34:26.788993   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:34:26.789044   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:34:26.789084   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:34:26.789143   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:34:26.789228   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:34:26.789312   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:34:26.789441   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:34:26.789577   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:34:26.789647   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:34:26.789717   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:34:26.791166   67607 out.go:235]   - Booting up control plane ...
	I0829 20:34:26.791239   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:34:26.791305   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:34:26.791382   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:34:26.791465   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:34:26.791597   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:34:26.791658   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:34:26.791736   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.791926   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792008   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792182   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792254   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792435   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792492   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792725   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792798   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.793026   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.793043   67607 kubeadm.go:310] 
	I0829 20:34:26.793091   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:34:26.793148   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:34:26.793159   67607 kubeadm.go:310] 
	I0829 20:34:26.793188   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:34:26.793219   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:34:26.793305   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:34:26.793314   67607 kubeadm.go:310] 
	I0829 20:34:26.793438   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:34:26.793483   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:34:26.793515   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:34:26.793522   67607 kubeadm.go:310] 
	I0829 20:34:26.793618   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:34:26.793735   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 20:34:26.793748   67607 kubeadm.go:310] 
	I0829 20:34:26.793895   67607 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:34:26.794020   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:34:26.794125   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:34:26.794227   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:34:26.794285   67607 kubeadm.go:310] 
	I0829 20:34:26.794300   67607 kubeadm.go:394] duration metric: took 7m57.183485424s to StartCluster
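The repeated [kubelet-check] failures echoed above come from kubeadm polling the kubelet's local health endpoint on port 10248 until its 4m0s wait expires. Assuming shell access to the node (e.g. via minikube ssh), the same probe can be reproduced by hand:

    curl -sSL http://localhost:10248/healthz   # a healthy kubelet answers "ok"
    systemctl status kubelet                   # "connection refused" above usually means the unit is dead or crash-looping

Here the probe never succeeded within the wait window, so init aborted with the timeout shown.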
	I0829 20:34:26.794357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:34:26.794410   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:34:26.837033   67607 cri.go:89] found id: ""
	I0829 20:34:26.837072   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.837083   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:34:26.837091   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:34:26.837153   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:34:26.871177   67607 cri.go:89] found id: ""
	I0829 20:34:26.871203   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.871213   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:34:26.871220   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:34:26.871280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:34:26.905409   67607 cri.go:89] found id: ""
	I0829 20:34:26.905432   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.905442   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:34:26.905450   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:34:26.905509   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:34:26.940119   67607 cri.go:89] found id: ""
	I0829 20:34:26.940150   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.940161   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:34:26.940169   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:34:26.940217   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:34:26.974555   67607 cri.go:89] found id: ""
	I0829 20:34:26.974589   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.974601   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:34:26.974608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:34:26.974674   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:34:27.010586   67607 cri.go:89] found id: ""
	I0829 20:34:27.010616   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.010631   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:34:27.010639   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:34:27.010704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:34:27.044867   67607 cri.go:89] found id: ""
	I0829 20:34:27.044900   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.044913   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:34:27.044921   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:34:27.044979   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:34:27.079282   67607 cri.go:89] found id: ""
	I0829 20:34:27.079308   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.079316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
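With the API server unreachable, minikube falls back to asking CRI-O directly whether any control-plane containers were ever created. Every per-component query above returned an empty ID list; the same check, condensed (a sketch using the crictl invocation from the log):

    for c in kube-apiserver etcd kube-scheduler kube-controller-manager kube-proxy; do
      sudo crictl ps -a --quiet --name=$c      # empty output: the container was never created
    done

Empty results for every component confirm the failure happened before the kubelet started any static pods, consistent with the kubelet health-check timeouts.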
	I0829 20:34:27.079323   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:34:27.079335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:34:27.093455   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:34:27.093485   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:34:27.179256   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:34:27.179280   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:34:27.179292   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:34:27.305873   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:34:27.305906   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:34:27.349676   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:34:27.349702   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
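The gathering steps above (dmesg, describe nodes, the CRI-O journal, container status, and the kubelet journal) are what minikube aggregates into the failure report that follows. The kubelet journal is the most useful of these for a K8S_KUBELET_NOT_RUNNING diagnosis; assuming the cluster VM is still running, it can be pulled directly with:

    minikube ssh -- sudo journalctl -u kubelet -n 400   # add -p <profile> for a named profile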
	W0829 20:34:27.399787   67607 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 20:34:27.399851   67607 out.go:270] * 
	W0829 20:34:27.399907   67607 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout/stderr: identical to the kubeadm init output shown in full above; omitted here.
	
	W0829 20:34:27.399919   67607 out.go:270] * 
	W0829 20:34:27.400631   67607 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 20:34:27.403773   67607 out.go:201] 
	W0829 20:34:27.404902   67607 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout/stderr: identical to the kubeadm init output shown in full above; omitted here.
	
	W0829 20:34:27.404953   67607 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 20:34:27.404981   67607 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
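The suggestion above points at a cgroup-driver mismatch: the kubelet must use the same cgroup manager as the container runtime (typically systemd with CRI-O). Assuming a retry on the same profile, the fix from the message would look like:

    minikube start --extra-config=kubelet.cgroup-driver=systemd

followed by re-checking 'journalctl -xeu kubelet' on the node if the kubelet still fails to come up.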
	I0829 20:34:27.406310   67607 out.go:201] 
	
	
	==> CRI-O <==
	Aug 29 20:40:58 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:40:58.047491494Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964058047469062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c69f7eb8-56e3-4d99-9803-319d16c871ae name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:40:58 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:40:58.048448860Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73033aca-d425-474e-aa6f-e5a71f78fcb9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:40:58 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:40:58.048515861Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73033aca-d425-474e-aa6f-e5a71f78fcb9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:40:58 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:40:58.048780079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e74650e815d0ebb9e571fffeb67d5daf0eecc3b9277d002bf215d8c23e746ce1,PodSandboxId:6314bd63c8faffd7a2132769f0b5566b225309eac204c502496a1d9009058d71,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963508150201334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81531989-d045-44fb-b1a1-0817af27c804,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e00316ba74bfecb01e600a5b225e97d007f7e808c279766683e5ffc0d89b5b7,PodSandboxId:cf42cf6b2bc99285118aedd1a788d3985775a28b6e61ea8ca14ccd3e32ae3f03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963507706111777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l25kd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86947930-0d47-407a-b876-b482596fbe8f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b14158df556054b9512a278737e089135111eb66e6c7704568db076062574121,PodSandboxId:0735c91e139826b75f188c2b1ee3d528c8d08871ecd4074253ef8afe27cc6394,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963507555614630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lnm92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a6caefe0-e883-4460-87de-25ee97191e1a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb536f9758a829bd1712db0f4afcb55637f0ae9c60271ae7fd453ef123c2f3d8,PodSandboxId:3d30ef69309a1781dd6ecf6e58ecf1a01f73e66ad2340217612d1bc2541cfacb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724963507022111607,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ptswc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96c01414-e8e8-4731-824b-11d636285fb3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7bd61a6fe654d4ad5c149a10789b03edc6d49d5d95bef662753f186c0f929,PodSandboxId:c6c9318f8ce085f432f5cf94524fe98fcddfbd1c738bf51adc0515e55053320b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963495960510210
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5e67ab7070b5ee816dfb9f010341b41,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73b74ec8a00731f45de32583d0f603e164ce0d29fc981ba9d8539c1c794612a0,PodSandboxId:0d7a1fcbe06bde122d266507f385676279485ca5e151bf683e3aadf5f916a152,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963495992555346,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5832a6d2361afe00d8adcf51f306780e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882e84e9fa32f87b2b6ddae42319c25903c8398224a894c8499553878bc782ab,PodSandboxId:397f31c2e89b9c9daf0dad789a94e7007ae4b3e643978e4882a785794fe07f12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:17249634959229
98565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a534bbc2142697d334cc8b549bf3b1f2,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe0c33110958bd07c8bba63fecb131e682266c5d51683606fc412ffa9e2be04,PodSandboxId:3ca860b0d95ccc4fe54c1384cbfbf5d044111672c87d26d1347e8deae4a19820,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17249634
95853956905,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f880e143b217d3e5f7e4426cfaeb999,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6d34394831076ef7f414268020afd8668b079b4c58634f4ff73b97a538b7c4,PodSandboxId:2c773eb8560fd46c5f4c95aa7ad228b7d284855a0831a838a8579814e2c31766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724963208675808831,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5832a6d2361afe00d8adcf51f306780e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73033aca-d425-474e-aa6f-e5a71f78fcb9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:40:58 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:40:58.093756125Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4578286-fc86-4986-ab1c-2b9a5848823d name=/runtime.v1.RuntimeService/Version
	Aug 29 20:40:58 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:40:58.093826728Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4578286-fc86-4986-ab1c-2b9a5848823d name=/runtime.v1.RuntimeService/Version
	Aug 29 20:40:58 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:40:58.095876252Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43510919-a770-40e7-899e-b4fe9ef8f1be name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:40:58 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:40:58.096268024Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964058096246762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43510919-a770-40e7-899e-b4fe9ef8f1be name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:40:58 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:40:58.096859442Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4712d579-9957-429c-900b-7a2eec50d143 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:40:58 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:40:58.096928642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4712d579-9957-429c-900b-7a2eec50d143 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:40:58 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:40:58.097135798Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e74650e815d0ebb9e571fffeb67d5daf0eecc3b9277d002bf215d8c23e746ce1,PodSandboxId:6314bd63c8faffd7a2132769f0b5566b225309eac204c502496a1d9009058d71,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963508150201334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81531989-d045-44fb-b1a1-0817af27c804,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e00316ba74bfecb01e600a5b225e97d007f7e808c279766683e5ffc0d89b5b7,PodSandboxId:cf42cf6b2bc99285118aedd1a788d3985775a28b6e61ea8ca14ccd3e32ae3f03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963507706111777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l25kd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86947930-0d47-407a-b876-b482596fbe8f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b14158df556054b9512a278737e089135111eb66e6c7704568db076062574121,PodSandboxId:0735c91e139826b75f188c2b1ee3d528c8d08871ecd4074253ef8afe27cc6394,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963507555614630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lnm92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a6caefe0-e883-4460-87de-25ee97191e1a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb536f9758a829bd1712db0f4afcb55637f0ae9c60271ae7fd453ef123c2f3d8,PodSandboxId:3d30ef69309a1781dd6ecf6e58ecf1a01f73e66ad2340217612d1bc2541cfacb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724963507022111607,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ptswc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96c01414-e8e8-4731-824b-11d636285fb3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7bd61a6fe654d4ad5c149a10789b03edc6d49d5d95bef662753f186c0f929,PodSandboxId:c6c9318f8ce085f432f5cf94524fe98fcddfbd1c738bf51adc0515e55053320b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963495960510210
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5e67ab7070b5ee816dfb9f010341b41,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73b74ec8a00731f45de32583d0f603e164ce0d29fc981ba9d8539c1c794612a0,PodSandboxId:0d7a1fcbe06bde122d266507f385676279485ca5e151bf683e3aadf5f916a152,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963495992555346,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5832a6d2361afe00d8adcf51f306780e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882e84e9fa32f87b2b6ddae42319c25903c8398224a894c8499553878bc782ab,PodSandboxId:397f31c2e89b9c9daf0dad789a94e7007ae4b3e643978e4882a785794fe07f12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:17249634959229
98565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a534bbc2142697d334cc8b549bf3b1f2,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe0c33110958bd07c8bba63fecb131e682266c5d51683606fc412ffa9e2be04,PodSandboxId:3ca860b0d95ccc4fe54c1384cbfbf5d044111672c87d26d1347e8deae4a19820,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17249634
95853956905,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f880e143b217d3e5f7e4426cfaeb999,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6d34394831076ef7f414268020afd8668b079b4c58634f4ff73b97a538b7c4,PodSandboxId:2c773eb8560fd46c5f4c95aa7ad228b7d284855a0831a838a8579814e2c31766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724963208675808831,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5832a6d2361afe00d8adcf51f306780e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4712d579-9957-429c-900b-7a2eec50d143 name=/runtime.v1.RuntimeService/ListContainers
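
	The Version/ImageFsInfo/ListContainers traffic above is routine CRI polling of CRI-O over its local socket; each poll returns the same nine-container list. For reference, a minimal Go sketch of the same ListContainers call, assuming shell access to the node and the socket path from the node's cri-socket annotation (unix:///var/run/crio/crio.sock); the stubs are the published k8s.io/cri-api bindings, nothing minikube-specific. On the node itself, crictl ps -a renders the same data as the container status table below.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket shown in the node annotations; the CRI
		// endpoint is plaintext gRPC over a unix socket.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same call as the /runtime.v1.RuntimeService/ListContainers entries
		// above: an empty filter returns the full container list.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-24s %-18s attempt=%d\n",
				c.Id[:13], c.Metadata.Name, c.State, c.Metadata.Attempt)
		}
	}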
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e74650e815d0e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   6314bd63c8faf       storage-provisioner
	9e00316ba74bf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   cf42cf6b2bc99       coredns-6f6b679f8f-l25kd
	b14158df55605       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   0735c91e13982       coredns-6f6b679f8f-lnm92
	fb536f9758a82       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   3d30ef69309a1       kube-proxy-ptswc
	73b74ec8a0073       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   0d7a1fcbe06bd       kube-apiserver-default-k8s-diff-port-145096
	73a7bd61a6fe6       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   c6c9318f8ce08       kube-scheduler-default-k8s-diff-port-145096
	882e84e9fa32f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   397f31c2e89b9       kube-controller-manager-default-k8s-diff-port-145096
	0fe0c33110958       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   3ca860b0d95cc       etcd-default-k8s-diff-port-145096
	da6d343948310       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   2c773eb8560fd       kube-apiserver-default-k8s-diff-port-145096
	
	
	==> coredns [9e00316ba74bfecb01e600a5b225e97d007f7e808c279766683e5ffc0d89b5b7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b14158df556054b9512a278737e089135111eb66e6c7704568db076062574121] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-145096
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-145096
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=default-k8s-diff-port-145096
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T20_31_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 20:31:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-145096
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 20:40:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 20:36:58 +0000   Thu, 29 Aug 2024 20:31:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 20:36:58 +0000   Thu, 29 Aug 2024 20:31:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 20:36:58 +0000   Thu, 29 Aug 2024 20:31:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 20:36:58 +0000   Thu, 29 Aug 2024 20:31:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.140
	  Hostname:    default-k8s-diff-port-145096
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2afa83c73ada46a2971bb4d5d93e2336
	  System UUID:                2afa83c7-3ada-46a2-971b-b4d5d93e2336
	  Boot ID:                    f8846589-9d1a-4563-949d-ad4a4ac61d53
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-l25kd                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-6f6b679f8f-lnm92                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-default-k8s-diff-port-145096                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-145096             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-145096    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-ptswc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-default-k8s-diff-port-145096             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-6sdqg                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m10s  kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node default-k8s-diff-port-145096 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node default-k8s-diff-port-145096 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node default-k8s-diff-port-145096 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s  node-controller  Node default-k8s-diff-port-145096 event: Registered Node default-k8s-diff-port-145096 in Controller
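
	The conditions and allocatable figures above can be cross-checked programmatically. A minimal client-go sketch, assuming a kubeconfig for this cluster at the default ~/.kube/config location (clientcmd.RecommendedHomeFile; substitute the profile's kubeconfig as needed), that prints the same Conditions block:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		node, err := clientset.CoreV1().Nodes().Get(context.Background(),
			"default-k8s-diff-port-145096", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}

		// Mirrors the Conditions block of `kubectl describe node`.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
		fmt.Println("allocatable cpu:   ", node.Status.Allocatable.Cpu().String())
		fmt.Println("allocatable memory:", node.Status.Allocatable.Memory().String())
	}

	Note that LastHeartbeatTime trailing RenewTime by about four minutes is expected: the kubelet patches node status on change or every five minutes by default, while the node lease is renewed roughly every ten seconds.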
	
	
	==> dmesg <==
	[  +0.060252] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050563] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.098792] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.463652] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.560476] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.055091] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.063289] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059637] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.227807] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.141345] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.310908] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[  +4.284938] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +0.067478] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.874590] systemd-fstab-generator[916]: Ignoring "noauto" option for root device
	[  +4.683226] kauditd_printk_skb: 97 callbacks suppressed
	[Aug29 20:27] kauditd_printk_skb: 90 callbacks suppressed
	[Aug29 20:31] systemd-fstab-generator[2539]: Ignoring "noauto" option for root device
	[  +0.067514] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.497110] systemd-fstab-generator[2863]: Ignoring "noauto" option for root device
	[  +0.090496] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.793672] systemd-fstab-generator[2976]: Ignoring "noauto" option for root device
	[  +0.731391] kauditd_printk_skb: 34 callbacks suppressed
	[  +9.265117] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [0fe0c33110958bd07c8bba63fecb131e682266c5d51683606fc412ffa9e2be04] <==
	{"level":"info","ts":"2024-08-29T20:31:36.248717Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T20:31:36.249099Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.140:2380"}
	{"level":"info","ts":"2024-08-29T20:31:36.249132Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.140:2380"}
	{"level":"info","ts":"2024-08-29T20:31:36.250109Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"bc75878aaf44c549","initial-advertise-peer-urls":["https://192.168.72.140:2380"],"listen-peer-urls":["https://192.168.72.140:2380"],"advertise-client-urls":["https://192.168.72.140:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.140:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T20:31:36.250147Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T20:31:36.711472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bc75878aaf44c549 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-29T20:31:36.711522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bc75878aaf44c549 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-29T20:31:36.711557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bc75878aaf44c549 received MsgPreVoteResp from bc75878aaf44c549 at term 1"}
	{"level":"info","ts":"2024-08-29T20:31:36.711622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bc75878aaf44c549 became candidate at term 2"}
	{"level":"info","ts":"2024-08-29T20:31:36.711630Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bc75878aaf44c549 received MsgVoteResp from bc75878aaf44c549 at term 2"}
	{"level":"info","ts":"2024-08-29T20:31:36.711639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bc75878aaf44c549 became leader at term 2"}
	{"level":"info","ts":"2024-08-29T20:31:36.711646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bc75878aaf44c549 elected leader bc75878aaf44c549 at term 2"}
	{"level":"info","ts":"2024-08-29T20:31:36.717668Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:31:36.720784Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"bc75878aaf44c549","local-member-attributes":"{Name:default-k8s-diff-port-145096 ClientURLs:[https://192.168.72.140:2379]}","request-path":"/0/members/bc75878aaf44c549/attributes","cluster-id":"3501d2cdd2f1863a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T20:31:36.720840Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T20:31:36.722620Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T20:31:36.723336Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T20:31:36.737863Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T20:31:36.729652Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3501d2cdd2f1863a","local-member-id":"bc75878aaf44c549","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:31:36.730272Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T20:31:36.740254Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T20:31:36.768752Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T20:31:36.768907Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:31:36.768956Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:31:36.776750Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.140:2379"}
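
	The raft messages above show a single-member cluster electing itself at term 2 and then serving clients on 192.168.72.140:2379. A sketch that queries that endpoint for member status with the etcd v3 client, assuming the minikube-generated certificates under /var/lib/minikube/certs/etcd are readable (the server pair is reused here for client auth; a dedicated healthcheck-client pair, where present, would be the more conventional choice):

	package main

	import (
		"context"
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"os"
		"time"

		clientv3 "go.etcd.io/etcd/client/v3"
	)

	func main() {
		// Certificate paths match the "starting with client TLS" line above.
		cert, err := tls.LoadX509KeyPair(
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/etcd/server.key")
		if err != nil {
			panic(err)
		}
		caPEM, err := os.ReadFile("/var/lib/minikube/certs/etcd/ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)

		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"https://192.168.72.140:2379"},
			DialTimeout: 5 * time.Second,
			TLS:         &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
		})
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		// Status reports the member's version, leader id, and raft term,
		// which should match the election log above (term 2, self-leader).
		st, err := cli.Status(ctx, "https://192.168.72.140:2379")
		if err != nil {
			panic(err)
		}
		fmt.Printf("version=%s leader=%x raftTerm=%d\n", st.Version, st.Leader, st.RaftTerm)
	}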
	
	
	==> kernel <==
	 20:40:58 up 14 min,  0 users,  load average: 0.06, 0.22, 0.15
	Linux default-k8s-diff-port-145096 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [73b74ec8a00731f45de32583d0f603e164ce0d29fc981ba9d8539c1c794612a0] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 20:36:39.594005       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:36:39.594053       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 20:36:39.595110       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:36:39.595144       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 20:37:39.596335       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:37:39.596666       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 20:37:39.596838       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:37:39.596948       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 20:37:39.597839       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:37:39.598919       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 20:39:39.598879       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:39:39.599253       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 20:39:39.599378       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:39:39.599431       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 20:39:39.600457       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:39:39.600533       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
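
	Each 503 above is the aggregation layer failing to fetch the OpenAPI spec from the backend of the v1beta1.metrics.k8s.io APIService, i.e. the metrics-server pod that the failing addon tests wait on. The same state is visible on the APIService object itself; a sketch using the kube-aggregator clientset, assuming the same kubeconfig as in the earlier sketch:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := aggregator.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// The APIService the apiserver keeps requeueing in the log above.
		svc, err := client.ApiregistrationV1().APIServices().Get(context.Background(),
			"v1beta1.metrics.k8s.io", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range svc.Status.Conditions {
			// An unreachable backend typically shows Available=False with a
			// FailedDiscoveryCheck reason.
			fmt.Printf("%s=%s reason=%s msg=%s\n",
				cond.Type, cond.Status, cond.Reason, cond.Message)
		}
	}

	The one-liner equivalent is kubectl get apiservice v1beta1.metrics.k8s.io, whose AVAILABLE column reflects the same condition.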
	
	
	==> kube-apiserver [da6d34394831076ef7f414268020afd8668b079b4c58634f4ff73b97a538b7c4] <==
	W0829 20:31:28.649694       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.698204       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.716983       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.755753       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.790787       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.800857       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.820361       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.843784       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.875641       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.913253       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.007707       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.018169       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.019510       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.041188       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.052810       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.064440       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.179219       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.196063       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.212515       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.266778       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.281148       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.424672       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.458184       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.470653       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.821268       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [882e84e9fa32f87b2b6ddae42319c25903c8398224a894c8499553878bc782ab] <==
	E0829 20:35:45.643545       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:35:46.074532       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:36:15.650273       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:36:16.081661       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:36:45.656927       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:36:46.089258       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 20:36:58.159781       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-145096"
	E0829 20:37:15.663528       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:37:16.097226       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 20:37:43.556029       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="364.792µs"
	E0829 20:37:45.670858       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:37:46.105093       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 20:37:55.555180       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="215.154µs"
	E0829 20:38:15.677390       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:38:16.112914       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:38:45.685485       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:38:46.120938       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:39:15.691860       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:39:16.129612       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:39:45.700625       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:39:46.139775       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:40:15.707888       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:40:16.151250       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:40:45.714974       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:40:46.159054       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fb536f9758a829bd1712db0f4afcb55637f0ae9c60271ae7fd453ef123c2f3d8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 20:31:47.487144       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 20:31:47.598392       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.140"]
	E0829 20:31:47.598484       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 20:31:48.104791       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 20:31:48.104831       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 20:31:48.104855       1 server_linux.go:169] "Using iptables Proxier"
	I0829 20:31:48.124657       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 20:31:48.124967       1 server.go:483] "Version info" version="v1.31.0"
	I0829 20:31:48.124978       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 20:31:48.127410       1 config.go:197] "Starting service config controller"
	I0829 20:31:48.127426       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 20:31:48.127445       1 config.go:104] "Starting endpoint slice config controller"
	I0829 20:31:48.127449       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 20:31:48.127858       1 config.go:326] "Starting node config controller"
	I0829 20:31:48.127867       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 20:31:48.229739       1 shared_informer.go:320] Caches are synced for service config
	I0829 20:31:48.229816       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 20:31:48.237673       1 shared_informer.go:320] Caches are synced for node config
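
The truncated block at the top of this section is apparently the tail of the same "Error cleaning up nftables rules" message for the ip family that is shown in full for ip6 below it; its header line fell outside the 25-line log window. The errors are benign: kube-proxy tries to delete leftover nftables state at startup, the guest kernel rejects nft table creation ("Operation not supported"), and kube-proxy falls back to the iptables proxier as the following lines show. A quick manual confirmation, assuming the nft binary is present in the minikube guest image:

    out/minikube-linux-amd64 ssh -p default-k8s-diff-port-145096 -- sudo nft add table ip kube-proxy

If that also fails with "Operation not supported", the kernel simply lacks nf_tables support and the cleanup errors can be ignored.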
	
	
	==> kube-scheduler [73a7bd61a6fe654d4ad5c149a10789b03edc6d49d5d95bef662753f186c0f929] <==
	W0829 20:31:39.490839       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 20:31:39.490990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.575902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 20:31:39.576012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.737398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 20:31:39.737522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.753614       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 20:31:39.753672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.800905       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 20:31:39.800957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.806932       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 20:31:39.806985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.857127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 20:31:39.857255       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.883095       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0829 20:31:39.884716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.894850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 20:31:39.895087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.897205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 20:31:39.897265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.902253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 20:31:39.902334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:40.066810       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 20:31:40.066858       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 20:31:41.831974       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
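
The burst of "forbidden" list/watch errors above is a startup ordering race, not a persistent RBAC problem: the scheduler's informers start before the apiserver has finished reconciling the system:kube-scheduler bindings, and the errors stop once caches sync (last line). One way to confirm the permissions settled afterwards, using kubectl impersonation:

    kubectl --context default-k8s-diff-port-145096 auth can-i list pods --as=system:kube-scheduler --all-namespaces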
	
	
	==> kubelet <==
	Aug 29 20:39:51 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:39:51.687352    2870 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963991687024013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:39:51 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:39:51.687396    2870 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724963991687024013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:39:54 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:39:54.538703    2870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6sdqg" podUID="2c9efadb-89bb-4aa6-b0f0-ddcb3e931674"
	Aug 29 20:40:01 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:01.692467    2870 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964001691234817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:01 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:01.693118    2870 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964001691234817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:05 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:05.538107    2870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6sdqg" podUID="2c9efadb-89bb-4aa6-b0f0-ddcb3e931674"
	Aug 29 20:40:11 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:11.695156    2870 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964011694874903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:11 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:11.695428    2870 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964011694874903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:16 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:16.537788    2870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6sdqg" podUID="2c9efadb-89bb-4aa6-b0f0-ddcb3e931674"
	Aug 29 20:40:21 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:21.696798    2870 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964021696450235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:21 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:21.697229    2870 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964021696450235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:29 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:29.539493    2870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6sdqg" podUID="2c9efadb-89bb-4aa6-b0f0-ddcb3e931674"
	Aug 29 20:40:31 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:31.699472    2870 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964031699112078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:31 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:31.700002    2870 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964031699112078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:40 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:40.538527    2870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6sdqg" podUID="2c9efadb-89bb-4aa6-b0f0-ddcb3e931674"
	Aug 29 20:40:41 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:41.612747    2870 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 20:40:41 default-k8s-diff-port-145096 kubelet[2870]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 20:40:41 default-k8s-diff-port-145096 kubelet[2870]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 20:40:41 default-k8s-diff-port-145096 kubelet[2870]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 20:40:41 default-k8s-diff-port-145096 kubelet[2870]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 20:40:41 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:41.702053    2870 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964041701553893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:41 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:41.702089    2870 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964041701553893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:51 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:51.704408    2870 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964051704010079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:51 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:51.704446    2870 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964051704010079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:52 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:40:52.538832    2870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6sdqg" podUID="2c9efadb-89bb-4aa6-b0f0-ddcb3e931674"
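
Three failure modes repeat in the kubelet log: the eviction manager rejects the CRI-O ImageFsInfo response as incomplete ("missing image stats"; note the empty ContainerFilesystems list), the periodic ip6tables canary fails because the guest kernel has no ip6 nat table, and metrics-server sits in ImagePullBackOff because this test deliberately points the addon at the unreachable registry fake.domain (see the `addons enable metrics-server ... --registries=MetricsServer=fake.domain` rows in the Audit table further down). The broken image reference can be confirmed with:

    kubectl --context default-k8s-diff-port-145096 -n kube-system get pod -l k8s-app=metrics-server -o jsonpath='{.items[*].spec.containers[*].image}'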
	
	
	==> storage-provisioner [e74650e815d0ebb9e571fffeb67d5daf0eecc3b9277d002bf215d8c23e746ce1] <==
	I0829 20:31:48.306160       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 20:31:48.349146       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 20:31:48.349438       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 20:31:48.382051       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 20:31:48.382429       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-145096_d6e631d5-24cb-48c0-ba36-ad4244266dd5!
	I0829 20:31:48.383710       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"725bf0d4-3b04-47ea-a1d5-d42568638d45", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-145096_d6e631d5-24cb-48c0-ba36-ad4244266dd5 became leader
	I0829 20:31:48.483220       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-145096_d6e631d5-24cb-48c0-ba36-ad4244266dd5!
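
The storage provisioner itself is healthy: it wins leader election over an Endpoints-based lock (the Event above references Kind:"Endpoints", Name:"k8s.io-minikube-hostpath") and starts its controller. The current lock holder is recorded in the control-plane.alpha.kubernetes.io/leader annotation and can be read back with:

    kubectl --context default-k8s-diff-port-145096 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml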
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-145096 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-6sdqg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-145096 describe pod metrics-server-6867b74b74-6sdqg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-145096 describe pod metrics-server-6867b74b74-6sdqg: exit status 1 (62.794257ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-6sdqg" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-145096 describe pod metrics-server-6867b74b74-6sdqg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.24s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.2s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0829 20:33:45.974768   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-397724 -n no-preload-397724
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-29 20:41:23.417204696 +0000 UTC m=+6347.587665943
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
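The test polls for up to 9m0s for any pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. None ever appears, which is consistent with the earlier `addons enable dashboard -p no-preload-397724` run never completing (its row in the Audit table below has no End Time). The equivalent manual check is:

    kubectl --context no-preload-397724 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard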
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397724 -n no-preload-397724
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-397724 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-397724 logs -n 25: (2.104489937s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-397724                                   | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-388383            | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC | 29 Aug 24 20:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-695305             | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-695305                  | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-695305 --memory=2200 --alsologtostderr   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:20 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-695305 image list                           | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:21 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-032002        | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-397724                  | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-397724                                   | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-388383                 | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-145096  | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-032002             | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-145096       | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC | 29 Aug 24 20:31 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
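
Note from the Audit table that the `stop ... --alsologtostderr -v=3` rows for no-preload-397724, embed-certs-388383 and default-k8s-diff-port-145096 never recorded an End Time; the subsequent `start` runs for those profiles therefore go through the fixHost/restart path traced in the Last Start log below. Current profile state can be listed with:

    out/minikube-linux-amd64 profile list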
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 20:24:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 20:24:16.618808   68084 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:24:16.619043   68084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:24:16.619051   68084 out.go:358] Setting ErrFile to fd 2...
	I0829 20:24:16.619055   68084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:24:16.619206   68084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:24:16.619741   68084 out.go:352] Setting JSON to false
	I0829 20:24:16.620649   68084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7604,"bootTime":1724955453,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 20:24:16.620702   68084 start.go:139] virtualization: kvm guest
	I0829 20:24:16.622891   68084 out.go:177] * [default-k8s-diff-port-145096] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 20:24:16.624228   68084 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 20:24:16.624256   68084 notify.go:220] Checking for updates...
	I0829 20:24:16.627123   68084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 20:24:16.628611   68084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:24:16.629858   68084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:24:16.631013   68084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 20:24:16.632116   68084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 20:24:16.633630   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:24:16.634042   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:24:16.634080   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:24:16.648879   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0829 20:24:16.649315   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:24:16.649875   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:24:16.649893   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:24:16.650274   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:24:16.650504   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:24:16.650776   68084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 20:24:16.651053   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:24:16.651111   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:24:16.665964   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0829 20:24:16.666402   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:24:16.666918   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:24:16.666937   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:24:16.667250   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:24:16.667435   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:24:16.698712   68084 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 20:24:16.700010   68084 start.go:297] selected driver: kvm2
	I0829 20:24:16.700023   68084 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:24:16.700131   68084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 20:24:16.700915   68084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:24:16.700998   68084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 20:24:16.715940   68084 install.go:137] /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0829 20:24:16.716321   68084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:24:16.716388   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:24:16.716405   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:24:16.716452   68084 start.go:340] cluster config:
	{Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:24:16.716563   68084 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:24:16.718175   68084 out.go:177] * Starting "default-k8s-diff-port-145096" primary control-plane node in "default-k8s-diff-port-145096" cluster
	I0829 20:24:16.258820   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:16.719204   68084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:24:16.719231   68084 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 20:24:16.719237   68084 cache.go:56] Caching tarball of preloaded images
	I0829 20:24:16.719296   68084 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 20:24:16.719305   68084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 20:24:16.719385   68084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/config.json ...
	I0829 20:24:16.719549   68084 start.go:360] acquireMachinesLock for default-k8s-diff-port-145096: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:24:22.338805   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:25.410778   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:31.490844   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:34.562885   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:40.642793   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:43.714939   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:49.794765   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:52.866858   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:58.946771   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:02.018832   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:08.098829   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:11.170833   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:17.250794   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:20.322926   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:26.402827   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:29.474844   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:35.554771   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:38.626850   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:41.630257   66989 start.go:364] duration metric: took 4m26.950412835s to acquireMachinesLock for "embed-certs-388383"
	I0829 20:25:41.630308   66989 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:25:41.630316   66989 fix.go:54] fixHost starting: 
	I0829 20:25:41.630791   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:25:41.630828   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:25:41.646005   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32873
	I0829 20:25:41.646405   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:25:41.646932   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:25:41.646959   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:25:41.647308   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:25:41.647525   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:25:41.647686   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:25:41.649457   66989 fix.go:112] recreateIfNeeded on embed-certs-388383: state=Stopped err=<nil>
	I0829 20:25:41.649491   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	W0829 20:25:41.649639   66989 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:25:41.651109   66989 out.go:177] * Restarting existing kvm2 VM for "embed-certs-388383" ...
	I0829 20:25:41.627651   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:25:41.627705   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:25:41.628067   66841 buildroot.go:166] provisioning hostname "no-preload-397724"
	I0829 20:25:41.628089   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:25:41.628259   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:25:41.630106   66841 machine.go:96] duration metric: took 4m35.46951337s to provisionDockerMachine
	I0829 20:25:41.630148   66841 fix.go:56] duration metric: took 4m35.494271139s for fixHost
	I0829 20:25:41.630159   66841 start.go:83] releasing machines lock for "no-preload-397724", held for 4m35.494325078s
	W0829 20:25:41.630182   66841 start.go:714] error starting host: provision: host is not running
	W0829 20:25:41.630284   66841 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0829 20:25:41.630295   66841 start.go:729] Will try again in 5 seconds ...
	I0829 20:25:41.652159   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Start
	I0829 20:25:41.652318   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring networks are active...
	I0829 20:25:41.653011   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring network default is active
	I0829 20:25:41.653426   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring network mk-embed-certs-388383 is active
	I0829 20:25:41.653824   66989 main.go:141] libmachine: (embed-certs-388383) Getting domain xml...
	I0829 20:25:41.654765   66989 main.go:141] libmachine: (embed-certs-388383) Creating domain...
	I0829 20:25:42.860512   66989 main.go:141] libmachine: (embed-certs-388383) Waiting to get IP...
	I0829 20:25:42.861297   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:42.861661   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:42.861739   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:42.861649   68412 retry.go:31] will retry after 207.172422ms: waiting for machine to come up
	I0829 20:25:43.070026   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.070414   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.070445   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.070368   68412 retry.go:31] will retry after 336.815982ms: waiting for machine to come up
	I0829 20:25:43.408817   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.409144   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.409182   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.409117   68412 retry.go:31] will retry after 330.159156ms: waiting for machine to come up
	I0829 20:25:43.740518   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.741039   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.741065   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.741002   68412 retry.go:31] will retry after 528.906592ms: waiting for machine to come up
	I0829 20:25:44.271695   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:44.272286   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:44.272344   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:44.272280   68412 retry.go:31] will retry after 616.92568ms: waiting for machine to come up
	I0829 20:25:46.631383   66841 start.go:360] acquireMachinesLock for no-preload-397724: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:25:44.891133   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:44.891535   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:44.891566   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:44.891499   68412 retry.go:31] will retry after 907.330558ms: waiting for machine to come up
	I0829 20:25:45.800480   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:45.800858   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:45.800885   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:45.800840   68412 retry.go:31] will retry after 1.189775318s: waiting for machine to come up
	I0829 20:25:46.992687   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:46.993155   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:46.993189   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:46.993142   68412 retry.go:31] will retry after 1.467244635s: waiting for machine to come up
	I0829 20:25:48.462770   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:48.463201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:48.463226   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:48.463173   68412 retry.go:31] will retry after 1.602764839s: waiting for machine to come up
	I0829 20:25:50.067082   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:50.067608   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:50.067638   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:50.067543   68412 retry.go:31] will retry after 1.562244323s: waiting for machine to come up
	I0829 20:25:51.632201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:51.632705   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:51.632731   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:51.632650   68412 retry.go:31] will retry after 1.747220365s: waiting for machine to come up
	I0829 20:25:53.382010   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:53.382463   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:53.382527   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:53.382454   68412 retry.go:31] will retry after 3.446054845s: waiting for machine to come up
	I0829 20:25:56.830511   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:56.830954   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:56.830988   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:56.830908   68412 retry.go:31] will retry after 4.53995219s: waiting for machine to come up
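The retry.go lines above show the libmachine wait loop: each failed IP lookup schedules another probe after a delay that grows, with jitter, until the DHCP lease appears or the overall deadline passes. A minimal Go sketch of that pattern, assuming illustrative names (this is not minikube's actual API):

package sketch // illustrative only

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForIP probes for the domain's IP and, on failure, sleeps for a
// growing jittered delay, which is why the "will retry after" intervals
// above step from ~200ms up toward several seconds.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	for deadline := time.Now().Add(maxWait); time.Now().Before(deadline); {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		if delay < 4*time.Second {
			delay *= 2 // cap the growth so probes keep coming
		}
	}
	return "", fmt.Errorf("timed out waiting for machine IP")
}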
	I0829 20:26:02.603329   67607 start.go:364] duration metric: took 3m23.680319578s to acquireMachinesLock for "old-k8s-version-032002"
	I0829 20:26:02.603393   67607 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:02.603404   67607 fix.go:54] fixHost starting: 
	I0829 20:26:02.603837   67607 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:02.603884   67607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:02.621398   67607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35977
	I0829 20:26:02.621840   67607 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:02.622425   67607 main.go:141] libmachine: Using API Version  1
	I0829 20:26:02.622460   67607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:02.622810   67607 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:02.623040   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:02.623201   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetState
	I0829 20:26:02.624854   67607 fix.go:112] recreateIfNeeded on old-k8s-version-032002: state=Stopped err=<nil>
	I0829 20:26:02.624880   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	W0829 20:26:02.625020   67607 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:02.627161   67607 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-032002" ...
	I0829 20:26:02.628419   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .Start
	I0829 20:26:02.628578   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring networks are active...
	I0829 20:26:02.629339   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network default is active
	I0829 20:26:02.629732   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network mk-old-k8s-version-032002 is active
	I0829 20:26:02.630188   67607 main.go:141] libmachine: (old-k8s-version-032002) Getting domain xml...
	I0829 20:26:02.630924   67607 main.go:141] libmachine: (old-k8s-version-032002) Creating domain...
	I0829 20:26:01.375542   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.375928   66989 main.go:141] libmachine: (embed-certs-388383) Found IP for machine: 192.168.61.202
	I0829 20:26:01.375951   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has current primary IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.375974   66989 main.go:141] libmachine: (embed-certs-388383) Reserving static IP address...
	I0829 20:26:01.376364   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "embed-certs-388383", mac: "52:54:00:6c:5a:0c", ip: "192.168.61.202"} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.376398   66989 main.go:141] libmachine: (embed-certs-388383) DBG | skip adding static IP to network mk-embed-certs-388383 - found existing host DHCP lease matching {name: "embed-certs-388383", mac: "52:54:00:6c:5a:0c", ip: "192.168.61.202"}
	I0829 20:26:01.376411   66989 main.go:141] libmachine: (embed-certs-388383) Reserved static IP address: 192.168.61.202
	I0829 20:26:01.376428   66989 main.go:141] libmachine: (embed-certs-388383) Waiting for SSH to be available...
	I0829 20:26:01.376445   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Getting to WaitForSSH function...
	I0829 20:26:01.378600   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.378899   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.378937   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.379065   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Using SSH client type: external
	I0829 20:26:01.379088   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa (-rw-------)
	I0829 20:26:01.379118   66989 main.go:141] libmachine: (embed-certs-388383) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:01.379132   66989 main.go:141] libmachine: (embed-certs-388383) DBG | About to run SSH command:
	I0829 20:26:01.379141   66989 main.go:141] libmachine: (embed-certs-388383) DBG | exit 0
	I0829 20:26:01.498736   66989 main.go:141] libmachine: (embed-certs-388383) DBG | SSH cmd err, output: <nil>: 
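The "Using SSH client type: external" lines spell out the exact ssh(1) invocation: strict host-key checking off, a throwaway known-hosts file, password auth disabled, identity forced to the machine's id_rsa. A hedged sketch of driving the same invocation from Go with os/exec (helper name is illustrative; option order is normalized to options-before-destination):

package sketch // illustrative only

import (
	"fmt"
	"os/exec"
)

// externalSSH runs one command over ssh(1) with the hardening flags logged
// above: no host-key prompts, no password auth, key-only identity.
func externalSSH(ip, keyPath, command string) ([]byte, error) {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("docker@%s", ip),
		command,
	}
	return exec.Command("/usr/bin/ssh", args...).CombinedOutput()
}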
	I0829 20:26:01.499103   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetConfigRaw
	I0829 20:26:01.499700   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:01.502022   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.502332   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.502362   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.502586   66989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/config.json ...
	I0829 20:26:01.502778   66989 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:01.502795   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:01.502980   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.505156   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.505452   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.505473   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.505590   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.505739   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.505902   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.506038   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.506183   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.506366   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.506376   66989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:01.602691   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:01.602721   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.603002   66989 buildroot.go:166] provisioning hostname "embed-certs-388383"
	I0829 20:26:01.603033   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.603232   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.605841   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.606170   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.606201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.606333   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.606505   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.606672   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.606786   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.606950   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.607121   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.607144   66989 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-388383 && echo "embed-certs-388383" | sudo tee /etc/hostname
	I0829 20:26:01.717669   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-388383
	
	I0829 20:26:01.717709   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.720400   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.720705   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.720733   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.720863   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.721097   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.721280   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.721446   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.721585   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.721811   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.721842   66989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-388383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-388383/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-388383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:01.827800   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
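Hostname provisioning above is three SSH round-trips: read the current name, set it with hostname/tee, then patch the 127.0.1.1 entry so the name resolves locally. A sketch that renders the same /etc/hosts fixup script (hypothetical helper; the script body is verbatim from the log):

package sketch // illustrative only

import "fmt"

// hostsFixupCmd reproduces the shell run over SSH above: rewrite (or
// append) the 127.0.1.1 line in /etc/hosts for the machine's hostname.
func hostsFixupCmd(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
}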
	I0829 20:26:01.827835   66989 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:01.827869   66989 buildroot.go:174] setting up certificates
	I0829 20:26:01.827882   66989 provision.go:84] configureAuth start
	I0829 20:26:01.827894   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.828214   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:01.830619   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.831150   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.831184   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.831339   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.833642   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.833961   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.833987   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.834161   66989 provision.go:143] copyHostCerts
	I0829 20:26:01.834217   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:01.834241   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:01.834322   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:01.834445   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:01.834457   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:01.834491   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:01.834608   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:01.834621   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:01.834660   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:01.834726   66989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.embed-certs-388383 san=[127.0.0.1 192.168.61.202 embed-certs-388383 localhost minikube]
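The configureAuth step regenerates the docker-machine-style server certificate, and the san=[...] list in the log is the load-bearing detail: TLS clients will dial the VM as localhost, by IP, and by profile name, so every one of those must be a Subject Alternative Name. A minimal crypto/x509 sketch of issuing such a cert; this assumes an already-parsed CA cert and key and is not minikube's actual code path:

package sketch // illustrative only

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// makeServerCert issues a server certificate carrying the same SANs as the
// san=[...] list logged above. The generated key would be persisted as
// server-key.pem; the returned DER becomes server.pem after PEM encoding.
func makeServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-388383"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-388383", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.202")},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}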
	I0829 20:26:01.992735   66989 provision.go:177] copyRemoteCerts
	I0829 20:26:01.992794   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:01.992819   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.995463   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.995835   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.995862   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.996006   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.996179   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.996333   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.996460   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.077017   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:02.105498   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 20:26:02.133974   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 20:26:02.161330   66989 provision.go:87] duration metric: took 333.435119ms to configureAuth
	I0829 20:26:02.161362   66989 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:02.161579   66989 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:02.161707   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.164373   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.164696   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.164724   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.164909   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.165111   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.165276   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.165402   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.165535   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:02.165697   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:02.165711   66989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:02.377994   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:02.378022   66989 machine.go:96] duration metric: took 875.231112ms to provisionDockerMachine
	I0829 20:26:02.378037   66989 start.go:293] postStartSetup for "embed-certs-388383" (driver="kvm2")
	I0829 20:26:02.378053   66989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:02.378078   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.378404   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:02.378432   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.380920   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.381329   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.381358   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.381564   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.381797   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.381975   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.382124   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.461053   66989 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:02.465391   66989 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:02.465417   66989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:02.465479   66989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:02.465550   66989 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:02.465635   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:02.474909   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:02.500025   66989 start.go:296] duration metric: took 121.973853ms for postStartSetup
	I0829 20:26:02.500064   66989 fix.go:56] duration metric: took 20.86974885s for fixHost
	I0829 20:26:02.500082   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.502976   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.503380   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.503411   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.503599   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.503808   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.503976   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.504126   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.504283   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:02.504459   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:02.504469   66989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:02.603161   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963162.568310162
	
	I0829 20:26:02.603181   66989 fix.go:216] guest clock: 1724963162.568310162
	I0829 20:26:02.603187   66989 fix.go:229] Guest: 2024-08-29 20:26:02.568310162 +0000 UTC Remote: 2024-08-29 20:26:02.500067292 +0000 UTC m=+288.185978445 (delta=68.24287ms)
	I0829 20:26:02.603210   66989 fix.go:200] guest clock delta is within tolerance: 68.24287ms
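The fix.go clock check above runs `date +%s.%N` on the guest and compares it with the host's wall clock, accepting the ~68ms delta as within tolerance. Parsing that output in Go could look like the following (hypothetical helper; float64 rounds away nanoseconds at epoch scale, which is fine for a millisecond tolerance):

package sketch // illustrative only

import (
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses `date +%s.%N` output and returns how far the
// guest clock trails (positive) or leads (negative) the host clock.
func guestClockDelta(out string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}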
	I0829 20:26:02.603216   66989 start.go:83] releasing machines lock for "embed-certs-388383", held for 20.972921408s
	I0829 20:26:02.603248   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.603532   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:02.606426   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.606804   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.606834   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.607021   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607527   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607694   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607770   66989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:02.607809   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.607878   66989 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:02.607896   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.610239   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610264   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610657   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.610685   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610723   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.610742   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610844   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.611014   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.611014   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.611145   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.611208   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.611268   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.611341   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.611399   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.712435   66989 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:02.718614   66989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:02.865138   66989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:02.871510   66989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:02.871593   66989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:02.887316   66989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
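Before picking a runtime, minikube neutralizes stock CNI configs: the find/mv pipeline above renames bridge and podman conflists with a .mk_disabled suffix so CRI-O only sees the network minikube manages. A sketch, where run is a hypothetical stand-in for minikube's ssh_runner and the command string is taken from the log:

package sketch // illustrative only

// disableStockCNI renames bridge/podman CNI configs under /etc/cni/net.d,
// mirroring the find/mv pipeline logged above.
func disableStockCNI(run func(cmd string) (string, error)) (string, error) {
	return run(`sudo find /etc/cni/net.d -maxdepth 1 -type f ` +
		`\( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) ` +
		`-printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;`)
}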
	I0829 20:26:02.887340   66989 start.go:495] detecting cgroup driver to use...
	I0829 20:26:02.887394   66989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:02.905024   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:02.918922   66989 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:02.918986   66989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:02.932660   66989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:02.946679   66989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:03.056273   66989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:03.216885   66989 docker.go:233] disabling docker service ...
	I0829 20:26:03.216959   66989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:03.231363   66989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:03.245609   66989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:03.368087   66989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:03.493947   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:03.508803   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:03.527542   66989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:26:03.527607   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.538301   66989 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:03.538370   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.549672   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.562203   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.573572   66989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:03.585031   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.596778   66989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.619405   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.630337   66989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:03.640492   66989 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:03.640568   66989 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:03.657931   66989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
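The status-255 "cannot stat" above is expected: the net.bridge.bridge-nf-call-iptables sysctl only exists once the br_netfilter module is loaded, so the failed probe is recovered with modprobe, and IPv4 forwarding is then enabled for pod traffic. A condensed sketch of that sequence (run is a hypothetical ssh_runner stand-in):

package sketch // illustrative only

// ensureNetfilter mirrors the three commands above: probe the bridge
// netfilter sysctl, load br_netfilter if the key is missing, then switch
// on IPv4 forwarding.
func ensureNetfilter(run func(cmd string) error) error {
	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		if err := run("sudo modprobe br_netfilter"); err != nil {
			return err
		}
	}
	return run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
}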
	I0829 20:26:03.673756   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:03.792856   66989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:03.880493   66989 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:03.880551   66989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:03.885793   66989 start.go:563] Will wait 60s for crictl version
	I0829 20:26:03.885850   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:26:03.889835   66989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:03.928633   66989 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:03.928702   66989 ssh_runner.go:195] Run: crio --version
	I0829 20:26:03.958861   66989 ssh_runner.go:195] Run: crio --version
	I0829 20:26:03.987724   66989 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:26:03.989009   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:03.991889   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:03.992308   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:03.992334   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:03.992567   66989 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:03.996945   66989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:04.009353   66989 kubeadm.go:883] updating cluster {Name:embed-certs-388383 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:04.009462   66989 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:26:04.009501   66989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:04.051583   66989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:26:04.051643   66989 ssh_runner.go:195] Run: which lz4
	I0829 20:26:04.055929   66989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:04.060214   66989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:04.060240   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 20:26:03.867691   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting to get IP...
	I0829 20:26:03.868798   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:03.869246   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:03.869318   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:03.869235   68552 retry.go:31] will retry after 220.928648ms: waiting for machine to come up
	I0829 20:26:04.091675   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.092057   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.092084   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.092020   68552 retry.go:31] will retry after 352.781755ms: waiting for machine to come up
	I0829 20:26:04.446766   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.447277   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.447301   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.447224   68552 retry.go:31] will retry after 480.96031ms: waiting for machine to come up
	I0829 20:26:04.929561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.930149   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.930181   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.930051   68552 retry.go:31] will retry after 415.057247ms: waiting for machine to come up
	I0829 20:26:05.346757   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.347224   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.347258   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.347196   68552 retry.go:31] will retry after 609.958508ms: waiting for machine to come up
	I0829 20:26:05.959227   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.959774   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.959825   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.959702   68552 retry.go:31] will retry after 680.801337ms: waiting for machine to come up
	I0829 20:26:06.642811   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:06.643312   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:06.643343   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:06.643269   68552 retry.go:31] will retry after 995.561322ms: waiting for machine to come up
	I0829 20:26:07.640147   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:07.640617   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:07.640652   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:07.640588   68552 retry.go:31] will retry after 1.22436043s: waiting for machine to come up
	I0829 20:26:05.472272   66989 crio.go:462] duration metric: took 1.416373513s to copy over tarball
	I0829 20:26:05.472355   66989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:07.583560   66989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.111164398s)
	I0829 20:26:07.583595   66989 crio.go:469] duration metric: took 2.111297179s to extract the tarball
	I0829 20:26:07.583605   66989 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 20:26:07.622447   66989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:07.671704   66989 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:26:07.671732   66989 cache_images.go:84] Images are preloaded, skipping loading
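The preload fast path above is: stat the tarball on the guest, scp the ~389MB archive if absent, untar into /var with extended attributes preserved (so file capabilities on the bundled binaries survive), then confirm via crictl that the images are present. A condensed sketch under those assumptions (function names are illustrative):

package sketch // illustrative only

// ensurePreload pushes the preloaded image tarball if the guest lacks it,
// unpacks it into /var keeping security.capability xattrs, then removes
// the tarball, matching the commands logged above.
func ensurePreload(run func(cmd string) error, scp func(src, dst string) error, local string) error {
	if err := run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
		if err := scp(local, "/preloaded.tar.lz4"); err != nil {
			return err
		}
	}
	if err := run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
		return err
	}
	return run("rm -f /preloaded.tar.lz4")
}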
	I0829 20:26:07.671742   66989 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.31.0 crio true true} ...
	I0829 20:26:07.671869   66989 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-388383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:07.671958   66989 ssh_runner.go:195] Run: crio config
	I0829 20:26:07.717217   66989 cni.go:84] Creating CNI manager for ""
	I0829 20:26:07.717242   66989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:07.717263   66989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:07.717290   66989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-388383 NodeName:embed-certs-388383 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:26:07.717465   66989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-388383"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:07.717549   66989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:26:07.727174   66989 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:07.727258   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:07.736512   66989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0829 20:26:07.752727   66989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:07.772430   66989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
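	The 2162-byte kubeadm.yaml.new copied here is the rendered kubeadm config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Recent kubeadm releases, including the v1.31 line used in this run, can sanity-check such a file before any init phase runs; a sketch using the binaries path from this log:
	
	    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml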
	I0829 20:26:07.793343   66989 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:07.798214   66989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
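	The /etc/hosts one-liner above, unrolled for readability: strip any stale control-plane.minikube.internal entry, append the fresh mapping, and swap the file in atomically via a temp copy (same command, just re-wrapped):
	
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      echo "192.168.61.202	control-plane.minikube.internal"
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts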
	I0829 20:26:07.811285   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:07.927025   66989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:07.943741   66989 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383 for IP: 192.168.61.202
	I0829 20:26:07.943765   66989 certs.go:194] generating shared ca certs ...
	I0829 20:26:07.943784   66989 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:07.943984   66989 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:07.944047   66989 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:07.944061   66989 certs.go:256] generating profile certs ...
	I0829 20:26:07.944177   66989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/client.key
	I0829 20:26:07.944254   66989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.key.03b29390
	I0829 20:26:07.944317   66989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.key
	I0829 20:26:07.944494   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:07.944538   66989 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:07.944551   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:07.944581   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:07.944605   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:07.944628   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:07.944670   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:07.945252   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:07.971277   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:08.012892   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:08.042038   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:08.067708   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0829 20:26:08.095930   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:26:08.127171   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:08.151287   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:26:08.175525   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:08.199076   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:08.222783   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:08.245783   66989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:08.261839   66989 ssh_runner.go:195] Run: openssl version
	I0829 20:26:08.267545   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:08.278347   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.284232   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.284283   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.292024   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:08.306831   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:08.320607   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.325027   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.325070   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.330808   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:08.341457   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:08.352323   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.356822   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.356891   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.362617   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
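	The `.0`-suffixed names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes: OpenSSL locates trusted CAs in /etc/ssl/certs by `<hash>.0`, which is why each certificate is first hashed and then symlinked. The same two steps by hand:
	
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"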
	I0829 20:26:08.373755   66989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:08.378153   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:08.384225   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:08.390136   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:08.396002   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:08.401713   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:08.407437   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
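	Each `-checkend 86400` probe above exits 0 only if the certificate remains valid for at least the next 86400 seconds (24 hours); a non-zero exit is what triggers certificate regeneration. In isolation:
	
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	        echo "apiserver.crt valid for at least 24h"
	    else
	        echo "apiserver.crt expires within 24h"
	    fi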
	I0829 20:26:08.413033   66989 kubeadm.go:392] StartCluster: {Name:embed-certs-388383 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:08.413119   66989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:08.413173   66989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:08.450685   66989 cri.go:89] found id: ""
	I0829 20:26:08.450757   66989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:08.460787   66989 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:08.460809   66989 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:08.460853   66989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:08.470179   66989 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:08.471673   66989 kubeconfig.go:125] found "embed-certs-388383" server: "https://192.168.61.202:8443"
	I0829 20:26:08.474839   66989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:08.483951   66989 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0829 20:26:08.483992   66989 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:08.484007   66989 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:08.484085   66989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:08.525947   66989 cri.go:89] found id: ""
	I0829 20:26:08.526013   66989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:08.541862   66989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:08.551179   66989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:08.551200   66989 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:08.551249   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:26:08.559897   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:08.559970   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:08.569317   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:26:08.577858   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:08.577905   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:08.587113   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:26:08.595645   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:08.595705   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:08.604803   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:26:08.613070   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:08.613125   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:26:08.622037   66989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:08.631330   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:08.742682   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:08.866518   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:08.866954   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:08.866985   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:08.866896   68552 retry.go:31] will retry after 1.707701085s: waiting for machine to come up
	I0829 20:26:10.576676   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:10.577094   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:10.577124   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:10.577047   68552 retry.go:31] will retry after 1.496799212s: waiting for machine to come up
	I0829 20:26:12.075964   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:12.076412   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:12.076451   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:12.076377   68552 retry.go:31] will retry after 2.246779697s: waiting for machine to come up
	I0829 20:26:09.809078   66989 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.066360218s)
	I0829 20:26:09.809118   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.027517   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.095959   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
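	Untangled from the interleaved old-k8s-version output, the restart path replays a fixed subset of `kubeadm init` phases in order rather than running a full init (with `addon all` following later, once the apiserver reports healthy):
	
	    kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml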
	I0829 20:26:10.199656   66989 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:10.199745   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:10.700569   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:11.200798   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:11.700664   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.200052   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.700839   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.715319   66989 api_server.go:72] duration metric: took 2.515661322s to wait for apiserver process to appear ...
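	The process wait above leans on three pgrep flags: -f matches against the full command line, -x requires the pattern to match that whole line, and -n reports only the newest matching PID:
	
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'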
	I0829 20:26:12.715351   66989 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:26:12.715374   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.687527   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:15.687558   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:15.687572   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.716339   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:15.716365   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:15.716378   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.750700   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:15.750732   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:16.216255   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:16.224376   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:16.224401   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:16.715457   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:16.723983   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:16.724004   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:17.215562   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:17.219605   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0829 20:26:17.225473   66989 api_server.go:141] control plane version: v1.31.0
	I0829 20:26:17.225496   66989 api_server.go:131] duration metric: took 4.510137186s to wait for apiserver health ...
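	The 403 -> 500 -> 200 progression above is the normal restart sequence: anonymous probes are rejected until RBAC bootstraps, the 500s shrink as poststarthooks complete, and finally /healthz returns a plain "ok". A minimal standalone probe loop in Go (not minikube's actual checker; the address and the skip-verify TLS client are assumptions suited to a throwaway test VM):
	
	    package main
	
	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )
	
	    func main() {
	    	// Self-signed apiserver cert on a test VM, so skip verification here.
	    	client := &http.Client{
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    		Timeout:   5 * time.Second,
	    	}
	    	url := "https://192.168.61.202:8443/healthz" // address taken from this run
	    	for i := 0; i < 60; i++ {
	    		resp, err := client.Get(url)
	    		if err != nil {
	    			fmt.Println("not reachable yet:", err)
	    		} else {
	    			body, _ := io.ReadAll(resp.Body)
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				fmt.Println("healthy:", string(body)) // prints "ok"
	    				return
	    			}
	    			// 403: anonymous RBAC not bootstrapped yet; 500: poststarthooks pending.
	    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	fmt.Println("gave up waiting for apiserver")
	    }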
	I0829 20:26:17.225504   66989 cni.go:84] Creating CNI manager for ""
	I0829 20:26:17.225509   66989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:17.227379   66989 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:26:14.324452   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:14.324770   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:14.324808   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:14.324748   68552 retry.go:31] will retry after 3.172592587s: waiting for machine to come up
	I0829 20:26:17.500203   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:17.500540   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:17.500573   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:17.500485   68552 retry.go:31] will retry after 2.81386002s: waiting for machine to come up
	I0829 20:26:17.228505   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:26:17.238762   66989 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
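	The 496-byte 1-k8s.conflist written here wires pods onto the standard CNI bridge plugin. An illustrative conflist of that shape, using the pod CIDR from this run (not the exact bytes minikube generates):
	
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        {
	          "type": "portmap",
	          "capabilities": { "portMappings": true }
	        }
	      ]
	    }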
	I0829 20:26:17.264380   66989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:26:17.274981   66989 system_pods.go:59] 8 kube-system pods found
	I0829 20:26:17.275009   66989 system_pods.go:61] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:26:17.275016   66989 system_pods.go:61] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:26:17.275023   66989 system_pods.go:61] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:26:17.275028   66989 system_pods.go:61] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:26:17.275033   66989 system_pods.go:61] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:26:17.275038   66989 system_pods.go:61] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:26:17.275043   66989 system_pods.go:61] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:26:17.275048   66989 system_pods.go:61] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:26:17.275056   66989 system_pods.go:74] duration metric: took 10.656426ms to wait for pod list to return data ...
	I0829 20:26:17.275074   66989 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:26:17.279480   66989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:26:17.279504   66989 node_conditions.go:123] node cpu capacity is 2
	I0829 20:26:17.279519   66989 node_conditions.go:105] duration metric: took 4.439469ms to run NodePressure ...
	I0829 20:26:17.279537   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:17.561282   66989 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:26:17.565287   66989 kubeadm.go:739] kubelet initialised
	I0829 20:26:17.565307   66989 kubeadm.go:740] duration metric: took 4.002605ms waiting for restarted kubelet to initialise ...
	I0829 20:26:17.565314   66989 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:17.570104   66989 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.576425   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.576454   66989 pod_ready.go:82] duration metric: took 6.324083ms for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.576464   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.576474   66989 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.582501   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "etcd-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.582523   66989 pod_ready.go:82] duration metric: took 6.040325ms for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.582547   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "etcd-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.582556   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.588534   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.588554   66989 pod_ready.go:82] duration metric: took 5.988678ms for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.588562   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.588568   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.668334   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.668365   66989 pod_ready.go:82] duration metric: took 79.787211ms for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.668378   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.668386   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.068248   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-proxy-fcxs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.068286   66989 pod_ready.go:82] duration metric: took 399.880238ms for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.068299   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-proxy-fcxs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.068308   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.468096   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.468126   66989 pod_ready.go:82] duration metric: took 399.810823ms for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.468134   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.468141   66989 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.868444   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.868478   66989 pod_ready.go:82] duration metric: took 400.329102ms for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.868490   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.868499   66989 pod_ready.go:39] duration metric: took 1.303176044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:18.868519   66989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:26:18.880892   66989 ops.go:34] apiserver oom_adj: -16
	I0829 20:26:18.880916   66989 kubeadm.go:597] duration metric: took 10.42010114s to restartPrimaryControlPlane
	I0829 20:26:18.880925   66989 kubeadm.go:394] duration metric: took 10.467899141s to StartCluster
	I0829 20:26:18.880946   66989 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:18.881032   66989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:26:18.884130   66989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:18.884619   66989 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:26:18.884674   66989 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:26:18.884749   66989 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-388383"
	I0829 20:26:18.884765   66989 addons.go:69] Setting default-storageclass=true in profile "embed-certs-388383"
	I0829 20:26:18.884783   66989 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-388383"
	W0829 20:26:18.884792   66989 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:26:18.884804   66989 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-388383"
	I0829 20:26:18.884816   66989 addons.go:69] Setting metrics-server=true in profile "embed-certs-388383"
	I0829 20:26:18.884828   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.884856   66989 addons.go:234] Setting addon metrics-server=true in "embed-certs-388383"
	W0829 20:26:18.884877   66989 addons.go:243] addon metrics-server should already be in state true
	I0829 20:26:18.884884   66989 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:18.884912   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.885134   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885176   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.885216   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885249   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.885291   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885338   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.886484   66989 out.go:177] * Verifying Kubernetes components...
	I0829 20:26:18.887938   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:18.900910   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I0829 20:26:18.901377   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.901917   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.901938   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.902300   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.903062   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.903110   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.903810   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0829 20:26:18.903824   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0829 20:26:18.904282   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.904303   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.904673   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.904691   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.904829   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.904845   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.905017   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.905428   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.905462   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.905664   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.905860   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.909388   66989 addons.go:234] Setting addon default-storageclass=true in "embed-certs-388383"
	W0829 20:26:18.909408   66989 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:26:18.909437   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.909793   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.909839   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.921180   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
	I0829 20:26:18.921597   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.922074   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.922087   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.922470   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.922697   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.922725   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I0829 20:26:18.923052   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.923592   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.923610   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.923919   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.924057   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.924063   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45681
	I0829 20:26:18.924461   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.924519   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.924984   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.925002   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.925632   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.925682   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.926152   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.926194   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.926494   66989 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:18.927266   66989 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:26:18.928130   66989 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:26:18.928141   66989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:26:18.928155   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.928843   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:26:18.928863   66989 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:26:18.928888   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.931716   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932273   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.932296   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932424   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932456   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.932644   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.932810   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.932869   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.932891   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.933050   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.933100   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:18.933271   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.933426   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.933598   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:18.942718   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38109
	I0829 20:26:18.943150   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.943532   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.943553   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.943908   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.944027   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.945304   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.945498   66989 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:26:18.945510   66989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:26:18.945522   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.948108   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.948469   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.948494   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.948730   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.948889   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.949085   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.949222   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:19.111953   66989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:19.131195   66989 node_ready.go:35] waiting up to 6m0s for node "embed-certs-388383" to be "Ready" ...
	I0829 20:26:19.246857   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:26:19.269511   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:26:19.269670   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:26:19.269691   66989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:26:19.346200   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:26:19.346234   66989 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:26:19.374530   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:26:19.374566   66989 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:26:19.418474   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:26:20.495022   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.225476769s)
	I0829 20:26:20.495077   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495090   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495185   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.248286753s)
	I0829 20:26:20.495232   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495249   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495572   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.495600   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.495611   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495619   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495634   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.495663   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.495664   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.495678   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495688   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.496014   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.496029   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.496061   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.496097   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.496111   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.504149   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.504182   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.504419   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.504436   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.519341   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.100829284s)
	I0829 20:26:20.519396   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.519422   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.519670   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.519716   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.519734   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.519746   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.519755   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.520040   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.520055   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.520072   66989 addons.go:475] Verifying addon metrics-server=true in "embed-certs-388383"
	I0829 20:26:20.523102   66989 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
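
The sequence above is minikube's addon-install pattern: each manifest is scp'd into /etc/kubernetes/addons on the guest, then the whole set is applied in a single kubectl invocation over SSH. A minimal Go sketch of that pattern, assuming a hypothetical runSSH helper in place of minikube's ssh_runner (key handling and sudo indirection simplified):

package main

import (
	"fmt"
	"os/exec"
)

// runSSH executes a command on the guest over ssh and returns its combined
// output. Stand-in for minikube's ssh_runner; host/key plumbing is elided.
func runSSH(host string, args ...string) ([]byte, error) {
	return exec.Command("ssh", append([]string{host}, args...)...).CombinedOutput()
}

// applyAddons mirrors the single `kubectl apply -f a.yaml -f b.yaml ...`
// call seen in the log above.
func applyAddons(host, kubectl string, manifests []string) error {
	args := []string{"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	if out, err := runSSH(host, args...); err != nil {
		return fmt.Errorf("apply addons: %v: %s", err, out)
	}
	return nil
}

func main() {
	_ = applyAddons("docker@192.168.61.202",
		"/var/lib/minikube/binaries/v1.31.0/kubectl",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		})
}
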
	I0829 20:26:21.515365   68084 start.go:364] duration metric: took 2m4.795762476s to acquireMachinesLock for "default-k8s-diff-port-145096"
	I0829 20:26:21.515428   68084 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:21.515439   68084 fix.go:54] fixHost starting: 
	I0829 20:26:21.515864   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:21.515904   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:21.535441   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0829 20:26:21.535886   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:21.536390   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:26:21.536414   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:21.536819   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:21.537035   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:21.537203   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:26:21.538735   68084 fix.go:112] recreateIfNeeded on default-k8s-diff-port-145096: state=Stopped err=<nil>
	I0829 20:26:21.538762   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	W0829 20:26:21.538901   68084 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:21.540852   68084 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-145096" ...
	I0829 20:26:21.542258   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Start
	I0829 20:26:21.542429   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring networks are active...
	I0829 20:26:21.543181   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring network default is active
	I0829 20:26:21.543522   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring network mk-default-k8s-diff-port-145096 is active
	I0829 20:26:21.543872   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Getting domain xml...
	I0829 20:26:21.544627   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Creating domain...
	I0829 20:26:20.317138   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317672   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has current primary IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317700   67607 main.go:141] libmachine: (old-k8s-version-032002) Found IP for machine: 192.168.39.116
	I0829 20:26:20.317716   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserving static IP address...
	I0829 20:26:20.318143   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.318169   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserved static IP address: 192.168.39.116
	I0829 20:26:20.318189   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | skip adding static IP to network mk-old-k8s-version-032002 - found existing host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"}
	I0829 20:26:20.318208   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Getting to WaitForSSH function...
	I0829 20:26:20.318217   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting for SSH to be available...
	I0829 20:26:20.320598   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.320961   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.320989   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.321082   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH client type: external
	I0829 20:26:20.321121   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa (-rw-------)
	I0829 20:26:20.321156   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:20.321171   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | About to run SSH command:
	I0829 20:26:20.321185   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | exit 0
	I0829 20:26:20.446805   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | SSH cmd err, output: <nil>: 
	I0829 20:26:20.447204   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetConfigRaw
	I0829 20:26:20.447944   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.450726   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.451160   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451464   67607 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/config.json ...
	I0829 20:26:20.451670   67607 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:20.451690   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:20.451886   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.454120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454496   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.454566   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454648   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.454808   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.454975   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.455123   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.455282   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.455520   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.455533   67607 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:20.555074   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:20.555100   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555331   67607 buildroot.go:166] provisioning hostname "old-k8s-version-032002"
	I0829 20:26:20.555353   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555540   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.558576   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559058   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.559086   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559273   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.559490   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559661   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559834   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.560026   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.560189   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.560201   67607 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-032002 && echo "old-k8s-version-032002" | sudo tee /etc/hostname
	I0829 20:26:20.675352   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-032002
	
	I0829 20:26:20.675400   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.678472   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.678908   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.678944   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.679139   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.679341   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679533   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679710   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.679884   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.680090   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.680108   67607 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-032002' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-032002/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-032002' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:20.789673   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:20.789713   67607 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:20.789744   67607 buildroot.go:174] setting up certificates
	I0829 20:26:20.789753   67607 provision.go:84] configureAuth start
	I0829 20:26:20.789761   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.790067   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.792822   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793152   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.793173   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793338   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.795624   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.795948   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.795974   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.796080   67607 provision.go:143] copyHostCerts
	I0829 20:26:20.796148   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:20.796168   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:20.796236   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:20.796344   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:20.796355   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:20.796387   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:20.796467   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:20.796476   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:20.796503   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:20.796573   67607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-032002 san=[127.0.0.1 192.168.39.116 localhost minikube old-k8s-version-032002]
	I0829 20:26:20.906382   67607 provision.go:177] copyRemoteCerts
	I0829 20:26:20.906436   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:20.906466   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.909180   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909488   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.909519   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909666   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.909831   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.909963   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.910062   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:20.989017   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:21.018571   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 20:26:21.043015   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:21.067288   67607 provision.go:87] duration metric: took 277.522292ms to configureAuth
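
copyRemoteCerts above pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A simplified sketch of that copy loop, assuming a hypothetical scpToGuest helper (the real ssh_runner stages files with sudo rather than scp-ing straight into /etc):

package main

import (
	"fmt"
	"os/exec"
)

// scpToGuest copies one local file to a path on the guest. Hypothetical
// helper standing in for ssh_runner's scp step.
func scpToGuest(keyPath, host, local, remote string) error {
	out, err := exec.Command("scp", "-i", keyPath, local,
		fmt.Sprintf("%s:%s", host, remote)).CombinedOutput()
	if err != nil {
		return fmt.Errorf("scp %s -> %s: %v: %s", local, remote, err, out)
	}
	return nil
}

func main() {
	certs := map[string]string{ // local path -> guest path, as in the log
		".minikube/certs/ca.pem":            "/etc/docker/ca.pem",
		".minikube/machines/server.pem":     "/etc/docker/server.pem",
		".minikube/machines/server-key.pem": "/etc/docker/server-key.pem",
	}
	for local, remote := range certs {
		if err := scpToGuest("id_rsa", "docker@192.168.39.116", local, remote); err != nil {
			fmt.Println(err)
		}
	}
}
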
	I0829 20:26:21.067322   67607 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:21.067527   67607 config.go:182] Loaded profile config "old-k8s-version-032002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 20:26:21.067607   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.070264   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070642   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.070679   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070881   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.071088   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071288   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071465   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.071661   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.071886   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.071923   67607 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:21.290979   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:21.291003   67607 machine.go:96] duration metric: took 839.319831ms to provisionDockerMachine
	I0829 20:26:21.291014   67607 start.go:293] postStartSetup for "old-k8s-version-032002" (driver="kvm2")
	I0829 20:26:21.291026   67607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:21.291046   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.291342   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:21.291366   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.293946   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294245   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.294273   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294464   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.294686   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.294840   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.294964   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.373592   67607 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:21.377797   67607 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:21.377826   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:21.377892   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:21.377966   67607 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:21.378054   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:21.387886   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:21.413456   67607 start.go:296] duration metric: took 122.429334ms for postStartSetup
	I0829 20:26:21.413497   67607 fix.go:56] duration metric: took 18.810093949s for fixHost
	I0829 20:26:21.413522   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.416095   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416391   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.416418   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416594   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.416803   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.416970   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.417115   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.417272   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.417474   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.417489   67607 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:21.515167   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963181.486447470
	
	I0829 20:26:21.515190   67607 fix.go:216] guest clock: 1724963181.486447470
	I0829 20:26:21.515200   67607 fix.go:229] Guest: 2024-08-29 20:26:21.48644747 +0000 UTC Remote: 2024-08-29 20:26:21.413502498 +0000 UTC m=+222.629982255 (delta=72.944972ms)
	I0829 20:26:21.515225   67607 fix.go:200] guest clock delta is within tolerance: 72.944972ms
	I0829 20:26:21.515232   67607 start.go:83] releasing machines lock for "old-k8s-version-032002", held for 18.911866017s
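
fixHost ends by comparing the guest clock (read with `date +%s.%N`) against the host's view of the same instant; the 72.944972ms delta above passes the tolerance check. A sketch of that comparison using the timestamps from the log (the 1s threshold is an assumption for the sketch, not minikube's exact constant):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether guest/host clock skew is acceptable,
// mirroring the "guest clock delta is within tolerance" check above.
func withinTolerance(guest, remote time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(1724963181, 486447470)  // parsed from `date +%s.%N`
	remote := time.Unix(1724963181, 413502498) // host-side timestamp
	fmt.Println(withinTolerance(guest, remote, time.Second)) // true: delta ≈ 72.9ms
}
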
	I0829 20:26:21.515278   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.515596   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:21.518247   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518682   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.518710   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518835   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519589   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519680   67607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:21.519736   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.519843   67607 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:21.519869   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.522261   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522614   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.522643   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522763   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.522919   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523044   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.523071   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523073   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.523241   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.523240   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.523413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523560   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523712   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.599524   67607 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:21.629122   67607 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:21.778437   67607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:21.784642   67607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:21.784714   67607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:21.802019   67607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:21.802043   67607 start.go:495] detecting cgroup driver to use...
	I0829 20:26:21.802100   67607 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:21.817407   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:21.831514   67607 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:21.831578   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:21.845224   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:21.858522   67607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:21.972769   67607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:22.115154   67607 docker.go:233] disabling docker service ...
	I0829 20:26:22.115240   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:22.130015   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:22.143186   67607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:22.294113   67607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:22.432373   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:22.446427   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:22.465151   67607 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 20:26:22.465218   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.476104   67607 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:22.476177   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.486627   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.497782   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.509869   67607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
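
The cri-o configuration above is done with in-place sed edits to 02-crio.conf: repoint pause_image, force the cgroupfs cgroup manager, and pin conmon_cgroup. A sketch assembling those same edits (run locally here for brevity; minikube issues them over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// sedInPlace runs one `sudo sed -i` edit against a config file, the same
// shape as the sed invocations in the log above.
func sedInPlace(expr, file string) error {
	out, err := exec.Command("sh", "-c",
		fmt.Sprintf("sudo sed -i '%s' %s", expr, file)).CombinedOutput()
	if err != nil {
		return fmt.Errorf("sed %q: %v: %s", expr, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	edits := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
		`/conmon_cgroup = .*/d`,
		`/cgroup_manager = .*/a conmon_cgroup = "pod"`,
	}
	for _, e := range edits {
		if err := sedInPlace(e, conf); err != nil {
			fmt.Println(err)
		}
	}
}
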
	I0829 20:26:22.521347   67607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:22.531406   67607 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:22.531455   67607 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:22.544949   67607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
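
The status-255 sysctl above is expected: /proc/sys/net/bridge is absent until br_netfilter is loaded, so minikube falls back to modprobe. A sketch of that probe-then-load fallback:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback in the log: probe the sysctl
// first, and only modprobe br_netfilter when the key is missing.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl",
		"net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // module already loaded, nothing to do
	}
	// "cannot stat /proc/sys/net/bridge/..." -> load the module.
	if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}
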
	I0829 20:26:22.554918   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:22.687909   67607 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:22.808522   67607 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:22.808595   67607 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:22.814348   67607 start.go:563] Will wait 60s for crictl version
	I0829 20:26:22.814411   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:22.818348   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:22.863797   67607 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:22.863883   67607 ssh_runner.go:195] Run: crio --version
	I0829 20:26:22.893173   67607 ssh_runner.go:195] Run: crio --version
	I0829 20:26:22.923146   67607 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 20:26:22.924299   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:22.927222   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927564   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:22.927589   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927772   67607 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:22.932100   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:22.945139   67607 kubeadm.go:883] updating cluster {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:22.945274   67607 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 20:26:22.945334   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:22.990592   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:22.990668   67607 ssh_runner.go:195] Run: which lz4
	I0829 20:26:22.995104   67607 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:22.999667   67607 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:22.999703   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
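
The stat probe above exits non-zero, so the cached preload tarball (~473 MB) is copied to the guest before extraction. A sketch of that check-then-copy step, simplified to plain ssh/scp (the real ssh_runner stages the file with sudo rather than writing to / directly):

package main

import (
	"fmt"
	"os/exec"
)

// preloadPresent mirrors the `stat -c "%s %y" /preloaded.tar.lz4` existence
// check from the log: a non-zero exit means "not there yet".
func preloadPresent(host string) bool {
	return exec.Command("ssh", host, "stat", "-c", "%s %y", "/preloaded.tar.lz4").Run() == nil
}

func main() {
	host := "docker@192.168.39.116"
	if !preloadPresent(host) {
		// scp the cached tarball over, as the log does.
		out, err := exec.Command("scp",
			".minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4",
			host+":/preloaded.tar.lz4").CombinedOutput()
		if err != nil {
			fmt.Printf("scp preload: %v: %s\n", err, out)
		}
	}
}
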
	I0829 20:26:20.524280   66989 addons.go:510] duration metric: took 1.639608208s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0829 20:26:21.135090   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:23.136839   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
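
node_ready.go above polls the node object until its Ready condition turns True, with a 6m budget. A stdlib sketch of that wait, shelling out to kubectl (the 2s poll interval is an assumption for the sketch):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitNodeReady polls `kubectl get node` until the Ready condition reports
// True, mirroring the node_ready.go loop in the log above.
func waitNodeReady(ctx context.Context, node string) error {
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		out, err := exec.Command("kubectl", "get", "node", node, "-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %v", node, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, "embed-certs-388383"))
}
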
	I0829 20:26:22.825998   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting to get IP...
	I0829 20:26:22.827278   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:22.827766   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:22.827883   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:22.827750   68757 retry.go:31] will retry after 212.207753ms: waiting for machine to come up
	I0829 20:26:23.041113   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.041553   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.041588   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.041508   68757 retry.go:31] will retry after 291.9464ms: waiting for machine to come up
	I0829 20:26:23.335081   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.336072   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.336121   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.336041   68757 retry.go:31] will retry after 478.578755ms: waiting for machine to come up
	I0829 20:26:23.816669   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.817178   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.817233   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.817087   68757 retry.go:31] will retry after 501.093836ms: waiting for machine to come up
	I0829 20:26:24.319836   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.320392   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.320418   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:24.320343   68757 retry.go:31] will retry after 524.430407ms: waiting for machine to come up
	I0829 20:26:24.846908   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.847388   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.847418   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:24.847361   68757 retry.go:31] will retry after 701.573237ms: waiting for machine to come up
	I0829 20:26:25.550328   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:25.550786   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:25.550811   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:25.550727   68757 retry.go:31] will retry after 916.084079ms: waiting for machine to come up
	I0829 20:26:26.468529   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:26.468981   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:26.469012   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:26.468921   68757 retry.go:31] will retry after 1.216322833s: waiting for machine to come up
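
The DHCP wait above retries with growing, jittered delays (212ms, 291ms, 478ms, ... 1.2s). A sketch of that retry-with-backoff pattern; the 1.5x growth and jitter range are illustrative, not minikube's exact constants:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryBackoff retries fn with a jittered, growing delay, like the
// retry.go "will retry after ..." lines above.
func retryBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay += delay / 2 // grow ~1.5x per attempt
	}
	return err
}

func main() {
	i := 0
	_ = retryBackoff(8, 200*time.Millisecond, func() error {
		if i++; i < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
}
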
	I0829 20:26:24.727216   67607 crio.go:462] duration metric: took 1.732148589s to copy over tarball
	I0829 20:26:24.727294   67607 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:27.715640   67607 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.988318238s)
	I0829 20:26:27.715664   67607 crio.go:469] duration metric: took 2.988419957s to extract the tarball
	I0829 20:26:27.715672   67607 ssh_runner.go:146] rm: /preloaded.tar.lz4
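
ssh_runner emits a "Completed: ...: (duration)" line for any command that ran long, as with the 2.99s tarball extraction above. A sketch of that timing wrapper (the 1s reporting threshold is an assumption for the sketch):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timedRun mirrors ssh_runner's duration reporting: run the command, and
// if it took long enough to matter, log how long it took.
func timedRun(name string, args ...string) error {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	if d := time.Since(start); d > time.Second {
		fmt.Printf("Completed: %s: (%s)\n", name, d)
	}
	return err
}

func main() {
	_ = timedRun("tar", "--help")
}
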
	I0829 20:26:27.764192   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:27.797388   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:27.797422   67607 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:26:27.797501   67607 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.797536   67607 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.797549   67607 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.797557   67607 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 20:26:27.797511   67607 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:27.797629   67607 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.797637   67607 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.797519   67607 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799128   67607 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799208   67607 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.799251   67607 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 20:26:27.799361   67607 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.799386   67607 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.799463   67607 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.799697   67607 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.799830   67607 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:27.978022   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.978296   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.981616   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.998987   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.001078   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.004185   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.004672   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 20:26:28.103885   67607 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 20:26:28.103953   67607 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.104013   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.122203   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:28.129983   67607 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 20:26:28.130028   67607 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.130076   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.165427   67607 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 20:26:28.165470   67607 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.165521   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.199971   67607 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 20:26:28.199990   67607 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 20:26:28.200015   67607 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.200021   67607 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200105   67607 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 20:26:28.200155   67607 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.200199   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200204   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200113   67607 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 20:26:28.200325   67607 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 20:26:28.200356   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.329091   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.329139   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.329187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.329260   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.329362   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.484805   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.484857   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.484888   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.484943   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.484963   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.485009   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.487351   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.615121   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.615187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.645371   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.645433   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.645524   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.645573   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.645638   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 20:26:28.729141   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 20:26:28.762530   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 20:26:28.762592   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 20:26:28.782117   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 20:26:28.782155   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 20:26:28.782195   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 20:26:28.782229   67607 cache_images.go:92] duration metric: took 984.791099ms to LoadCachedImages
	W0829 20:26:28.782293   67607 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
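
This block traces the cached-image fallback: the preload tarball did not contain the v1.20.0 images, so each image is looked up in the local Docker daemon (all lookups fail), inspected on the remote host with podman, removed via crictl when the expected hash is absent ("needs transfer"), and finally loaded from the on-disk cache, which here is also missing, hence the warning. A rough sketch of that per-image decision flow; the three helpers are hypothetical stand-ins for the ssh_runner-backed operations visible in the log:

package main

import "fmt"

// Hypothetical stand-ins for the remote operations in the log
// (podman image inspect, crictl rmi, load from the cache directory).
func remoteImageID(img string) (string, error) { return "", fmt.Errorf("not found") }
func removeRemoteImage(img string) error       { return nil }
func loadFromCache(img, cacheDir string) error {
	return fmt.Errorf("stat %s/%s: no such file or directory", cacheDir, img)
}

func loadCachedImages(images []string, wantHash map[string]string, cacheDir string) error {
	for _, img := range images {
		// "podman image inspect --format {{.Id}}" on the remote host.
		id, err := remoteImageID(img)
		if err == nil && id == wantHash[img] {
			continue // already present with the expected hash
		}
		// Image missing or stale: remove any old copy ("crictl rmi") ...
		fmt.Printf("%q needs transfer: does not exist at expected hash\n", img)
		_ = removeRemoteImage(img)
		// ... then load the tarball from the local image cache.
		if err := loadFromCache(img, cacheDir); err != nil {
			return fmt.Errorf("LoadCachedImages: %w", err)
		}
	}
	return nil
}

func main() {
	images := []string{"registry.k8s.io/kube-apiserver:v1.20.0"}
	cache := "/home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64"
	if err := loadCachedImages(images, map[string]string{}, cache); err != nil {
		fmt.Println("X Unable to load cached images:", err)
	}
}
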
	I0829 20:26:28.782310   67607 kubeadm.go:934] updating node { 192.168.39.116 8443 v1.20.0 crio true true} ...
	I0829 20:26:28.782452   67607 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-032002 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:28.782518   67607 ssh_runner.go:195] Run: crio config
	I0829 20:26:25.635616   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:26.635463   66989 node_ready.go:49] node "embed-certs-388383" has status "Ready":"True"
	I0829 20:26:26.635488   66989 node_ready.go:38] duration metric: took 7.504259002s for node "embed-certs-388383" to be "Ready" ...
	I0829 20:26:26.635497   66989 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:26.641316   66989 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:26.649602   66989 pod_ready.go:93] pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:26.649634   66989 pod_ready.go:82] duration metric: took 8.284428ms for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:26.649656   66989 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:28.658281   66989 pod_ready.go:103] pod "etcd-embed-certs-388383" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:27.686642   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:27.687071   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:27.687097   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:27.687030   68757 retry.go:31] will retry after 1.410599528s: waiting for machine to come up
	I0829 20:26:29.099622   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:29.100175   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:29.100207   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:29.100083   68757 retry.go:31] will retry after 1.929618787s: waiting for machine to come up
	I0829 20:26:31.031864   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:31.032434   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:31.032467   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:31.032367   68757 retry.go:31] will retry after 1.926271655s: waiting for machine to come up
	I0829 20:26:28.832785   67607 cni.go:84] Creating CNI manager for ""
	I0829 20:26:28.832807   67607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:28.832824   67607 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:28.832843   67607 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-032002 NodeName:old-k8s-version-032002 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 20:26:28.832982   67607 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-032002"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:28.833059   67607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 20:26:28.843483   67607 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:28.843566   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:28.853276   67607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 20:26:28.870579   67607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:28.888053   67607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0829 20:26:28.905988   67607 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:28.910048   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:28.924996   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:29.075015   67607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:29.095381   67607 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002 for IP: 192.168.39.116
	I0829 20:26:29.095411   67607 certs.go:194] generating shared ca certs ...
	I0829 20:26:29.095430   67607 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.095605   67607 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:29.095686   67607 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:29.095706   67607 certs.go:256] generating profile certs ...
	I0829 20:26:29.095847   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.key
	I0829 20:26:29.095928   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key.a1a2aebb
	I0829 20:26:29.095984   67607 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key
	I0829 20:26:29.096135   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:29.096184   67607 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:29.096198   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:29.096227   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:29.096259   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:29.096299   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:29.096378   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:29.097276   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:29.144259   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:29.171420   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:29.198554   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:29.230750   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 20:26:29.269978   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:26:29.299839   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:29.333742   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:26:29.358352   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:29.382648   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:29.406773   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:29.434106   67607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:29.451913   67607 ssh_runner.go:195] Run: openssl version
	I0829 20:26:29.457722   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:29.469147   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474048   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474094   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.480082   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:29.491083   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:29.501994   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508594   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508643   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.516331   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:29.531067   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:29.543998   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548781   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548845   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.555052   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
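
The openssl/ln pairs above install each CA certificate into the system trust store: OpenSSL finds CAs in /etc/ssl/certs via a symlink named after the subject-name hash (which `openssl x509 -hash -noout` prints, e.g. b5213941 for minikubeCA) plus a ".0" suffix. A small sketch of the same two steps from Go, shelling out to openssl just as the log does; the paths are taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA assumes certPath already sits under /usr/share/ca-certificates
// and creates the subject-hash symlink OpenSSL's lookup expects, mirroring
// the log's "openssl x509 -hash -noout" + "ln -fs" pair.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
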
	I0829 20:26:29.567902   67607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:29.572879   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:29.579506   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:29.585887   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:29.592262   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:29.598566   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:29.604672   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
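
Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. The equivalent check written against Go's crypto/x509, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside
// the given window, matching `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; regenerate")
	}
}
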
	I0829 20:26:29.610830   67607 kubeadm.go:392] StartCluster: {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:29.612915   67607 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:29.613015   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.655224   67607 cri.go:89] found id: ""
	I0829 20:26:29.655314   67607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:29.666216   67607 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:29.666241   67607 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:29.666292   67607 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:29.676908   67607 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:29.678276   67607 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-032002" does not appear in /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:26:29.679313   67607 kubeconfig.go:62] /home/jenkins/minikube-integration/19530-11185/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-032002" cluster setting kubeconfig missing "old-k8s-version-032002" context setting]
	I0829 20:26:29.680756   67607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.764872   67607 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:29.776873   67607 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.116
	I0829 20:26:29.776914   67607 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:29.776926   67607 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:29.776987   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.819268   67607 cri.go:89] found id: ""
	I0829 20:26:29.819347   67607 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:29.840386   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:29.851624   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:29.851650   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:29.851710   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:26:29.861439   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:29.861504   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:29.871594   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:26:29.881126   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:29.881199   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:29.890984   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.900838   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:29.900913   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.910677   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:26:29.920008   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:29.920073   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:26:29.929631   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:29.939864   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.096029   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.816696   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.043310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.139291   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
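
A restart skips a full `kubeadm init` and replays the individual phases against the regenerated config: certs, kubeconfig files, kubelet start, static control-plane manifests, and local etcd. A sketch of that sequence, with the binary and config paths taken from the log and the commands run via plain exec rather than the ssh_runner used by minikube:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	// Phase order mirrors the log; each runs as
	//   kubeadm init phase <phase...> --config <yaml>
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", config)
		out, err := exec.Command(kubeadm, args...).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
	fmt.Println("control plane phases completed")
}
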
	I0829 20:26:31.248095   67607 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:31.248190   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:31.749101   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.248718   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.748783   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.248254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.748557   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:30.180025   66989 pod_ready.go:93] pod "etcd-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:30.180056   66989 pod_ready.go:82] duration metric: took 3.530390258s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:30.180069   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.187272   66989 pod_ready.go:93] pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.187300   66989 pod_ready.go:82] duration metric: took 2.007222016s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.187313   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.192038   66989 pod_ready.go:93] pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.192062   66989 pod_ready.go:82] duration metric: took 4.740656ms for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.192075   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.196712   66989 pod_ready.go:93] pod "kube-proxy-fcxs4" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.196736   66989 pod_ready.go:82] duration metric: took 4.653538ms for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.196748   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.200491   66989 pod_ready.go:93] pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.200517   66989 pod_ready.go:82] duration metric: took 3.758002ms for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.200528   66989 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:34.207857   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
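
The pod_ready.go lines above poll each system-critical pod until its PodReady condition turns True; metrics-server is the one that keeps reporting "Ready":"False" here (which is what the MetricsServer-related test failures above hinge on). A client-go sketch of the same readiness signal; podReady is my own helper, and the kubeconfig path and pod name are taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True, the same
// signal pod_ready.go logs as status "Ready":"True"/"False".
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19530-11185/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := podReady(cs, "kube-system", "metrics-server-6867b74b74-mx5jh")
		if err == nil && ok {
			fmt.Println("Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
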
	I0829 20:26:32.960872   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:32.961256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:32.961284   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:32.961208   68757 retry.go:31] will retry after 2.304628323s: waiting for machine to come up
	I0829 20:26:35.267593   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:35.268009   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:35.268041   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:35.267970   68757 retry.go:31] will retry after 3.753063387s: waiting for machine to come up
	I0829 20:26:34.249231   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:34.748279   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.249171   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.748943   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.249181   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.748307   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.248484   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.748261   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.248332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.748423   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
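
The apiserver wait above is a fixed 500ms poll of `pgrep -xnf kube-apiserver.*minikube.*` until the process exists (pgrep exits 0 when at least one process matches). A sketch of that loop, assuming pgrep runs on the same host rather than over SSH:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep every 500ms, as the log does, until
// a kube-apiserver process matching the minikube pattern appears.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver process never appeared within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
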
	I0829 20:26:36.705814   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:38.708205   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:40.175557   66841 start.go:364] duration metric: took 53.54411059s to acquireMachinesLock for "no-preload-397724"
	I0829 20:26:40.175617   66841 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:40.175626   66841 fix.go:54] fixHost starting: 
	I0829 20:26:40.176060   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:40.176098   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:40.193828   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45897
	I0829 20:26:40.194231   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:40.194840   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:26:40.194867   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:40.195175   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:40.195364   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:40.195528   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:26:40.197109   66841 fix.go:112] recreateIfNeeded on no-preload-397724: state=Stopped err=<nil>
	I0829 20:26:40.197128   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	W0829 20:26:40.197278   66841 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:40.199263   66841 out.go:177] * Restarting existing kvm2 VM for "no-preload-397724" ...
	I0829 20:26:39.023902   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.024374   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Found IP for machine: 192.168.72.140
	I0829 20:26:39.024399   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has current primary IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.024413   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Reserving static IP address...
	I0829 20:26:39.024832   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Reserved static IP address: 192.168.72.140
	I0829 20:26:39.024856   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for SSH to be available...
	I0829 20:26:39.024894   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-145096", mac: "52:54:00:36:fe:e0", ip: "192.168.72.140"} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.024925   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | skip adding static IP to network mk-default-k8s-diff-port-145096 - found existing host DHCP lease matching {name: "default-k8s-diff-port-145096", mac: "52:54:00:36:fe:e0", ip: "192.168.72.140"}
	I0829 20:26:39.024947   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Getting to WaitForSSH function...
	I0829 20:26:39.026796   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.027100   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.027129   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.027265   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Using SSH client type: external
	I0829 20:26:39.027288   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa (-rw-------)
	I0829 20:26:39.027318   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:39.027333   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | About to run SSH command:
	I0829 20:26:39.027346   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | exit 0
	I0829 20:26:39.146830   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | SSH cmd err, output: <nil>: 
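
WaitForSSH probes the guest by running `exit 0` through the external ssh binary with host-key checking disabled and key-only auth (the full flag list is printed verbatim above); exit status 0 means sshd is up and the key is accepted, so provisioning can begin. A condensed sketch of that probe, using a subset of the logged options:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest with the same kinds of options the
// log shows; success means the VM is reachable for provisioning.
func sshReady(user, ip, keyPath string) bool {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, ip),
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa"
	for !sshReady("docker", "192.168.72.140", key) {
		time.Sleep(time.Second)
	}
	fmt.Println("SSH available")
}
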
	I0829 20:26:39.147242   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetConfigRaw
	I0829 20:26:39.147931   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:39.150652   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.151055   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.151084   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.151395   68084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/config.json ...
	I0829 20:26:39.151581   68084 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:39.151601   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:39.151814   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.153861   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.154189   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.154222   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.154351   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.154575   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.154746   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.154875   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.155010   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.155219   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.155235   68084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:39.258973   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:39.259006   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.259261   68084 buildroot.go:166] provisioning hostname "default-k8s-diff-port-145096"
	I0829 20:26:39.259292   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.259467   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.262018   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.262472   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.262501   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.262707   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.262886   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.263034   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.263185   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.263344   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.263530   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.263547   68084 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-145096 && echo "default-k8s-diff-port-145096" | sudo tee /etc/hostname
	I0829 20:26:39.379437   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-145096
	
	I0829 20:26:39.379479   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.382263   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.382682   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.382704   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.382913   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.383128   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.383280   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.383389   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.383520   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.383675   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.383692   68084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-145096' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-145096/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-145096' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:39.491756   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:39.491790   68084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:39.491855   68084 buildroot.go:174] setting up certificates
	I0829 20:26:39.491869   68084 provision.go:84] configureAuth start
	I0829 20:26:39.491883   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.492150   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:39.494882   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.495241   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.495269   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.495452   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.497708   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.497980   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.498013   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.498097   68084 provision.go:143] copyHostCerts
	I0829 20:26:39.498157   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:39.498179   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:39.498249   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:39.498347   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:39.498356   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:39.498377   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:39.498430   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:39.498437   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:39.498455   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:39.498507   68084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-145096 san=[127.0.0.1 192.168.72.140 default-k8s-diff-port-145096 localhost minikube]
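configureAuth generates this server certificate in-process; minikube does not shell out to openssl here. Purely as a sketch of an equivalent certificate with the same SAN set (the key paths and the 365-day validity are assumptions, not from the log):
	# illustrative only: hand-mint a server cert with the SANs logged above
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.default-k8s-diff-port-145096"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.72.140,DNS:default-k8s-diff-port-145096,DNS:localhost,DNS:minikube")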
	I0829 20:26:39.584313   68084 provision.go:177] copyRemoteCerts
	I0829 20:26:39.584372   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:39.584398   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.587054   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.587377   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.587400   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.587630   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.587823   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.587952   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.588087   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:39.664394   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:39.688852   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0829 20:26:39.714653   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:39.737662   68084 provision.go:87] duration metric: took 245.781265ms to configureAuth
	I0829 20:26:39.737687   68084 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:39.737844   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:39.737911   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.740391   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.740659   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.740688   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.740911   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.741107   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.741256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.741434   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.741612   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.741777   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.741794   68084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:39.954811   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:39.954846   68084 machine.go:96] duration metric: took 803.251945ms to provisionDockerMachine
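For the CRIO_MINIKUBE_OPTIONS written above to reach the daemon, the guest's crio unit has to source /etc/sysconfig/crio.minikube. A drop-in of roughly this shape would do it; the actual unit layout on the minikube guest image is an assumption here, not something shown in the log:
	# assumed path: /etc/systemd/system/crio.service.d/10-minikube.conf
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS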
	I0829 20:26:39.954862   68084 start.go:293] postStartSetup for "default-k8s-diff-port-145096" (driver="kvm2")
	I0829 20:26:39.954877   68084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:39.954898   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:39.955237   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:39.955267   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.958071   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.958575   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.958605   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.958772   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.958969   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.959126   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.959287   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.037153   68084 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:40.041150   68084 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:40.041176   68084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:40.041235   68084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:40.041325   68084 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:40.041415   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:40.050654   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:40.073789   68084 start.go:296] duration metric: took 118.907407ms for postStartSetup
	I0829 20:26:40.073826   68084 fix.go:56] duration metric: took 18.558388385s for fixHost
	I0829 20:26:40.073846   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.076397   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.076749   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.076789   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.076999   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.077200   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.077374   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.077480   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.077598   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:40.077754   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:40.077765   68084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:40.175410   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963200.123461148
	
	I0829 20:26:40.175431   68084 fix.go:216] guest clock: 1724963200.123461148
	I0829 20:26:40.175437   68084 fix.go:229] Guest: 2024-08-29 20:26:40.123461148 +0000 UTC Remote: 2024-08-29 20:26:40.073830105 +0000 UTC m=+143.488576066 (delta=49.631043ms)
	I0829 20:26:40.175456   68084 fix.go:200] guest clock delta is within tolerance: 49.631043ms
	I0829 20:26:40.175463   68084 start.go:83] releasing machines lock for "default-k8s-diff-port-145096", held for 18.660059953s
	I0829 20:26:40.175497   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.175781   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:40.179031   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.179457   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.179495   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.179695   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180444   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180528   68084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:40.180581   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.180706   68084 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:40.180729   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.183580   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.183819   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.183963   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.183989   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.184172   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.184174   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.184213   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.184345   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.184416   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.184511   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.184624   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.184626   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.184794   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.184896   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.259854   68084 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:40.290102   68084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:40.439112   68084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:40.449465   68084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:40.449546   68084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:40.471182   68084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:40.471209   68084 start.go:495] detecting cgroup driver to use...
	I0829 20:26:40.471276   68084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:40.492605   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:40.508500   68084 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:40.508561   68084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:40.527534   68084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:40.542013   68084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:40.663843   68084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:40.837228   68084 docker.go:233] disabling docker service ...
	I0829 20:26:40.837293   68084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:40.854285   68084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:40.870148   68084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:41.017156   68084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:41.150436   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:41.165239   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:41.184783   68084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:26:41.184847   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.197358   68084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:41.197417   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.211222   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.225297   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.237205   68084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:41.249875   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.261928   68084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.286145   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
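After this run of sed edits, the touched keys in /etc/crio/crio.conf.d/02-crio.conf read roughly as below. The [crio.image]/[crio.runtime] headers are shown for orientation only; the edits match on key names, not sections:
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]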
	I0829 20:26:41.299119   68084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:41.313001   68084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:41.313062   68084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:41.335390   68084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
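Note that this netfilter setup is imperative and per-boot: when the bridge sysctl is missing, br_netfilter is modprobed and ip_forward is flipped directly under /proc. A persistent variant (a sketch, not what the test does) would be:
	sudo modprobe br_netfilter
	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
	  | sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system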
	I0829 20:26:41.348803   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:41.464387   68084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:41.564675   68084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:41.564746   68084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:41.569620   68084 start.go:563] Will wait 60s for crictl version
	I0829 20:26:41.569680   68084 ssh_runner.go:195] Run: which crictl
	I0829 20:26:41.573519   68084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:41.615105   68084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:41.615190   68084 ssh_runner.go:195] Run: crio --version
	I0829 20:26:41.644597   68084 ssh_runner.go:195] Run: crio --version
	I0829 20:26:41.678211   68084 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:26:39.248306   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:39.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.248975   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.748948   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.249144   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.749013   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.248363   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.748624   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.248833   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.748535   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
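The interleaved lines from process 67607 are a liveness poll: roughly every 500ms it re-runs pgrep until a kube-apiserver whose command line matches the minikube pattern shows up. In plain shell the same wait is:
	# shell equivalent of the poll above (illustration only)
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done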
	I0829 20:26:40.200748   66841 main.go:141] libmachine: (no-preload-397724) Calling .Start
	I0829 20:26:40.200955   66841 main.go:141] libmachine: (no-preload-397724) Ensuring networks are active...
	I0829 20:26:40.201793   66841 main.go:141] libmachine: (no-preload-397724) Ensuring network default is active
	I0829 20:26:40.202128   66841 main.go:141] libmachine: (no-preload-397724) Ensuring network mk-no-preload-397724 is active
	I0829 20:26:40.202729   66841 main.go:141] libmachine: (no-preload-397724) Getting domain xml...
	I0829 20:26:40.203538   66841 main.go:141] libmachine: (no-preload-397724) Creating domain...
	I0829 20:26:41.516739   66841 main.go:141] libmachine: (no-preload-397724) Waiting to get IP...
	I0829 20:26:41.517840   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:41.518273   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:41.518353   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:41.518262   68926 retry.go:31] will retry after 295.070588ms: waiting for machine to come up
	I0829 20:26:41.814782   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:41.815346   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:41.815369   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:41.815291   68926 retry.go:31] will retry after 239.48527ms: waiting for machine to come up
	I0829 20:26:42.056957   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:42.057459   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:42.057509   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:42.057436   68926 retry.go:31] will retry after 452.012872ms: waiting for machine to come up
	I0829 20:26:42.511068   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:42.511551   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:42.511590   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:42.511520   68926 retry.go:31] will retry after 552.227159ms: waiting for machine to come up
	I0829 20:26:43.066096   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:43.066642   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:43.066673   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:43.066605   68926 retry.go:31] will retry after 666.699647ms: waiting for machine to come up
	I0829 20:26:43.734695   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:43.735402   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:43.735430   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:43.735309   68926 retry.go:31] will retry after 770.756485ms: waiting for machine to come up
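Process 66841 is restarting the no-preload VM and polling libvirt with jittered backoff until a DHCP lease appears for the domain's MAC. The same wait can be done by hand against libvirt's lease table (sketch; network name and MAC taken from the log):
	until virsh net-dhcp-leases mk-no-preload-397724 | grep -q '52:54:00:e9:bf:ac'; do
	  sleep 1
	done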
	I0829 20:26:40.709553   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:42.712799   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:41.679441   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:41.682807   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:41.683205   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:41.683236   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:41.683489   68084 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:41.688766   68084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:41.705764   68084 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:41.705918   68084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:26:41.705977   68084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:41.752884   68084 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:26:41.752955   68084 ssh_runner.go:195] Run: which lz4
	I0829 20:26:41.757600   68084 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:41.762158   68084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:41.762188   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 20:26:43.201094   68084 crio.go:462] duration metric: took 1.443534343s to copy over tarball
	I0829 20:26:43.201176   68084 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:45.400911   68084 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199703125s)
	I0829 20:26:45.400942   68084 crio.go:469] duration metric: took 2.199820098s to extract the tarball
	I0829 20:26:45.400948   68084 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 20:26:45.439120   68084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:45.482658   68084 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:26:45.482679   68084 cache_images.go:84] Images are preloaded, skipping loading
	I0829 20:26:45.482687   68084 kubeadm.go:934] updating node { 192.168.72.140 8444 v1.31.0 crio true true} ...
	I0829 20:26:45.482801   68084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-145096 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:45.482873   68084 ssh_runner.go:195] Run: crio config
	I0829 20:26:45.532108   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:26:45.532132   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:45.532146   68084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:45.532169   68084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.140 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-145096 NodeName:default-k8s-diff-port-145096 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:26:45.532310   68084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.140
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-145096"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:45.532367   68084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:26:45.542670   68084 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:45.542744   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:45.552622   68084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0829 20:26:45.569765   68084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:45.590972   68084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
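The rendered config lands as kubeadm.yaml.new and is only promoted after the diff check further down. To sanity-check such a file by hand, recent kubeadm releases ship a validator (sketch; the binary path is taken from the log):
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new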
	I0829 20:26:45.611421   68084 ssh_runner.go:195] Run: grep 192.168.72.140	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:45.615585   68084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:45.627911   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:45.757504   68084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:45.776103   68084 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096 for IP: 192.168.72.140
	I0829 20:26:45.776128   68084 certs.go:194] generating shared ca certs ...
	I0829 20:26:45.776159   68084 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:45.776337   68084 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:45.776388   68084 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:45.776400   68084 certs.go:256] generating profile certs ...
	I0829 20:26:45.776511   68084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/client.key
	I0829 20:26:45.776600   68084 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.key.5a49b6b2
	I0829 20:26:45.776650   68084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.key
	I0829 20:26:45.776788   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:45.776827   68084 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:45.776840   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:45.776869   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:45.776940   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:45.776977   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:45.777035   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:45.777916   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:45.823419   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:45.868291   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:45.905178   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:45.934956   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 20:26:45.967570   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 20:26:45.994332   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:46.019268   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 20:26:46.044075   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:46.067906   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:46.092513   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:46.117686   68084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:46.137048   68084 ssh_runner.go:195] Run: openssl version
	I0829 20:26:46.143203   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:46.156407   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.161397   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.161461   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.167587   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:46.179034   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:46.190204   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.194953   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.195010   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.203121   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:46.218606   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:46.233586   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.240100   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.240155   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.247473   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
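The link targets above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: each symlink is named after the hash of the certificate's subject so the default verifier can locate it. Spelled out for the first one:
	# derive the b5213941.0 link name seen above
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"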
	I0829 20:26:46.259417   68084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:46.264875   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:46.270914   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:46.277211   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:46.283138   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:46.289137   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:46.295044   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
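Each -checkend 86400 call exits non-zero if the certificate expires within the next 24 hours, which is what would trigger renewal on restart. The same sweep as one loop (illustration; the cert list is copied from the log):
	for c in apiserver-etcd-client apiserver-kubelet-client \
	         etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
	    || echo "$c.crt expires within 24h"
	done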
	I0829 20:26:46.301027   68084 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:46.301120   68084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:46.301177   68084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:46.342913   68084 cri.go:89] found id: ""
	I0829 20:26:46.342988   68084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:46.354198   68084 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:46.354221   68084 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:46.354269   68084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:46.364173   68084 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:46.365182   68084 kubeconfig.go:125] found "default-k8s-diff-port-145096" server: "https://192.168.72.140:8444"
	I0829 20:26:46.367560   68084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:46.377550   68084 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.140
	I0829 20:26:46.377584   68084 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:46.377596   68084 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:46.377647   68084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:46.419141   68084 cri.go:89] found id: ""
	I0829 20:26:46.419215   68084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:46.438037   68084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:46.449021   68084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:46.449041   68084 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:46.449093   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 20:26:46.459396   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:46.459445   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:46.469964   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 20:26:46.479604   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:46.479655   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:46.492672   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 20:26:46.504656   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:46.504714   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:46.520206   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 20:26:46.532067   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:46.532137   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
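The block above is a per-file sweep: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane URL and removed when the check fails (here all four are simply absent). As a single loop (illustration):
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done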
	I0829 20:26:46.541931   68084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:46.551973   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:44.248615   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:44.748528   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.748453   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.248927   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.748628   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.248556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.748332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.248373   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.749111   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
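Interleaved with this, process 67607 is polling for a kube-apiserver process whose command line mentions the profile, retrying roughly every 500ms. A self-contained sketch of that wait loop with an assumed timeout (minikube's own loop lives in api_server.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Flags mirror the log: -f matches the full command line,
		// -x requires an exact match, -n picks the newest process.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(90 * time.Second); err != nil {
		fmt.Println(err)
	}
}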
	I0829 20:26:44.507808   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:44.508340   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:44.508375   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:44.508288   68926 retry.go:31] will retry after 754.614285ms: waiting for machine to come up
	I0829 20:26:45.264587   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:45.265039   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:45.265065   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:45.265003   68926 retry.go:31] will retry after 1.3758308s: waiting for machine to come up
	I0829 20:26:46.642139   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:46.642666   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:46.642690   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:46.642612   68926 retry.go:31] will retry after 1.255043608s: waiting for machine to come up
	I0829 20:26:47.899849   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:47.900330   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:47.900360   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:47.900291   68926 retry.go:31] will retry after 1.517293529s: waiting for machine to come up
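The retry.go lines from the no-preload-397724 machine show a helper that re-queries the hypervisor for the domain's DHCP lease, with waits that grow with jitter between attempts (754ms, 1.37s, 1.25s, 1.5s, ...). A sketch of that style of jittered, growing backoff; lookupIP is a stand-in for the libvirt lease query and the constants are illustrative:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the hypervisor for the domain's lease.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.50.214", nil
}

func main() {
	delay := 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the delay and add jitter, matching the uneven waits in the log.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}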
	I0829 20:26:45.208067   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:48.177040   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:46.668397   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.497182   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.725573   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.785427   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.850878   68084 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:47.850972   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.351404   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.852023   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.351402   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.367249   68084 api_server.go:72] duration metric: took 1.516370766s to wait for apiserver process to appear ...
	I0829 20:26:49.367283   68084 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:26:49.367312   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.595653   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:51.595683   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:51.595698   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.609883   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:51.609989   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:51.867454   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.872297   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:51.872328   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 500:
	[... identical healthz response body omitted ...]
	I0829 20:26:52.367462   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:52.375300   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 500:
	[... healthz response body identical to the 20:26:51 response above; omitted ...]
	W0829 20:26:52.375333   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 500:
	[... identical healthz response body omitted ...]
	I0829 20:26:52.867827   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:52.872814   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 200:
	ok
	I0829 20:26:52.881061   68084 api_server.go:141] control plane version: v1.31.0
	I0829 20:26:52.881092   68084 api_server.go:131] duration metric: took 3.513801329s to wait for apiserver health ...
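Taken together, the 68084 healthz sequence is the normal apiserver startup progression: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200. A minimal poller in the same spirit; TLS verification is skipped here only because this sketch carries no client certificates (minikube trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: the apiserver serves a cert this program cannot verify.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.140:8444/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s returned %d\n", url, resp.StatusCode)
		if resp.StatusCode == http.StatusOK {
			fmt.Println(string(body)) // "ok"
			return
		}
		// 403 (anonymous forbidden) and 500 (hooks pending) both mean "retry".
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}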
	I0829 20:26:52.881102   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:26:52.881111   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:52.882993   68084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:26:49.248291   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.748360   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.248427   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.749087   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.248381   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.748488   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.249250   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.748715   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.748915   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.419781   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:49.420286   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:49.420314   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:49.420244   68926 retry.go:31] will retry after 2.638145598s: waiting for machine to come up
	I0829 20:26:52.059935   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:52.060367   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:52.060411   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:52.060341   68926 retry.go:31] will retry after 2.696474949s: waiting for machine to come up
	I0829 20:26:50.207945   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:52.709407   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:52.884310   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:26:52.901134   68084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
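The 496-byte 1-k8s.conflist copied above is the bridge CNI configuration. The exact file is not shown in the log; the snippet below writes a minimal conflist of the general shape such a bridge config takes (the subnet and plugin options here are illustrative assumptions, not the bytes minikube ships):

package main

import (
	"fmt"
	"os"
)

// A minimal CNI bridge conflist: a bridge plugin with host-local IPAM,
// plus portmap for hostPort support.
const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Println(err)
	}
}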
	I0829 20:26:52.931390   68084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:26:52.952109   68084 system_pods.go:59] 8 kube-system pods found
	I0829 20:26:52.952154   68084 system_pods.go:61] "coredns-6f6b679f8f-5mkxp" [1d3c3a01-1fa6-4d1d-8750-deef4475ba96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:26:52.952166   68084 system_pods.go:61] "etcd-default-k8s-diff-port-145096" [03096d69-48af-4372-9fa0-5a45dcb9603c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:26:52.952177   68084 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-145096" [4be8793a-7934-4c89-a840-49e769673f5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:26:52.952188   68084 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-145096" [a3bec7f8-8163-4afa-af53-282ad755b788] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:26:52.952202   68084 system_pods.go:61] "kube-proxy-b4ffx" [d97e74d5-21d4-4c96-9d94-77767fc4e609] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:26:52.952210   68084 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-145096" [c416b52b-ebf4-4714-bed6-3d25bfaa373c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:26:52.952217   68084 system_pods.go:61] "metrics-server-6867b74b74-5kk6q" [e74224b1-8242-4f7f-b8d6-7d9d4839be53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:26:52.952224   68084 system_pods.go:61] "storage-provisioner" [4e97da7c-af4b-40b3-83fb-82b6c2a2adef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:26:52.952236   68084 system_pods.go:74] duration metric: took 20.81979ms to wait for pod list to return data ...
	I0829 20:26:52.952245   68084 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:26:52.961169   68084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:26:52.961202   68084 node_conditions.go:123] node cpu capacity is 2
	I0829 20:26:52.961214   68084 node_conditions.go:105] duration metric: took 8.963546ms to run NodePressure ...
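The system-pods and NodePressure checks are plain Kubernetes API reads: list the kube-system pods, then inspect each node's capacity and pressure conditions. A client-go sketch of the node half, assuming a kubeconfig at the default path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n",
			n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should be False on a healthy node.
			fmt.Printf("  condition %s=%s\n", c.Type, c.Status)
		}
	}
}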
	I0829 20:26:52.961234   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:53.425201   68084 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:26:53.429605   68084 kubeadm.go:739] kubelet initialised
	I0829 20:26:53.429625   68084 kubeadm.go:740] duration metric: took 4.401784ms waiting for restarted kubelet to initialise ...
	I0829 20:26:53.429632   68084 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:53.434501   68084 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:55.442290   68084 pod_ready.go:103] pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace has status "Ready":"False"
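pod_ready.go's wait reduces to polling a pod until its Ready condition reports True. A compact client-go equivalent using the coredns pod name from the log (same kubeconfig assumption as the sketch above):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollUntilContextTimeout(context.TODO(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-6f6b679f8f-5mkxp", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			return isPodReady(pod), nil
		})
	fmt.Println("wait finished:", err)
}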
	I0829 20:26:54.248998   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:54.748438   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.249066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.749293   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.248457   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.748509   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.248949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.748228   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.248717   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.748412   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:54.760175   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:54.760689   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:54.760736   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:54.760667   68926 retry.go:31] will retry after 3.651969786s: waiting for machine to come up
	I0829 20:26:58.415601   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.416019   66841 main.go:141] libmachine: (no-preload-397724) Found IP for machine: 192.168.50.214
	I0829 20:26:58.416045   66841 main.go:141] libmachine: (no-preload-397724) Reserving static IP address...
	I0829 20:26:58.416063   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has current primary IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.416507   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "no-preload-397724", mac: "52:54:00:e9:bf:ac", ip: "192.168.50.214"} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.416533   66841 main.go:141] libmachine: (no-preload-397724) DBG | skip adding static IP to network mk-no-preload-397724 - found existing host DHCP lease matching {name: "no-preload-397724", mac: "52:54:00:e9:bf:ac", ip: "192.168.50.214"}
	I0829 20:26:58.416543   66841 main.go:141] libmachine: (no-preload-397724) Reserved static IP address: 192.168.50.214
	I0829 20:26:58.416552   66841 main.go:141] libmachine: (no-preload-397724) Waiting for SSH to be available...
	I0829 20:26:58.416562   66841 main.go:141] libmachine: (no-preload-397724) DBG | Getting to WaitForSSH function...
	I0829 20:26:58.418849   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.419170   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.419199   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.419312   66841 main.go:141] libmachine: (no-preload-397724) DBG | Using SSH client type: external
	I0829 20:26:58.419351   66841 main.go:141] libmachine: (no-preload-397724) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa (-rw-------)
	I0829 20:26:58.419397   66841 main.go:141] libmachine: (no-preload-397724) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:58.419414   66841 main.go:141] libmachine: (no-preload-397724) DBG | About to run SSH command:
	I0829 20:26:58.419444   66841 main.go:141] libmachine: (no-preload-397724) DBG | exit 0
	I0829 20:26:58.542594   66841 main.go:141] libmachine: (no-preload-397724) DBG | SSH cmd err, output: <nil>: 
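WaitForSSH, visible in the DBG lines above, shells out to the system ssh with host-key checking disabled and runs `exit 0` until it succeeds, which proves both that sshd is up and that key auth works. A reduced sketch of the probe using the address and key path from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa",
		"docker@192.168.50.214",
		"exit 0", // no-op command: success means sshd is up and auth works
	}
	for {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
}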
	I0829 20:26:58.542925   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetConfigRaw
	I0829 20:26:58.543582   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:58.546057   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.546384   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.546422   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.546691   66841 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/config.json ...
	I0829 20:26:58.546871   66841 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:58.546890   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:58.547113   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.549493   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.549816   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.549854   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.549972   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.550140   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.550260   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.550388   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.550581   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.550805   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.550822   66841 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:58.658784   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:58.658827   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.659063   66841 buildroot.go:166] provisioning hostname "no-preload-397724"
	I0829 20:26:58.659083   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.659220   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.661932   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.662294   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.662320   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.662485   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.662695   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.662880   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.663011   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.663168   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.663343   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.663356   66841 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-397724 && echo "no-preload-397724" | sudo tee /etc/hostname
	I0829 20:26:58.790591   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-397724
	
	I0829 20:26:58.790618   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.793294   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.793612   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.793639   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.793849   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.794035   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.794192   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.794289   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.794430   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.794656   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.794678   66841 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-397724' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-397724/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-397724' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:58.915925   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:58.915958   66841 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:58.915981   66841 buildroot.go:174] setting up certificates
	I0829 20:26:58.915991   66841 provision.go:84] configureAuth start
	I0829 20:26:58.916000   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.916279   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:58.919034   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.919385   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.919415   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.919523   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.921483   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.921805   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.921831   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.922015   66841 provision.go:143] copyHostCerts
	I0829 20:26:58.922062   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:58.922079   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:58.922135   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:58.922242   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:58.922256   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:58.922288   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:58.922365   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:58.922375   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:58.922400   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:58.922491   66841 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.no-preload-397724 san=[127.0.0.1 192.168.50.214 localhost minikube no-preload-397724]
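The server cert generated above is signed by the minikube CA and carries SANs for loopback, the machine IP, and the hostnames clients may dial (the san=[...] list in the log). A condensed crypto/x509 sketch of issuing such a certificate; the PEM paths are placeholders and the CA key is assumed to be PKCS#1 RSA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func mustDecode(path string) []byte {
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustDecode("ca.pem")) // placeholder paths
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem"))
	if err != nil {
		panic(err)
	}
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-397724"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log's san=[...] list.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.214")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-397724"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}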
	I0829 20:26:55.206462   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:57.207175   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.207454   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.264390   66841 provision.go:177] copyRemoteCerts
	I0829 20:26:59.264446   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:59.264467   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.267259   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.267603   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.267626   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.267794   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.268014   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.268190   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.268367   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.353746   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:59.378289   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0829 20:26:59.402330   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:59.425412   66841 provision.go:87] duration metric: took 509.408381ms to configureAuth
	I0829 20:26:59.425442   66841 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:59.425616   66841 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:59.425679   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.428148   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.428503   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.428545   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.428698   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.428906   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.429077   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.429227   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.429365   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:59.429511   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:59.429524   66841 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:59.666382   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:59.666408   66841 machine.go:96] duration metric: took 1.11952301s to provisionDockerMachine
	I0829 20:26:59.666422   66841 start.go:293] postStartSetup for "no-preload-397724" (driver="kvm2")
	I0829 20:26:59.666436   66841 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:59.666458   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.666833   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:59.666881   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.669407   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.669725   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.669751   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.669888   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.670073   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.670214   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.670316   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.753440   66841 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:59.758408   66841 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:59.758431   66841 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:59.758509   66841 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:59.758632   66841 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:59.758753   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:59.768355   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:59.792742   66841 start.go:296] duration metric: took 126.308201ms for postStartSetup
	I0829 20:26:59.792782   66841 fix.go:56] duration metric: took 19.617155195s for fixHost
	I0829 20:26:59.792806   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.795380   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.795744   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.795781   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.795917   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.796124   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.796237   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.796376   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.796488   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:59.796668   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:59.796680   66841 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:59.903539   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963219.868600963
	
	I0829 20:26:59.903564   66841 fix.go:216] guest clock: 1724963219.868600963
	I0829 20:26:59.903574   66841 fix.go:229] Guest: 2024-08-29 20:26:59.868600963 +0000 UTC Remote: 2024-08-29 20:26:59.792787483 +0000 UTC m=+355.719318860 (delta=75.81348ms)
	I0829 20:26:59.903623   66841 fix.go:200] guest clock delta is within tolerance: 75.81348ms
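fix.go's clock check reads the guest's time with `date +%s.%N`, compares it to the host clock, and resyncs only when the delta exceeds a tolerance. A local sketch of the comparison; running `date` locally stands in for the SSH round-trip, and the 2s tolerance is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Stand-in for `ssh guest date +%s.%N`; run locally for the sketch.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %s - would resync\n", delta)
	}
}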
	I0829 20:26:59.903632   66841 start.go:83] releasing machines lock for "no-preload-397724", held for 19.728042303s
	I0829 20:26:59.903676   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.903967   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:59.906798   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.907183   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.907212   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.907378   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.907804   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.907970   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.908038   66841 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:59.908072   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.908324   66841 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:59.908346   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.910843   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911025   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911187   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.911215   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911325   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.911415   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.911437   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911485   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.911640   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.911649   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.911847   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.911848   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.911978   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.912119   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:27:00.023116   66841 ssh_runner.go:195] Run: systemctl --version
	I0829 20:27:00.029346   66841 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:27:00.169122   66841 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:27:00.176823   66841 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:27:00.176913   66841 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:27:00.194795   66841 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:27:00.194836   66841 start.go:495] detecting cgroup driver to use...
	I0829 20:27:00.194906   66841 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:27:00.212145   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:27:00.226584   66841 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:27:00.226656   66841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:27:00.240525   66841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:27:00.256847   66841 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:27:00.371938   66841 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:27:00.516891   66841 docker.go:233] disabling docker service ...
	I0829 20:27:00.516964   66841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:27:00.531127   66841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:27:00.543483   66841 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:27:00.672033   66841 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:27:00.794828   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:27:00.809204   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:27:00.828484   66841 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:27:00.828547   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.839273   66841 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:27:00.839344   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.850336   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.860980   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.871661   66841 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:27:00.884343   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.895190   66841 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.912700   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
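The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, pin conmon to the pod cgroup, and open unprivileged ports via default_sysctls. The same edits sketched with Go regexps instead of sed (path and values from the log; not minikube's own code):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// pause_image = "registry.k8s.io/pause:3.10"
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	// Allow pods to bind low ports without privileges.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(s) {
		s += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	if err := os.WriteFile(path, []byte(s), 0644); err != nil {
		panic(err)
	}
}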
	I0829 20:27:00.923383   66841 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:27:00.934168   66841 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:27:00.934231   66841 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:27:00.948181   66841 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
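The netfilter step tolerates a missing bridge sysctl: if `sysctl net.bridge.bridge-nf-call-iptables` fails because br_netfilter is not loaded, minikube loads the module and proceeds, then enables IPv4 forwarding. The same fallback as a root-only sketch (the commands are the ones in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The bridge sysctl only exists once the br_netfilter module is loaded.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge sysctl missing, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			panic(err)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("netfilter prerequisites configured")
}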
	I0829 20:27:00.959121   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:27:01.072055   66841 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:27:01.163024   66841 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:27:01.163104   66841 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:27:01.167949   66841 start.go:563] Will wait 60s for crictl version
	I0829 20:27:01.168011   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.171707   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:27:01.212950   66841 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:27:01.213031   66841 ssh_runner.go:195] Run: crio --version
	I0829 20:27:01.242181   66841 ssh_runner.go:195] Run: crio --version
	I0829 20:27:01.276389   66841 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
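After restarting crio, the log shows two bounded waits: up to 60s for the CRI socket to appear (checked with stat), then up to 60s for crictl to report a version. A self-contained sketch of such a stat-based wait loop; the 60s bound and socket path come from the log, while the helper itself is illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path until it exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present; the runtime is accepting connections
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}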
	I0829 20:26:57.441729   68084 pod_ready.go:93] pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:57.441753   68084 pod_ready.go:82] duration metric: took 4.007206558s for pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:57.441762   68084 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:59.448210   68084 pod_ready.go:103] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.248692   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:59.748815   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.748264   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.249241   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.748894   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.249045   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.748765   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.248902   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.748333   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.277829   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:27:01.280762   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:27:01.281144   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:27:01.281171   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:27:01.281367   66841 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 20:27:01.285714   66841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:27:01.297903   66841 kubeadm.go:883] updating cluster {Name:no-preload-397724 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:27:01.298010   66841 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:27:01.298041   66841 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:27:01.331474   66841 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:27:01.331498   66841 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:27:01.331566   66841 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.331572   66841 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.331609   66841 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.331632   66841 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.331643   66841 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.331615   66841 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0829 20:27:01.331737   66841 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.331758   66841 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.333182   66841 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.333233   66841 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.333206   66841 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.333195   66841 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.333191   66841 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.333278   66841 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.333191   66841 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.333333   66841 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0829 20:27:01.507028   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.514096   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.526653   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.530292   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.531828   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.534432   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.550465   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0829 20:27:01.613161   66841 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0829 20:27:01.613209   66841 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.613287   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.631193   66841 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0829 20:27:01.631236   66841 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.631285   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.687868   66841 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0829 20:27:01.687911   66841 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.687967   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.700369   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.713036   66841 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0829 20:27:01.713102   66841 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.713159   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.722934   66841 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0829 20:27:01.722991   66841 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.723042   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.722941   66841 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0829 20:27:01.723130   66841 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.723159   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.785242   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.785246   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.785342   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.785391   66841 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0829 20:27:01.785438   66841 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.785450   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.785474   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.785479   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.785534   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.925322   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.925371   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.925374   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.925474   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.925518   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.925569   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.925593   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.072628   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:02.072690   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:02.072744   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:02.072822   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:02.072867   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.176999   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0829 20:27:02.177031   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:02.177503   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:02.177507   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.177572   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0829 20:27:02.177581   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0829 20:27:02.177678   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:02.177682   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:02.185515   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0829 20:27:02.185585   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.185624   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:02.259015   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0829 20:27:02.259076   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0829 20:27:02.259087   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0829 20:27:02.259106   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.259113   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0829 20:27:02.259138   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0829 20:27:02.259147   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.259155   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:02.259152   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 20:27:02.259139   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0829 20:27:02.259157   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:02.259240   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:01.208076   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:03.208339   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:01.954153   68084 pod_ready.go:103] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:03.454991   68084 pod_ready.go:93] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:03.455023   68084 pod_ready.go:82] duration metric: took 6.013253793s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:03.455036   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:05.461938   68084 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:04.249082   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:04.748738   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.248398   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.749056   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.248693   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.748904   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.249145   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.749131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.248774   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.748444   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:04.630344   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.371149915s)
	I0829 20:27:04.630373   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.371188324s)
	I0829 20:27:04.630410   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.371191825s)
	I0829 20:27:04.630432   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0829 20:27:04.630413   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0829 20:27:04.630379   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0829 20:27:04.630465   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.371187188s)
	I0829 20:27:04.630478   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:04.630481   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0829 20:27:04.630561   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:06.684986   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.054398317s)
	I0829 20:27:06.685019   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0829 20:27:06.685047   66841 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:06.685098   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:05.707657   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:07.708034   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:06.965873   68084 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.965904   68084 pod_ready.go:82] duration metric: took 3.51085868s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.965918   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.976464   68084 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.976489   68084 pod_ready.go:82] duration metric: took 10.562771ms for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.976502   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b4ffx" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.982178   68084 pod_ready.go:93] pod "kube-proxy-b4ffx" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.982197   68084 pod_ready.go:82] duration metric: took 5.687889ms for pod "kube-proxy-b4ffx" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.982205   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.987316   68084 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.987333   68084 pod_ready.go:82] duration metric: took 5.122275ms for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.987342   68084 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:08.994794   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:11.493940   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:09.248746   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:09.748722   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.249074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.748647   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.248236   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.749057   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.249227   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.748688   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.749298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.365120   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.679993065s)
	I0829 20:27:10.365150   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0829 20:27:10.365182   66841 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:10.365256   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:12.122371   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.757087653s)
	I0829 20:27:12.122409   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0829 20:27:12.122434   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:12.122564   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:13.575108   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.45251018s)
	I0829 20:27:13.575137   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0829 20:27:13.575165   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:13.575210   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:09.708364   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:11.708491   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:14.207383   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:13.494124   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:15.993564   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:14.249254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:14.748957   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.249229   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.749137   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.248967   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.748254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.248929   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.748339   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.248666   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.748712   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.742286   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.16705417s)
	I0829 20:27:15.742320   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0829 20:27:15.742348   66841 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:15.742398   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:16.391977   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0829 20:27:16.392017   66841 cache_images.go:123] Successfully loaded all cached images
	I0829 20:27:16.392022   66841 cache_images.go:92] duration metric: took 15.060512795s to LoadCachedImages
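The 15s LoadCachedImages phase above follows one pattern per image: inspect it in the runtime with podman, remove the stale reference with crictl when the stored hash does not match, then stream the cached tarball in with podman load. A simplified sketch of that flow; this is a hypothetical helper, since minikube runs these commands over ssh_runner and also compares image hashes, which is omitted here:

package main

import (
	"fmt"
	"os/exec"
)

// ensureImage loads a cached image tarball unless the reference already
// resolves in the container runtime.
func ensureImage(ref, tarball string) error {
	// Already present? (the real code also checks the image ID against the cache)
	if exec.Command("sudo", "podman", "image", "inspect", ref).Run() == nil {
		return nil
	}
	// Clear any stale tag; ignore "not found" errors.
	_ = exec.Command("sudo", "crictl", "rmi", ref).Run()
	// Load the cached tarball into the runtime's image store.
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := ensureImage("registry.k8s.io/etcd:3.5.15-0",
		"/var/lib/minikube/images/etcd_3.5.15-0")
	fmt.Println(err)
}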
	I0829 20:27:16.392034   66841 kubeadm.go:934] updating node { 192.168.50.214 8443 v1.31.0 crio true true} ...
	I0829 20:27:16.392139   66841 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-397724 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:27:16.392203   66841 ssh_runner.go:195] Run: crio config
	I0829 20:27:16.445382   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:27:16.445406   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:27:16.445420   66841 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:27:16.445448   66841 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.214 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-397724 NodeName:no-preload-397724 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:27:16.445612   66841 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-397724"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:27:16.445671   66841 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:27:16.456505   66841 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:27:16.456560   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:27:16.467361   66841 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0829 20:27:16.484700   66841 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:27:16.503026   66841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0829 20:27:16.519867   66841 ssh_runner.go:195] Run: grep 192.168.50.214	control-plane.minikube.internal$ /etc/hosts
	I0829 20:27:16.523648   66841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:27:16.535642   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:27:16.671027   66841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:27:16.688692   66841 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724 for IP: 192.168.50.214
	I0829 20:27:16.688712   66841 certs.go:194] generating shared ca certs ...
	I0829 20:27:16.688727   66841 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:27:16.688883   66841 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:27:16.688944   66841 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:27:16.688957   66841 certs.go:256] generating profile certs ...
	I0829 20:27:16.689053   66841 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.key
	I0829 20:27:16.689132   66841 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.key.1f535ae9
	I0829 20:27:16.689182   66841 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.key
	I0829 20:27:16.689360   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:27:16.689400   66841 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:27:16.689415   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:27:16.689450   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:27:16.689504   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:27:16.689540   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:27:16.689596   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:27:16.690277   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:27:16.747582   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:27:16.782064   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:27:16.816382   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:27:16.851548   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 20:27:16.882919   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:27:16.907439   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:27:16.932392   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:27:16.957451   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:27:16.982482   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:27:17.006032   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:27:17.030052   66841 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:27:17.047792   66841 ssh_runner.go:195] Run: openssl version
	I0829 20:27:17.053922   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:27:17.065219   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.069592   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.069647   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.075853   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:27:17.086727   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:27:17.097935   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.102198   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.102252   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.108031   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:27:17.119868   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:27:17.131513   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.136434   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.136497   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.142219   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:27:17.153448   66841 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:27:17.158375   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:27:17.165156   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:27:17.170927   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:27:17.176669   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:27:17.182293   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:27:17.187936   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
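Each "-checkend 86400" invocation above asks openssl whether the certificate expires within the next 24 hours. The same check can be expressed in Go with crypto/x509; this is an illustrative stand-in for the shelled-out openssl calls, not minikube's actual code path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent to `openssl x509 -checkend 86400` when d is 24h.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}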
	I0829 20:27:17.193572   66841 kubeadm.go:392] StartCluster: {Name:no-preload-397724 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:27:17.193682   66841 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:27:17.193754   66841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:27:17.238327   66841 cri.go:89] found id: ""
	I0829 20:27:17.238392   66841 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:27:17.248923   66841 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:27:17.248943   66841 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:27:17.248984   66841 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:27:17.263143   66841 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:27:17.264260   66841 kubeconfig.go:125] found "no-preload-397724" server: "https://192.168.50.214:8443"
	I0829 20:27:17.266448   66841 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:27:17.276347   66841 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.214
	I0829 20:27:17.276378   66841 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:27:17.276389   66841 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:27:17.276440   66841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:27:17.311409   66841 cri.go:89] found id: ""
	I0829 20:27:17.311476   66841 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:27:17.329204   66841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:27:17.339063   66841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:27:17.339079   66841 kubeadm.go:157] found existing configuration files:
	
	I0829 20:27:17.339118   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:27:17.348268   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:27:17.348324   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:27:17.357596   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:27:17.366504   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:27:17.366575   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:27:17.376068   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:27:17.385156   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:27:17.385220   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:27:17.394890   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:27:17.404213   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:27:17.404283   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:27:17.413669   66841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:27:17.423307   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:17.536003   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:17.990605   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.217809   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.297100   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
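Because existing configuration files were found, the restart path re-runs kubeadm in discrete phases rather than a full "kubeadm init": certs, kubeconfigs, kubelet start, control-plane manifests, then local etcd. A sketch of that phase loop, assuming kubeadm on PATH (minikube actually invokes its bundled binary under /var/lib/minikube/binaries with PATH overridden):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// The phase order mirrors the log above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		args := append([]string{"kubeadm", "init", "phase"}, strings.Fields(p)...)
		args = append(args, "--config", cfg)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("control plane restarted")
}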
	I0829 20:27:18.421185   66841 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:27:18.421283   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.922043   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.209618   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:18.707544   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:17.993609   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:19.994469   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:19.248924   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.248851   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.748547   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.248298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.748802   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.248680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.748271   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.248491   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.748803   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.422030   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.442023   66841 api_server.go:72] duration metric: took 1.020839747s to wait for apiserver process to appear ...
	I0829 20:27:19.442047   66841 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:27:19.442070   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.444156   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:27:22.444192   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:27:22.444211   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.466228   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:27:22.466258   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:27:22.942835   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.949338   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:27:22.949360   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:27:23.443069   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:23.447845   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:27:23.447876   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:27:23.942372   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:23.946517   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I0829 20:27:23.953497   66841 api_server.go:141] control plane version: v1.31.0
	I0829 20:27:23.953522   66841 api_server.go:131] duration metric: took 4.511467637s to wait for apiserver health ...
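	The healthz progression above is typical of a restarting apiserver: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling poststarthooks are still failing, then 200. A minimal sketch of such a polling loop follows; it skips TLS verification purely for brevity, whereas minikube's api_server.go authenticates against the cluster's certificates.

    // Sketch of polling https://<apiserver>/healthz until it returns 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
            },
        }
        url := "https://192.168.50.214:8443/healthz"
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                // 403: apiserver is up but RBAC bootstrap roles are not in place yet.
                // 500: the body lists which poststarthooks are still failing.
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }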
	I0829 20:27:23.953530   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:27:23.953536   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:27:23.955180   66841 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:27:23.956396   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:27:23.969429   66841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
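	The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration. The exact payload is not reproduced in the log; the conflist embedded below is an illustrative stand-in, and its plugin list and subnet are assumptions.

    // Illustrative only: writes a minimal bridge CNI conflist of the kind
    // minikube places at /etc/cni/net.d/1-k8s.conflist.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }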
	I0829 20:27:24.000989   66841 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:27:24.014200   66841 system_pods.go:59] 8 kube-system pods found
	I0829 20:27:24.014233   66841 system_pods.go:61] "coredns-6f6b679f8f-g7xxs" [f0148527-2146-4153-aa20-5ac97b664027] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:27:24.014240   66841 system_pods.go:61] "etcd-no-preload-397724" [f04b5ee4-f439-470a-b298-1a9ed569db70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:27:24.014248   66841 system_pods.go:61] "kube-apiserver-no-preload-397724" [2328f327-1744-4785-9266-3f992b977ef8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:27:24.014254   66841 system_pods.go:61] "kube-controller-manager-no-preload-397724" [0e63f04d-8627-45e9-ac80-70a0fe63f5db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:27:24.014260   66841 system_pods.go:61] "kube-proxy-57kbt" [9f85ce17-85a0-4a52-bdaf-4e3aee4d1a98] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:27:24.014267   66841 system_pods.go:61] "kube-scheduler-no-preload-397724" [106821c6-2444-470a-bac1-78838c0b1982] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:27:24.014273   66841 system_pods.go:61] "metrics-server-6867b74b74-668dg" [e3f3ab24-7777-40b0-a54c-00a294e7e68e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:27:24.014280   66841 system_pods.go:61] "storage-provisioner" [146bd02a-8f50-4d19-a188-4adc2bcc0a43] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:27:24.014288   66841 system_pods.go:74] duration metric: took 13.275941ms to wait for pod list to return data ...
	I0829 20:27:24.014298   66841 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:27:24.018932   66841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:27:24.018956   66841 node_conditions.go:123] node cpu capacity is 2
	I0829 20:27:24.018966   66841 node_conditions.go:105] duration metric: took 4.661993ms to run NodePressure ...
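	The system_pods and node_conditions checks above list the kube-system pods and read each node's capacity (here 2 CPUs and 17734596Ki of ephemeral storage). A client-go sketch of those two reads follows; the kubeconfig path is taken from the log, but the use of client-go is an assumption, since minikube drives these checks through its own helpers.

    // Sketch of the system_pods / node_conditions checks above.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))

        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }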
	I0829 20:27:24.018981   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:21.207144   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:23.208728   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:22.493988   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:24.494152   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:24.248456   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:24.748347   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.248337   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.748905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.248912   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.749302   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.249058   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.749105   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.248548   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.748298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:24.305237   66841 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:27:24.310640   66841 kubeadm.go:739] kubelet initialised
	I0829 20:27:24.310666   66841 kubeadm.go:740] duration metric: took 5.402212ms waiting for restarted kubelet to initialise ...
	I0829 20:27:24.310679   66841 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:27:24.316568   66841 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:26.325035   66841 pod_ready.go:103] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:28.336627   66841 pod_ready.go:103] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:25.706496   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:27.708228   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:26.992949   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:28.993682   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:30.993877   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:29.248994   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:29.749020   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.248983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.748247   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:31.249052   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:31.249133   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:31.293442   67607 cri.go:89] found id: ""
	I0829 20:27:31.293466   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.293473   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:31.293479   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:31.293527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:31.333976   67607 cri.go:89] found id: ""
	I0829 20:27:31.333999   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.334006   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:31.334011   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:31.334055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:31.373680   67607 cri.go:89] found id: ""
	I0829 20:27:31.373707   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.373715   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:31.373720   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:31.373766   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:31.407798   67607 cri.go:89] found id: ""
	I0829 20:27:31.407824   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.407832   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:31.407837   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:31.407893   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:31.444409   67607 cri.go:89] found id: ""
	I0829 20:27:31.444437   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.444445   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:31.444451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:31.444512   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:31.479313   67607 cri.go:89] found id: ""
	I0829 20:27:31.479333   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.479341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:31.479347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:31.479403   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:31.516056   67607 cri.go:89] found id: ""
	I0829 20:27:31.516089   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.516100   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:31.516108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:31.516168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:31.555324   67607 cri.go:89] found id: ""
	I0829 20:27:31.555349   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.555357   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:31.555365   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:31.555375   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:31.626397   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:31.626434   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:31.672006   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:31.672038   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:31.724691   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:31.724727   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:31.740283   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:31.740324   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:31.874007   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
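	Each "listing CRI containers" pass above shells out to crictl with a name filter, and the empty ID list for every component is what produces the `No container was found` warnings and the fallback to kubelet/dmesg/CRI-O log gathering. A sketch of that listing step, run locally rather than over SSH:

    // Run `sudo crictl ps -a --quiet --name=<component>` (the exact command in
    // the log above) and split the output into container IDs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }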
	I0829 20:27:29.824509   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:29.824530   66841 pod_ready.go:82] duration metric: took 5.507939145s for pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:29.824547   66841 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:31.833646   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:30.207213   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:32.706352   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:32.993932   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:35.494511   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:34.374203   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:34.387817   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:34.387888   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:34.423254   67607 cri.go:89] found id: ""
	I0829 20:27:34.423279   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.423286   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:34.423296   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:34.423343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:34.457741   67607 cri.go:89] found id: ""
	I0829 20:27:34.457768   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.457775   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:34.457781   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:34.457827   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:34.498432   67607 cri.go:89] found id: ""
	I0829 20:27:34.498457   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.498464   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:34.498469   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:34.498523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:34.534290   67607 cri.go:89] found id: ""
	I0829 20:27:34.534317   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.534324   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:34.534330   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:34.534380   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:34.570878   67607 cri.go:89] found id: ""
	I0829 20:27:34.570909   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.570919   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:34.570928   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:34.570986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:34.615735   67607 cri.go:89] found id: ""
	I0829 20:27:34.615762   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.615769   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:34.615775   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:34.615824   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:34.656667   67607 cri.go:89] found id: ""
	I0829 20:27:34.656706   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.656721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:34.656730   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:34.656779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:34.708906   67607 cri.go:89] found id: ""
	I0829 20:27:34.708928   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.708937   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:34.708947   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:34.708962   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:34.767382   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:34.767417   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:34.786523   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:34.786574   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:34.872832   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:34.872857   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:34.872871   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:34.954581   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:34.954620   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:37.497810   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:37.511479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:37.511539   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:37.547930   67607 cri.go:89] found id: ""
	I0829 20:27:37.547962   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.547972   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:37.547980   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:37.548035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:37.585281   67607 cri.go:89] found id: ""
	I0829 20:27:37.585304   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.585312   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:37.585318   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:37.585365   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:37.622201   67607 cri.go:89] found id: ""
	I0829 20:27:37.622229   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.622241   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:37.622246   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:37.622295   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:37.657248   67607 cri.go:89] found id: ""
	I0829 20:27:37.657274   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.657281   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:37.657289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:37.657335   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:37.691674   67607 cri.go:89] found id: ""
	I0829 20:27:37.691703   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.691711   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:37.691716   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:37.691764   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:37.729523   67607 cri.go:89] found id: ""
	I0829 20:27:37.729548   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.729557   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:37.729562   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:37.729609   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:37.764601   67607 cri.go:89] found id: ""
	I0829 20:27:37.764629   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.764637   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:37.764643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:37.764705   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:37.799228   67607 cri.go:89] found id: ""
	I0829 20:27:37.799259   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.799270   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:37.799281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:37.799301   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:37.848128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:37.848158   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:37.862610   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:37.862640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:37.936859   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:37.936888   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:37.936903   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:38.013647   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:38.013681   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:34.331889   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:36.332334   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.329545   66841 pod_ready.go:93] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.329566   66841 pod_ready.go:82] duration metric: took 7.50501178s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.329576   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.333442   66841 pod_ready.go:93] pod "kube-apiserver-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.333458   66841 pod_ready.go:82] duration metric: took 3.876755ms for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.333467   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.336952   66841 pod_ready.go:93] pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.336968   66841 pod_ready.go:82] duration metric: took 3.49531ms for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.336976   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-57kbt" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.340368   66841 pod_ready.go:93] pod "kube-proxy-57kbt" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.340383   66841 pod_ready.go:82] duration metric: took 3.401844ms for pod "kube-proxy-57kbt" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.340396   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.344111   66841 pod_ready.go:93] pod "kube-scheduler-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.344125   66841 pod_ready.go:82] duration metric: took 3.723924ms for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.344132   66841 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" ...
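	A pod_ready check passes once the pod's PodReady condition reports True, which is why pods above can be Running yet show "Ready":"False" while their containers are unready. A client-go sketch of the same condition test follows; the kubeconfig path and pod name are taken from the log, but client-go itself is an assumption (minikube uses its own wait helpers in pod_ready.go).

    // Poll a pod until its PodReady condition is True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "metrics-server-6867b74b74-668dg", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }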
	I0829 20:27:34.708682   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.206876   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.997827   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:40.494840   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:40.551395   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:40.568100   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:40.568181   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:40.616582   67607 cri.go:89] found id: ""
	I0829 20:27:40.616611   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.616623   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:40.616631   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:40.616695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:40.690580   67607 cri.go:89] found id: ""
	I0829 20:27:40.690620   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.690631   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:40.690638   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:40.690695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:40.733624   67607 cri.go:89] found id: ""
	I0829 20:27:40.733653   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.733662   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:40.733670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:40.733733   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:40.767499   67607 cri.go:89] found id: ""
	I0829 20:27:40.767528   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.767538   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:40.767546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:40.767619   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:40.806973   67607 cri.go:89] found id: ""
	I0829 20:27:40.807002   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.807009   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:40.807015   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:40.807079   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:40.842311   67607 cri.go:89] found id: ""
	I0829 20:27:40.842334   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.842341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:40.842347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:40.842401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:40.880208   67607 cri.go:89] found id: ""
	I0829 20:27:40.880238   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.880248   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:40.880255   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:40.880309   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:40.918395   67607 cri.go:89] found id: ""
	I0829 20:27:40.918424   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.918435   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:40.918445   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:40.918459   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:40.972396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:40.972437   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:40.986136   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:40.986169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:41.064600   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:41.064623   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:41.064634   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:41.146653   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:41.146687   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:43.687773   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:43.701576   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:43.701645   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:43.737259   67607 cri.go:89] found id: ""
	I0829 20:27:43.737282   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.737289   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:43.737299   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:43.737346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:43.772678   67607 cri.go:89] found id: ""
	I0829 20:27:43.772702   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.772709   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:43.772714   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:43.772776   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:43.806788   67607 cri.go:89] found id: ""
	I0829 20:27:43.806821   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.806831   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:43.806839   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:43.806900   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:39.350484   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:41.352279   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:43.850564   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:39.707977   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:42.207630   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:42.993571   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:44.994696   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:43.841738   67607 cri.go:89] found id: ""
	I0829 20:27:43.841759   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.841767   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:43.841772   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:43.841829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:43.878420   67607 cri.go:89] found id: ""
	I0829 20:27:43.878449   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.878459   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:43.878466   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:43.878527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:43.914307   67607 cri.go:89] found id: ""
	I0829 20:27:43.914335   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.914345   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:43.914352   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:43.914413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:43.958827   67607 cri.go:89] found id: ""
	I0829 20:27:43.958853   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.958865   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:43.958871   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:43.958935   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:43.997397   67607 cri.go:89] found id: ""
	I0829 20:27:43.997423   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.997432   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:43.997442   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:43.997455   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:44.049245   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:44.049280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:44.063473   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:44.063511   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:44.131628   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:44.131651   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:44.131666   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:44.210826   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:44.210854   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:46.754905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:46.769531   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:46.769588   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:46.805245   67607 cri.go:89] found id: ""
	I0829 20:27:46.805272   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.805280   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:46.805285   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:46.805338   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:46.843606   67607 cri.go:89] found id: ""
	I0829 20:27:46.843637   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.843646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:46.843654   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:46.843710   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:46.880300   67607 cri.go:89] found id: ""
	I0829 20:27:46.880326   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.880333   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:46.880338   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:46.880387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:46.923537   67607 cri.go:89] found id: ""
	I0829 20:27:46.923562   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.923569   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:46.923574   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:46.923620   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:46.957774   67607 cri.go:89] found id: ""
	I0829 20:27:46.957806   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.957817   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:46.957826   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:46.957887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:46.996972   67607 cri.go:89] found id: ""
	I0829 20:27:46.996995   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.997005   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:46.997013   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:46.997056   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:47.030560   67607 cri.go:89] found id: ""
	I0829 20:27:47.030588   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.030606   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:47.030612   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:47.030665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:47.068654   67607 cri.go:89] found id: ""
	I0829 20:27:47.068678   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.068686   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:47.068694   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:47.068706   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:47.082335   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:47.082367   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:47.162792   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:47.162817   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:47.162829   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:47.241456   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:47.241491   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:47.282249   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:47.282274   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:45.850673   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:47.850836   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:44.707198   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:46.707222   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.207556   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:46.995302   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.498812   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.836268   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:49.850415   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:49.850491   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:49.887816   67607 cri.go:89] found id: ""
	I0829 20:27:49.887843   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.887851   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:49.887856   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:49.887916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:49.923701   67607 cri.go:89] found id: ""
	I0829 20:27:49.923735   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.923745   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:49.923755   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:49.923818   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:49.958197   67607 cri.go:89] found id: ""
	I0829 20:27:49.958225   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.958236   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:49.958244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:49.958313   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:49.995333   67607 cri.go:89] found id: ""
	I0829 20:27:49.995361   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.995373   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:49.995380   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:49.995439   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:50.034345   67607 cri.go:89] found id: ""
	I0829 20:27:50.034375   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.034382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:50.034387   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:50.034438   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:50.070324   67607 cri.go:89] found id: ""
	I0829 20:27:50.070355   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.070365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:50.070374   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:50.070434   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:50.107301   67607 cri.go:89] found id: ""
	I0829 20:27:50.107326   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.107334   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:50.107340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:50.107400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:50.144748   67607 cri.go:89] found id: ""
	I0829 20:27:50.144778   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.144788   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:50.144800   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:50.144816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:50.183576   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:50.183606   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:50.236716   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:50.236750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:50.251589   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:50.251612   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:50.317816   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:50.317840   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:50.317855   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:52.894572   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:52.908081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:52.908149   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:52.945272   67607 cri.go:89] found id: ""
	I0829 20:27:52.945299   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.945309   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:52.945317   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:52.945377   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:52.980237   67607 cri.go:89] found id: ""
	I0829 20:27:52.980262   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.980270   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:52.980275   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:52.980325   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:53.017894   67607 cri.go:89] found id: ""
	I0829 20:27:53.017922   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.017929   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:53.017935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:53.017991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:53.052577   67607 cri.go:89] found id: ""
	I0829 20:27:53.052603   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.052611   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:53.052616   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:53.052667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:53.093414   67607 cri.go:89] found id: ""
	I0829 20:27:53.093444   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.093455   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:53.093462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:53.093523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:53.130794   67607 cri.go:89] found id: ""
	I0829 20:27:53.130825   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.130837   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:53.130845   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:53.130902   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:53.163793   67607 cri.go:89] found id: ""
	I0829 20:27:53.163819   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.163827   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:53.163832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:53.163882   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:53.204824   67607 cri.go:89] found id: ""
	I0829 20:27:53.204852   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.204862   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:53.204872   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:53.204885   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:53.243411   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:53.243440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:53.296611   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:53.296642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:53.310909   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:53.310943   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:53.385768   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:53.385790   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:53.385801   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:49.851712   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:52.350295   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:51.711115   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:54.207340   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:51.993943   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:53.996334   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.494226   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
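
The interleaved pod_ready.go lines come from the parallel StartStop tests (PIDs 66841, 66989, and 68084), each polling a metrics-server pod whose Ready condition never leaves False. A hedged sketch of that readiness poll using client-go — pod and namespace names are copied from the log; the kubeconfig path is an assumption for illustration, not minikube's actual wiring:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig location, for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := client.CoreV1().Pods("kube-system").Get(
    			context.TODO(), "metrics-server-6867b74b74-668dg", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					// The log reports this status; here it stays "False".
    					fmt.Printf("Ready=%s\n", c.Status)
    					if c.Status == corev1.ConditionTrue {
    						return
    					}
    				}
    			}
    		}
    		time.Sleep(2 * time.Second) // roughly the cadence visible in the timestamps
    	}
    }
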
	I0829 20:27:55.966801   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:55.980852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:55.980933   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:56.017682   67607 cri.go:89] found id: ""
	I0829 20:27:56.017707   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.017716   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:56.017722   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:56.017767   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:56.051556   67607 cri.go:89] found id: ""
	I0829 20:27:56.051584   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.051594   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:56.051600   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:56.051665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:56.095301   67607 cri.go:89] found id: ""
	I0829 20:27:56.095330   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.095340   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:56.095348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:56.095408   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:56.131161   67607 cri.go:89] found id: ""
	I0829 20:27:56.131195   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.131205   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:56.131213   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:56.131269   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:56.166611   67607 cri.go:89] found id: ""
	I0829 20:27:56.166637   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.166645   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:56.166651   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:56.166713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:56.202818   67607 cri.go:89] found id: ""
	I0829 20:27:56.202846   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.202856   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:56.202864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:56.202923   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:56.237855   67607 cri.go:89] found id: ""
	I0829 20:27:56.237883   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.237891   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:56.237897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:56.237955   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:56.272402   67607 cri.go:89] found id: ""
	I0829 20:27:56.272426   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.272433   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:56.272441   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:56.272452   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:56.351628   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:56.351653   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:56.389525   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:56.389559   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:56.444952   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:56.444989   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:56.459731   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:56.459759   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:56.536888   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
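
Every describe-nodes attempt in this section fails identically: kubectl cannot reach the apiserver on localhost:8443, consistent with the empty kube-apiserver listings above. A quick way to confirm that nothing is listening on that port, sketched in Go (the port is taken from the log; adjust it if your cluster serves the API elsewhere):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// A plain TCP dial fails the same way kubectl's connection does
    	// when no apiserver is bound to the port.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err) // mirrors "connection refused"
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }

A refused dial here, combined with the zero-container crictl output, points at the apiserver container never coming up rather than at a networking or kubeconfig problem.
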
	I0829 20:27:54.350358   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.350727   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.352884   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.208050   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.706897   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.993153   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:00.993544   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:59.037744   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:59.051868   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:59.051938   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:59.087436   67607 cri.go:89] found id: ""
	I0829 20:27:59.087461   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.087467   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:59.087474   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:59.087531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:59.123729   67607 cri.go:89] found id: ""
	I0829 20:27:59.123757   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.123765   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:59.123771   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:59.123825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:59.168649   67607 cri.go:89] found id: ""
	I0829 20:27:59.168682   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.168690   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:59.168696   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:59.168753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:59.209770   67607 cri.go:89] found id: ""
	I0829 20:27:59.209791   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.209803   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:59.209808   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:59.209854   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:59.248358   67607 cri.go:89] found id: ""
	I0829 20:27:59.248384   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.248392   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:59.248398   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:59.248445   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:59.281770   67607 cri.go:89] found id: ""
	I0829 20:27:59.281797   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.281805   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:59.281811   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:59.281870   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:59.317255   67607 cri.go:89] found id: ""
	I0829 20:27:59.317285   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.317295   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:59.317302   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:59.317363   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:59.354301   67607 cri.go:89] found id: ""
	I0829 20:27:59.354324   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.354332   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:59.354339   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:59.354352   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:59.438346   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:59.438382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:59.482482   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:59.482513   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:59.540926   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:59.540961   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:59.555221   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:59.555258   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:59.622114   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:02.123276   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:02.137435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:02.137502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:02.176310   67607 cri.go:89] found id: ""
	I0829 20:28:02.176340   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.176347   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:02.176355   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:02.176414   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:02.216511   67607 cri.go:89] found id: ""
	I0829 20:28:02.216555   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.216562   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:02.216574   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:02.216625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:02.260116   67607 cri.go:89] found id: ""
	I0829 20:28:02.260149   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.260158   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:02.260164   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:02.260225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:02.301550   67607 cri.go:89] found id: ""
	I0829 20:28:02.301584   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.301600   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:02.301608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:02.301692   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:02.335916   67607 cri.go:89] found id: ""
	I0829 20:28:02.335948   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.335959   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:02.335967   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:02.336033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:02.372479   67607 cri.go:89] found id: ""
	I0829 20:28:02.372507   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.372515   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:02.372522   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:02.372584   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:02.406683   67607 cri.go:89] found id: ""
	I0829 20:28:02.406713   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.406721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:02.406727   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:02.406774   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:02.443130   67607 cri.go:89] found id: ""
	I0829 20:28:02.443156   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.443164   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:02.443173   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:02.443185   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:02.485747   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:02.485777   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:02.540106   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:02.540143   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:02.556158   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:02.556188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:02.637870   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:02.637900   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:02.637915   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:00.851416   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:03.351248   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:00.707716   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:02.708204   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:02.994108   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:04.994988   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:05.220330   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:05.233932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:05.233994   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:05.269046   67607 cri.go:89] found id: ""
	I0829 20:28:05.269072   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.269081   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:05.269087   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:05.269134   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:05.303963   67607 cri.go:89] found id: ""
	I0829 20:28:05.303989   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.303999   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:05.304006   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:05.304065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:05.340943   67607 cri.go:89] found id: ""
	I0829 20:28:05.340975   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.340985   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:05.340992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:05.341061   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:05.379551   67607 cri.go:89] found id: ""
	I0829 20:28:05.379582   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.379593   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:05.379601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:05.379659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:05.414229   67607 cri.go:89] found id: ""
	I0829 20:28:05.414256   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.414267   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:05.414274   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:05.414339   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:05.450212   67607 cri.go:89] found id: ""
	I0829 20:28:05.450241   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.450251   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:05.450258   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:05.450318   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:05.487415   67607 cri.go:89] found id: ""
	I0829 20:28:05.487451   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.487463   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:05.487470   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:05.487529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:05.521347   67607 cri.go:89] found id: ""
	I0829 20:28:05.521370   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.521383   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:05.521390   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:05.521402   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:05.572317   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:05.572350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:05.585651   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:05.585680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:05.653929   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:05.653950   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:05.653969   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:05.732843   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:05.732873   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:08.281983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:08.295104   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:08.295166   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:08.328570   67607 cri.go:89] found id: ""
	I0829 20:28:08.328596   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.328605   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:08.328613   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:08.328684   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:08.363567   67607 cri.go:89] found id: ""
	I0829 20:28:08.363595   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.363605   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:08.363613   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:08.363672   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:08.399619   67607 cri.go:89] found id: ""
	I0829 20:28:08.399645   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.399653   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:08.399659   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:08.399707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:08.439252   67607 cri.go:89] found id: ""
	I0829 20:28:08.439283   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.439294   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:08.439301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:08.439357   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:08.477730   67607 cri.go:89] found id: ""
	I0829 20:28:08.477754   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.477762   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:08.477768   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:08.477834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:08.522045   67607 cri.go:89] found id: ""
	I0829 20:28:08.522066   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.522073   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:08.522079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:08.522137   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:08.560400   67607 cri.go:89] found id: ""
	I0829 20:28:08.560427   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.560434   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:08.560441   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:08.560504   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:08.599111   67607 cri.go:89] found id: ""
	I0829 20:28:08.599140   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.599150   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:08.599161   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:08.599175   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:08.681451   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:08.681487   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:08.722800   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:08.722835   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:08.779058   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:08.779089   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:08.796940   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:08.796963   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:28:05.852245   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:08.351402   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:04.708669   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:07.207124   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:07.493431   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:09.493794   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	W0829 20:28:08.868296   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:11.369316   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:11.384150   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:11.384225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:11.418452   67607 cri.go:89] found id: ""
	I0829 20:28:11.418480   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.418488   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:11.418494   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:11.418555   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:11.451359   67607 cri.go:89] found id: ""
	I0829 20:28:11.451389   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.451400   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:11.451408   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:11.451481   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:11.488408   67607 cri.go:89] found id: ""
	I0829 20:28:11.488436   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.488446   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:11.488453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:11.488510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:11.528311   67607 cri.go:89] found id: ""
	I0829 20:28:11.528340   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.528351   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:11.528359   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:11.528412   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:11.571345   67607 cri.go:89] found id: ""
	I0829 20:28:11.571372   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.571382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:11.571389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:11.571454   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:11.606812   67607 cri.go:89] found id: ""
	I0829 20:28:11.606839   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.606850   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:11.606857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:11.606918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:11.652687   67607 cri.go:89] found id: ""
	I0829 20:28:11.652710   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.652717   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:11.652722   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:11.652781   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:11.687583   67607 cri.go:89] found id: ""
	I0829 20:28:11.687628   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.687645   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:11.687655   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:11.687673   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:11.727052   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:11.727086   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:11.779116   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:11.779155   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:11.792911   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:11.792949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:11.868415   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:11.868443   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:11.868461   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:10.850225   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:13.351638   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:09.707347   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:11.709556   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.206996   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:11.994187   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.494457   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.447886   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:14.462144   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:14.462221   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:14.499160   67607 cri.go:89] found id: ""
	I0829 20:28:14.499185   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.499193   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:14.499200   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:14.499258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:14.545736   67607 cri.go:89] found id: ""
	I0829 20:28:14.545764   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.545774   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:14.545780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:14.545844   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:14.583626   67607 cri.go:89] found id: ""
	I0829 20:28:14.583664   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.583674   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:14.583682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:14.583744   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:14.619876   67607 cri.go:89] found id: ""
	I0829 20:28:14.619909   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.619917   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:14.619923   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:14.619975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:14.655750   67607 cri.go:89] found id: ""
	I0829 20:28:14.655778   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.655786   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:14.655791   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:14.655848   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:14.690759   67607 cri.go:89] found id: ""
	I0829 20:28:14.690785   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.690795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:14.690800   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:14.690850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:14.727238   67607 cri.go:89] found id: ""
	I0829 20:28:14.727269   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.727282   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:14.727289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:14.727344   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:14.765962   67607 cri.go:89] found id: ""
	I0829 20:28:14.765996   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.766006   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:14.766017   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:14.766033   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:14.835749   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:14.835779   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:14.835797   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:14.914075   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:14.914112   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:14.952684   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:14.952712   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:15.004598   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:15.004635   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:17.518949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:17.532175   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:17.532250   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:17.569943   67607 cri.go:89] found id: ""
	I0829 20:28:17.569971   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.569979   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:17.569985   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:17.570044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:17.605472   67607 cri.go:89] found id: ""
	I0829 20:28:17.605502   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.605510   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:17.605515   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:17.605566   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:17.641568   67607 cri.go:89] found id: ""
	I0829 20:28:17.641593   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.641603   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:17.641610   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:17.641669   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:17.680870   67607 cri.go:89] found id: ""
	I0829 20:28:17.680895   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.680905   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:17.680916   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:17.680981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:17.723546   67607 cri.go:89] found id: ""
	I0829 20:28:17.723576   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.723587   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:17.723594   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:17.723659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:17.757934   67607 cri.go:89] found id: ""
	I0829 20:28:17.757962   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.757973   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:17.757980   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:17.758028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:17.792641   67607 cri.go:89] found id: ""
	I0829 20:28:17.792670   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.792679   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:17.792685   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:17.792738   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:17.830776   67607 cri.go:89] found id: ""
	I0829 20:28:17.830800   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.830807   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:17.830815   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:17.830825   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:17.886331   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:17.886377   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:17.900111   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:17.900135   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:17.969538   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:17.969563   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:17.969577   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:18.050609   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:18.050649   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:15.850497   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:17.851663   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:16.707415   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:19.207313   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:16.994325   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:19.494247   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:20.590686   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:20.605066   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:20.605121   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:20.646028   67607 cri.go:89] found id: ""
	I0829 20:28:20.646058   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.646074   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:20.646082   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:20.646143   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:20.683433   67607 cri.go:89] found id: ""
	I0829 20:28:20.683469   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.683479   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:20.683487   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:20.683567   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:20.722737   67607 cri.go:89] found id: ""
	I0829 20:28:20.722765   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.722775   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:20.722782   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:20.722841   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:20.759777   67607 cri.go:89] found id: ""
	I0829 20:28:20.759800   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.759807   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:20.759812   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:20.759864   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:20.799142   67607 cri.go:89] found id: ""
	I0829 20:28:20.799164   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.799170   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:20.799176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:20.799223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:20.838331   67607 cri.go:89] found id: ""
	I0829 20:28:20.838357   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.838365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:20.838371   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:20.838427   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:20.878066   67607 cri.go:89] found id: ""
	I0829 20:28:20.878099   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.878110   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:20.878117   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:20.878175   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:20.928940   67607 cri.go:89] found id: ""
	I0829 20:28:20.928966   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.928975   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:20.928982   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:20.928993   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:20.984435   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:20.984471   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:21.005860   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:21.005900   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:21.084092   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:21.084123   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:21.084138   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:21.165971   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:21.166009   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:23.705033   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:23.718332   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:23.718390   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:23.753594   67607 cri.go:89] found id: ""
	I0829 20:28:23.753625   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.753635   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:23.753650   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:23.753715   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:23.791840   67607 cri.go:89] found id: ""
	I0829 20:28:23.791864   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.791872   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:23.791878   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:23.791930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:20.350028   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:22.350487   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:21.207839   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.707197   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:21.993965   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.994879   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.493735   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.837815   67607 cri.go:89] found id: ""
	I0829 20:28:23.837839   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.837846   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:23.837851   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:23.837908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:23.873155   67607 cri.go:89] found id: ""
	I0829 20:28:23.873184   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.873194   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:23.873201   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:23.873265   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:23.908728   67607 cri.go:89] found id: ""
	I0829 20:28:23.908757   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.908768   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:23.908774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:23.908834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:23.946286   67607 cri.go:89] found id: ""
	I0829 20:28:23.946310   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.946320   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:23.946328   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:23.946392   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:23.983078   67607 cri.go:89] found id: ""
	I0829 20:28:23.983105   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.983115   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:23.983129   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:23.983190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:24.020601   67607 cri.go:89] found id: ""
	I0829 20:28:24.020634   67607 logs.go:276] 0 containers: []
	W0829 20:28:24.020644   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:24.020654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:24.020669   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:24.034438   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:24.034463   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:24.103209   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:24.103230   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:24.103243   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:24.182977   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:24.183016   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:24.224743   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:24.224834   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
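	(Editor's note: the five "Gathering logs for ..." steps in each cycle map to fixed shell commands run over SSH. A sketch of that table as a simple Go map — the map literal is illustrative; the commands themselves are copied from the ssh_runner lines above, with the sudo prefix dropped:

	    package gather

	    // logCommands pairs each gather step with the shell command the
	    // cycles above run (all executed via sudo on the guest).
	    var logCommands = map[string]string{
	        "kubelet":          "journalctl -u kubelet -n 400",
	        "dmesg":            "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	        "describe nodes":   "/var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	        "CRI-O":            "journalctl -u crio -n 400",
	        "container status": "`which crictl || echo crictl` ps -a || docker ps -a",
	    }
	)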
	I0829 20:28:26.781507   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:26.794301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:26.794387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:26.827218   67607 cri.go:89] found id: ""
	I0829 20:28:26.827243   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.827250   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:26.827257   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:26.827303   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:26.862643   67607 cri.go:89] found id: ""
	I0829 20:28:26.862673   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.862685   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:26.862693   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:26.862743   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:26.898127   67607 cri.go:89] found id: ""
	I0829 20:28:26.898159   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.898169   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:26.898177   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:26.898237   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:26.932119   67607 cri.go:89] found id: ""
	I0829 20:28:26.932146   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.932167   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:26.932174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:26.932241   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:26.966380   67607 cri.go:89] found id: ""
	I0829 20:28:26.966413   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.966421   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:26.966427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:26.966478   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:27.004350   67607 cri.go:89] found id: ""
	I0829 20:28:27.004372   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.004379   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:27.004386   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:27.004436   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:27.041171   67607 cri.go:89] found id: ""
	I0829 20:28:27.041199   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.041206   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:27.041212   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:27.041257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:27.073993   67607 cri.go:89] found id: ""
	I0829 20:28:27.074031   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.074041   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:27.074053   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:27.074066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:27.148169   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:27.148199   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:27.148214   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:27.227174   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:27.227212   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:27.267180   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:27.267230   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:27.319034   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:27.319066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:24.350754   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.850582   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.207974   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:28.707820   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:28.494090   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:30.994157   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
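	(Editor's note: interleaved with the 67607 gathering loop, three other test processes — 66841, 66989, 68084 — are polling their metrics-server pods for the Ready condition, producing the pod_ready.go:103 lines above. A minimal client-go sketch of that check, assuming an already-built clientset; the function names and the 2s interval, read off the timestamps above, are illustrative rather than minikube's actual pod_ready.go:

	    package podwait

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // podReady reports whether the pod's Ready condition is True,
	    // mirroring the `has status "Ready":"False"` lines above.
	    func podReady(cs kubernetes.Interface, ns, name string) (bool, error) {
	        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    }

	    // waitReady polls until the pod is Ready or the timeout passes.
	    func waitReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if ok, _ := podReady(cs, ns, name); ok {
	                return nil
	            }
	            time.Sleep(2 * time.Second) // the log shows roughly 2s between probes
	        }
	        return fmt.Errorf("pod %s/%s never became Ready", ns, name)
	    }
	)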
	I0829 20:28:29.833497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:29.846883   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:29.846951   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:29.884133   67607 cri.go:89] found id: ""
	I0829 20:28:29.884163   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.884175   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:29.884182   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:29.884247   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:29.917594   67607 cri.go:89] found id: ""
	I0829 20:28:29.917618   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.917628   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:29.917636   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:29.917696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:29.952537   67607 cri.go:89] found id: ""
	I0829 20:28:29.952568   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.952576   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:29.952582   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:29.952630   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:29.988410   67607 cri.go:89] found id: ""
	I0829 20:28:29.988441   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.988448   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:29.988454   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:29.988511   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:30.026761   67607 cri.go:89] found id: ""
	I0829 20:28:30.026788   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.026796   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:30.026802   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:30.026861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:30.063010   67607 cri.go:89] found id: ""
	I0829 20:28:30.063037   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.063046   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:30.063054   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:30.063109   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:30.098067   67607 cri.go:89] found id: ""
	I0829 20:28:30.098093   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.098101   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:30.098107   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:30.098161   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:30.132887   67607 cri.go:89] found id: ""
	I0829 20:28:30.132914   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.132921   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:30.132928   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:30.132940   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:30.184955   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:30.184990   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:30.198966   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:30.199004   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:30.268950   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:30.268977   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:30.268991   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:30.354222   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:30.354260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:32.896554   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:32.911188   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:32.911271   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:32.945726   67607 cri.go:89] found id: ""
	I0829 20:28:32.945750   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.945758   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:32.945773   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:32.945829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:32.980234   67607 cri.go:89] found id: ""
	I0829 20:28:32.980267   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.980275   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:32.980281   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:32.980329   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:33.019031   67607 cri.go:89] found id: ""
	I0829 20:28:33.019063   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.019071   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:33.019076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:33.019126   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:33.056290   67607 cri.go:89] found id: ""
	I0829 20:28:33.056314   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.056322   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:33.056327   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:33.056391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:33.090038   67607 cri.go:89] found id: ""
	I0829 20:28:33.090068   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.090078   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:33.090086   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:33.090152   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:33.125742   67607 cri.go:89] found id: ""
	I0829 20:28:33.125774   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.125782   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:33.125787   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:33.125849   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:33.159019   67607 cri.go:89] found id: ""
	I0829 20:28:33.159047   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.159058   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:33.159065   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:33.159125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:33.197900   67607 cri.go:89] found id: ""
	I0829 20:28:33.197925   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.197933   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:33.197941   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:33.197955   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:33.250010   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:33.250040   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:33.263348   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:33.263374   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:33.342037   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:33.342065   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:33.342082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:33.423324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:33.423361   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:29.350275   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:31.350994   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:33.850866   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:30.713472   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:33.207271   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:32.995169   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.493980   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.963734   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:35.978648   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:35.978713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:36.015326   67607 cri.go:89] found id: ""
	I0829 20:28:36.015350   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.015358   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:36.015364   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:36.015411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:36.050840   67607 cri.go:89] found id: ""
	I0829 20:28:36.050869   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.050879   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:36.050886   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:36.050947   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:36.084048   67607 cri.go:89] found id: ""
	I0829 20:28:36.084076   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.084084   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:36.084090   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:36.084138   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:36.118655   67607 cri.go:89] found id: ""
	I0829 20:28:36.118682   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.118693   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:36.118702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:36.118762   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:36.153879   67607 cri.go:89] found id: ""
	I0829 20:28:36.153908   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.153918   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:36.153926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:36.153988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:36.199834   67607 cri.go:89] found id: ""
	I0829 20:28:36.199858   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.199866   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:36.199872   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:36.199927   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:36.238098   67607 cri.go:89] found id: ""
	I0829 20:28:36.238129   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.238139   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:36.238146   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:36.238208   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:36.272091   67607 cri.go:89] found id: ""
	I0829 20:28:36.272124   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.272135   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:36.272146   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:36.272162   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:36.338478   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:36.338498   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:36.338510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:36.418637   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:36.418671   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:36.458167   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:36.458194   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:36.508592   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:36.508630   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:36.351066   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:38.849684   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.706813   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:37.708058   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:38.003178   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:40.493065   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:39.022668   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:39.035897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:39.035971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:39.071155   67607 cri.go:89] found id: ""
	I0829 20:28:39.071185   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.071196   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:39.071203   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:39.071258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:39.104135   67607 cri.go:89] found id: ""
	I0829 20:28:39.104177   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.104188   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:39.104206   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:39.104266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:39.138301   67607 cri.go:89] found id: ""
	I0829 20:28:39.138329   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.138339   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:39.138346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:39.138404   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:39.172674   67607 cri.go:89] found id: ""
	I0829 20:28:39.172700   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.172708   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:39.172719   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:39.172779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:39.209810   67607 cri.go:89] found id: ""
	I0829 20:28:39.209836   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.209845   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:39.209852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:39.209915   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:39.248692   67607 cri.go:89] found id: ""
	I0829 20:28:39.248715   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.248722   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:39.248728   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:39.248798   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:39.284303   67607 cri.go:89] found id: ""
	I0829 20:28:39.284333   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.284343   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:39.284351   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:39.284401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:39.321346   67607 cri.go:89] found id: ""
	I0829 20:28:39.321375   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.321386   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:39.321396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:39.321410   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:39.334678   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:39.334710   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:39.421992   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:39.422014   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:39.422027   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:39.503250   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:39.503280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:39.540623   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:39.540654   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.092131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:42.105440   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:42.105498   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:42.140994   67607 cri.go:89] found id: ""
	I0829 20:28:42.141024   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.141034   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:42.141042   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:42.141102   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:42.175182   67607 cri.go:89] found id: ""
	I0829 20:28:42.175217   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.175228   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:42.175248   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:42.175319   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:42.209251   67607 cri.go:89] found id: ""
	I0829 20:28:42.209281   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.209291   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:42.209299   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:42.209362   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:42.247944   67607 cri.go:89] found id: ""
	I0829 20:28:42.247970   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.247977   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:42.247983   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:42.248028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:42.285613   67607 cri.go:89] found id: ""
	I0829 20:28:42.285644   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.285651   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:42.285657   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:42.285722   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:42.319826   67607 cri.go:89] found id: ""
	I0829 20:28:42.319851   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.319858   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:42.319864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:42.319928   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:42.357150   67607 cri.go:89] found id: ""
	I0829 20:28:42.357173   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.357182   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:42.357189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:42.357243   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:42.392150   67607 cri.go:89] found id: ""
	I0829 20:28:42.392170   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.392178   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:42.392185   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:42.392197   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:42.469240   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:42.469271   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:42.469286   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:42.549165   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:42.549198   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:42.591900   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:42.591930   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.642593   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:42.642625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:40.851544   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:43.350420   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:39.708341   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:42.206888   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:44.207934   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:42.494791   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:44.992992   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:45.157092   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:45.170832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:45.170916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:45.207210   67607 cri.go:89] found id: ""
	I0829 20:28:45.207235   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.207244   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:45.207251   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:45.207308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:45.245321   67607 cri.go:89] found id: ""
	I0829 20:28:45.245352   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.245362   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:45.245379   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:45.245448   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:45.280326   67607 cri.go:89] found id: ""
	I0829 20:28:45.280369   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.280381   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:45.280389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:45.280451   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:45.318294   67607 cri.go:89] found id: ""
	I0829 20:28:45.318322   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.318333   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:45.318340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:45.318411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:45.352903   67607 cri.go:89] found id: ""
	I0829 20:28:45.352925   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.352932   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:45.352938   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:45.352990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:45.389251   67607 cri.go:89] found id: ""
	I0829 20:28:45.389273   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.389280   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:45.389286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:45.389340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:45.424348   67607 cri.go:89] found id: ""
	I0829 20:28:45.424385   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.424397   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:45.424404   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:45.424453   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:45.459058   67607 cri.go:89] found id: ""
	I0829 20:28:45.459087   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.459098   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:45.459109   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:45.459124   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:45.510386   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:45.510423   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:45.524896   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:45.524923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:45.593987   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:45.594064   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:45.594082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:45.668738   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:45.668771   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:48.206497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:48.219625   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:48.219696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:48.254936   67607 cri.go:89] found id: ""
	I0829 20:28:48.254959   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.254966   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:48.254971   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:48.255018   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:48.290826   67607 cri.go:89] found id: ""
	I0829 20:28:48.290851   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.290859   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:48.290864   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:48.290910   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:48.327508   67607 cri.go:89] found id: ""
	I0829 20:28:48.327533   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.327540   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:48.327546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:48.327593   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:48.364492   67607 cri.go:89] found id: ""
	I0829 20:28:48.364517   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.364525   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:48.364530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:48.364580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:48.400035   67607 cri.go:89] found id: ""
	I0829 20:28:48.400062   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.400072   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:48.400079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:48.400144   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:48.433999   67607 cri.go:89] found id: ""
	I0829 20:28:48.434026   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.434035   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:48.434043   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:48.434104   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:48.468841   67607 cri.go:89] found id: ""
	I0829 20:28:48.468873   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.468889   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:48.468903   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:48.468971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:48.506557   67607 cri.go:89] found id: ""
	I0829 20:28:48.506589   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.506598   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:48.506609   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:48.506624   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:48.577023   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:48.577044   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:48.577056   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:48.654372   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:48.654407   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:48.691125   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:48.691152   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:48.746383   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:48.746414   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:45.350581   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:47.351437   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:46.705575   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:48.707018   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:46.993532   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:48.994284   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.494177   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.260591   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:51.273911   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:51.273974   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:51.311517   67607 cri.go:89] found id: ""
	I0829 20:28:51.311545   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.311553   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:51.311567   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:51.311616   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:51.348220   67607 cri.go:89] found id: ""
	I0829 20:28:51.348247   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.348256   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:51.348264   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:51.348321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:51.383560   67607 cri.go:89] found id: ""
	I0829 20:28:51.383599   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.383611   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:51.383619   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:51.383680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:51.419241   67607 cri.go:89] found id: ""
	I0829 20:28:51.419268   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.419278   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:51.419286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:51.419343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:51.453954   67607 cri.go:89] found id: ""
	I0829 20:28:51.453979   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.453986   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:51.453992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:51.454047   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:51.489457   67607 cri.go:89] found id: ""
	I0829 20:28:51.489480   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.489488   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:51.489493   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:51.489544   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:51.524072   67607 cri.go:89] found id: ""
	I0829 20:28:51.524100   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.524107   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:51.524113   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:51.524160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:51.561238   67607 cri.go:89] found id: ""
	I0829 20:28:51.561263   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.561271   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:51.561279   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:51.561290   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:51.615422   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:51.615462   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:51.632180   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:51.632216   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:51.704335   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:51.704363   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:51.704378   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:51.794219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:51.794260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
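	(Editor's note: stepping back, every 67607 cycle in this section follows the same cadence — pgrep for a running kube-apiserver, list each component through the CRI, gather the kubelet/dmesg/describe-nodes/CRI-O/container-status logs, then retry; the timestamps show a period of roughly three seconds. A compact sketch of that outer loop, with hypothetical helper names standing in for the probe and gather steps seen above:

	    package gather

	    import "time"

	    // waitForAPIServer keeps probing until the apiserver process appears
	    // or the deadline passes. probe stands in for
	    // "sudo pgrep -xnf kube-apiserver.*minikube.*"; gatherLogs stands in
	    // for the five-step log collection seen in each cycle. Both helpers
	    // are assumptions for illustration.
	    func waitForAPIServer(probe func() bool, gatherLogs func(), timeout time.Duration) bool {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if probe() {
	                return true
	            }
	            gatherLogs()                // diagnostics for the eventual failure report
	            time.Sleep(3 * time.Second) // matches the ~3s cycle spacing in the log
	        }
	        return false
	    }
	)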
	I0829 20:28:49.852140   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:52.351142   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.205903   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:53.207651   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:53.495412   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:55.993489   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
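The interleaved pod_ready lines come from three other test processes (PIDs 66841, 66989, 68084), each polling its own metrics-server pod and finding the Ready condition still False. A hedged way to read the same condition directly with kubectl (pod and namespace names are from the log; the context placeholder is an assumption):

    # prints "False" while the pod is unready; <context> stands for the test profile, which this log does not name
    kubectl --context <context> -n kube-system get pod metrics-server-6867b74b74-668dg \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'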
	I0829 20:28:54.342556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:54.356325   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:54.356400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:54.390928   67607 cri.go:89] found id: ""
	I0829 20:28:54.390952   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.390959   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:54.390965   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:54.391011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:54.426970   67607 cri.go:89] found id: ""
	I0829 20:28:54.427002   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.427013   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:54.427020   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:54.427074   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:54.464121   67607 cri.go:89] found id: ""
	I0829 20:28:54.464155   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.464166   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:54.464174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:54.464236   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:54.499790   67607 cri.go:89] found id: ""
	I0829 20:28:54.499816   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.499827   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:54.499840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:54.499889   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:54.537212   67607 cri.go:89] found id: ""
	I0829 20:28:54.537239   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.537249   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:54.537256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:54.537314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:54.575370   67607 cri.go:89] found id: ""
	I0829 20:28:54.575399   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.575410   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:54.575417   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:54.575469   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:54.608403   67607 cri.go:89] found id: ""
	I0829 20:28:54.608432   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.608443   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:54.608453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:54.608514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:54.645259   67607 cri.go:89] found id: ""
	I0829 20:28:54.645285   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.645292   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:54.645300   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:54.645311   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:54.697022   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:54.697063   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:54.712873   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:54.712914   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:54.814253   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:54.814278   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:54.814295   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:54.896473   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:54.896507   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
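Every describe-nodes attempt fails identically: the bundled v1.20.0 kubectl cannot reach the apiserver on localhost:8443, consistent with the empty kube-apiserver listings in the same cycles. A quick check that the port really is closed, from inside the node (a sketch using standard ss and curl; their presence in the node image is an assumption):

    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"   # -k: the apiserver cert would not be trusted here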
	I0829 20:28:57.441648   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:57.455245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:57.455321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:57.495365   67607 cri.go:89] found id: ""
	I0829 20:28:57.495397   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.495405   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:57.495411   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:57.495472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:57.529555   67607 cri.go:89] found id: ""
	I0829 20:28:57.529582   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.529590   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:57.529597   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:57.529667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:57.564168   67607 cri.go:89] found id: ""
	I0829 20:28:57.564196   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.564208   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:57.564215   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:57.564277   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:57.602057   67607 cri.go:89] found id: ""
	I0829 20:28:57.602089   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.602100   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:57.602108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:57.602194   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:57.638195   67607 cri.go:89] found id: ""
	I0829 20:28:57.638226   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.638235   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:57.638244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:57.638307   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:57.674556   67607 cri.go:89] found id: ""
	I0829 20:28:57.674605   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.674615   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:57.674623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:57.674680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:57.709256   67607 cri.go:89] found id: ""
	I0829 20:28:57.709282   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.709291   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:57.709298   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:57.709358   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:57.743629   67607 cri.go:89] found id: ""
	I0829 20:28:57.743652   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.743659   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:57.743668   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:57.743679   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:57.789067   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:57.789098   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:57.843372   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:57.843403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:57.858630   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:57.858661   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:57.927776   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:57.927798   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:57.927814   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:54.850906   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:56.851300   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:55.208638   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:57.707756   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:57.994287   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.493343   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.508180   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:00.521451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:00.521529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:00.557912   67607 cri.go:89] found id: ""
	I0829 20:29:00.557938   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.557945   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:00.557951   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:00.557997   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:00.595186   67607 cri.go:89] found id: ""
	I0829 20:29:00.595215   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.595226   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:00.595237   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:00.595299   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:00.631553   67607 cri.go:89] found id: ""
	I0829 20:29:00.631581   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.631592   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:00.631600   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:00.631660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:00.666502   67607 cri.go:89] found id: ""
	I0829 20:29:00.666525   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.666551   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:00.666560   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:00.666621   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:00.700797   67607 cri.go:89] found id: ""
	I0829 20:29:00.700824   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.700835   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:00.700842   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:00.700908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:00.739957   67607 cri.go:89] found id: ""
	I0829 20:29:00.739976   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.739989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:00.739994   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:00.740035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:00.800704   67607 cri.go:89] found id: ""
	I0829 20:29:00.800740   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.800750   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:00.800757   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:00.800820   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:00.837678   67607 cri.go:89] found id: ""
	I0829 20:29:00.837704   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.837712   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:00.837720   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:00.837731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:00.888359   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:00.888391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:00.903074   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:00.903103   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:00.964865   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:00.964885   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:00.964898   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:01.049351   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:01.049387   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:03.589829   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:03.603120   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:03.603192   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:03.637647   67607 cri.go:89] found id: ""
	I0829 20:29:03.637672   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.637678   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:03.637684   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:03.637732   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:03.673807   67607 cri.go:89] found id: ""
	I0829 20:29:03.673842   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.673852   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:03.673860   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:03.673918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:03.709490   67607 cri.go:89] found id: ""
	I0829 20:29:03.709516   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.709527   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:03.709533   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:03.709595   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:03.751662   67607 cri.go:89] found id: ""
	I0829 20:29:03.751688   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.751696   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:03.751702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:03.751751   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:03.787861   67607 cri.go:89] found id: ""
	I0829 20:29:03.787896   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.787908   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:03.787917   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:03.787977   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:59.350888   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:01.850615   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:03.851438   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.207912   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:02.707309   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:02.493506   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:04.494305   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:03.824383   67607 cri.go:89] found id: ""
	I0829 20:29:03.824413   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.824431   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:03.824438   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:03.824499   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:03.863904   67607 cri.go:89] found id: ""
	I0829 20:29:03.863929   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.863937   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:03.863943   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:03.863990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:03.902336   67607 cri.go:89] found id: ""
	I0829 20:29:03.902360   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.902368   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:03.902375   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:03.902386   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:03.951468   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:03.951499   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:03.965789   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:03.965816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:04.035096   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:04.035119   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:04.035193   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:04.115842   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:04.115876   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
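Each iteration opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: with procps pgrep, -f matches against the full command line, -x requires the whole line to match the pattern exactly, and -n keeps only the newest match. Run standalone it behaves like this (flags copied from the log; the quoting is an assumption, since ssh_runner passes the pattern as a single argument):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # no output and exit status 1 when no matching process exists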
	I0829 20:29:06.662652   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:06.676508   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:06.676583   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:06.713058   67607 cri.go:89] found id: ""
	I0829 20:29:06.713084   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.713093   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:06.713101   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:06.713171   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:06.747513   67607 cri.go:89] found id: ""
	I0829 20:29:06.747544   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.747552   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:06.747557   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:06.747617   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:06.782662   67607 cri.go:89] found id: ""
	I0829 20:29:06.782689   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.782695   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:06.782701   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:06.782758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:06.818472   67607 cri.go:89] found id: ""
	I0829 20:29:06.818500   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.818510   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:06.818516   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:06.818586   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:06.852928   67607 cri.go:89] found id: ""
	I0829 20:29:06.852954   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.852964   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:06.852974   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:06.853032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:06.893859   67607 cri.go:89] found id: ""
	I0829 20:29:06.893889   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.893899   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:06.893907   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:06.893969   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:06.931552   67607 cri.go:89] found id: ""
	I0829 20:29:06.931584   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.931594   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:06.931601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:06.931662   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:06.967210   67607 cri.go:89] found id: ""
	I0829 20:29:06.967243   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.967254   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:06.967266   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:06.967279   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:07.020595   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:07.020631   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:07.034738   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:07.034764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:07.103726   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:07.103747   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:07.103760   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:07.184727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:07.184764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:06.350610   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:08.351571   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:05.207055   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:07.207650   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:06.994653   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.493932   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.746639   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:09.761228   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:09.761308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:09.802071   67607 cri.go:89] found id: ""
	I0829 20:29:09.802102   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.802113   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:09.802122   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:09.802180   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:09.837352   67607 cri.go:89] found id: ""
	I0829 20:29:09.837385   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.837395   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:09.837402   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:09.837464   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:09.874951   67607 cri.go:89] found id: ""
	I0829 20:29:09.874980   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.874992   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:09.874999   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:09.875055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:09.909660   67607 cri.go:89] found id: ""
	I0829 20:29:09.909696   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.909706   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:09.909713   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:09.909777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:09.949727   67607 cri.go:89] found id: ""
	I0829 20:29:09.949751   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.949759   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:09.949765   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:09.949825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:09.984576   67607 cri.go:89] found id: ""
	I0829 20:29:09.984609   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.984617   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:09.984623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:09.984675   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:10.022499   67607 cri.go:89] found id: ""
	I0829 20:29:10.022523   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.022530   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:10.022553   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:10.022624   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:10.064308   67607 cri.go:89] found id: ""
	I0829 20:29:10.064346   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.064356   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:10.064367   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:10.064382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:10.113505   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:10.113537   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:10.127614   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:10.127640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:10.200558   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:10.200579   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:10.200592   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:10.292984   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:10.293020   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:12.833100   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:12.846645   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:12.846712   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:12.885396   67607 cri.go:89] found id: ""
	I0829 20:29:12.885423   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.885430   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:12.885436   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:12.885486   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:12.922556   67607 cri.go:89] found id: ""
	I0829 20:29:12.922584   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.922595   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:12.922602   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:12.922688   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:12.965294   67607 cri.go:89] found id: ""
	I0829 20:29:12.965324   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.965335   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:12.965342   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:12.965401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:13.022911   67607 cri.go:89] found id: ""
	I0829 20:29:13.022934   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.022942   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:13.022948   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:13.023009   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:13.077009   67607 cri.go:89] found id: ""
	I0829 20:29:13.077035   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.077043   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:13.077048   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:13.077095   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:13.114202   67607 cri.go:89] found id: ""
	I0829 20:29:13.114233   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.114243   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:13.114251   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:13.114315   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:13.147025   67607 cri.go:89] found id: ""
	I0829 20:29:13.147049   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.147057   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:13.147063   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:13.147110   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:13.183112   67607 cri.go:89] found id: ""
	I0829 20:29:13.183138   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.183148   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:13.183159   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:13.183173   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:13.240558   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:13.240595   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:13.255563   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:13.255589   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:13.322826   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:13.322846   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:13.322857   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:13.399330   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:13.399365   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:10.850650   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:12.852188   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.706791   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:11.707397   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:13.708663   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:11.993311   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:13.994310   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:16.494854   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:15.938467   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:15.951742   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:15.951812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:15.987492   67607 cri.go:89] found id: ""
	I0829 20:29:15.987517   67607 logs.go:276] 0 containers: []
	W0829 20:29:15.987524   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:15.987530   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:15.987575   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:16.024187   67607 cri.go:89] found id: ""
	I0829 20:29:16.024214   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.024223   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:16.024231   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:16.024291   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:16.058141   67607 cri.go:89] found id: ""
	I0829 20:29:16.058164   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.058171   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:16.058176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:16.058225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:16.092390   67607 cri.go:89] found id: ""
	I0829 20:29:16.092414   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.092421   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:16.092427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:16.092472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:16.130178   67607 cri.go:89] found id: ""
	I0829 20:29:16.130209   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.130219   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:16.130227   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:16.130289   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:16.163867   67607 cri.go:89] found id: ""
	I0829 20:29:16.163900   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.163907   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:16.163913   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:16.163964   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:16.197764   67607 cri.go:89] found id: ""
	I0829 20:29:16.197792   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.197798   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:16.197804   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:16.197850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:16.233357   67607 cri.go:89] found id: ""
	I0829 20:29:16.233383   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.233393   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:16.233403   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:16.233418   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:16.285154   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:16.285188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:16.299057   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:16.299085   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:16.377021   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:16.377041   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:16.377062   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:16.457750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:16.457796   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:15.350415   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:17.850927   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:16.206841   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.207273   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.993478   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:21.493806   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.999133   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:19.016143   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:19.016223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:19.049225   67607 cri.go:89] found id: ""
	I0829 20:29:19.049252   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.049259   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:19.049265   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:19.049317   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:19.085237   67607 cri.go:89] found id: ""
	I0829 20:29:19.085297   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.085314   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:19.085325   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:19.085389   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:19.123476   67607 cri.go:89] found id: ""
	I0829 20:29:19.123501   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.123509   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:19.123514   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:19.123571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:19.159958   67607 cri.go:89] found id: ""
	I0829 20:29:19.159984   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.159993   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:19.160001   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:19.160055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:19.192385   67607 cri.go:89] found id: ""
	I0829 20:29:19.192410   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.192418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:19.192423   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:19.192483   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:19.230781   67607 cri.go:89] found id: ""
	I0829 20:29:19.230804   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.230811   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:19.230816   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:19.230868   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:19.264925   67607 cri.go:89] found id: ""
	I0829 20:29:19.264954   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.264964   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:19.264972   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:19.265032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:19.302461   67607 cri.go:89] found id: ""
	I0829 20:29:19.302484   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.302491   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:19.302499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:19.302510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:19.384799   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:19.384833   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:19.425281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:19.425313   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:19.477380   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:19.477412   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:19.492315   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:19.492350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:19.563428   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:22.064407   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:22.078609   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:22.078670   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:22.112630   67607 cri.go:89] found id: ""
	I0829 20:29:22.112662   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.112672   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:22.112680   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:22.112741   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:22.149078   67607 cri.go:89] found id: ""
	I0829 20:29:22.149108   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.149117   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:22.149124   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:22.149186   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:22.184568   67607 cri.go:89] found id: ""
	I0829 20:29:22.184596   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.184605   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:22.184613   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:22.184682   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:22.220881   67607 cri.go:89] found id: ""
	I0829 20:29:22.220908   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.220919   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:22.220926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:22.220987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:22.256280   67607 cri.go:89] found id: ""
	I0829 20:29:22.256305   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.256314   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:22.256321   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:22.256386   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:22.294546   67607 cri.go:89] found id: ""
	I0829 20:29:22.294580   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.294590   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:22.294597   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:22.294660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:22.332178   67607 cri.go:89] found id: ""
	I0829 20:29:22.332207   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.332215   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:22.332220   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:22.332266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:22.368283   67607 cri.go:89] found id: ""
	I0829 20:29:22.368309   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.368317   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:22.368325   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:22.368336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:22.421800   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:22.421836   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:22.435539   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:22.435565   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:22.504402   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:22.504427   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:22.504441   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:22.588293   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:22.588326   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:19.851801   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:22.351929   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:20.207342   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:22.707546   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:23.493994   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:25.993337   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:25.130766   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:25.144479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:25.144554   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:25.181606   67607 cri.go:89] found id: ""
	I0829 20:29:25.181636   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.181643   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:25.181649   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:25.181697   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:25.220291   67607 cri.go:89] found id: ""
	I0829 20:29:25.220320   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.220328   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:25.220335   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:25.220447   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:25.260947   67607 cri.go:89] found id: ""
	I0829 20:29:25.260975   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.260983   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:25.260988   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:25.261035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:25.298200   67607 cri.go:89] found id: ""
	I0829 20:29:25.298232   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.298243   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:25.298256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:25.298314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:25.333128   67607 cri.go:89] found id: ""
	I0829 20:29:25.333162   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.333174   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:25.333181   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:25.333232   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:25.368951   67607 cri.go:89] found id: ""
	I0829 20:29:25.368979   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.368989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:25.368997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:25.369052   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:25.403687   67607 cri.go:89] found id: ""
	I0829 20:29:25.403715   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.403726   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:25.403734   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:25.403799   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:25.442338   67607 cri.go:89] found id: ""
	I0829 20:29:25.442365   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.442372   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:25.442381   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:25.442395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:25.456313   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:25.456335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:25.528709   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:25.528730   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:25.528744   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:25.609976   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:25.610011   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:25.650044   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:25.650071   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:28.202683   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:28.216971   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:28.217046   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:28.256297   67607 cri.go:89] found id: ""
	I0829 20:29:28.256321   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.256329   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:28.256335   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:28.256379   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:28.289396   67607 cri.go:89] found id: ""
	I0829 20:29:28.289420   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.289427   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:28.289433   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:28.289484   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:28.323589   67607 cri.go:89] found id: ""
	I0829 20:29:28.323616   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.323623   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:28.323630   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:28.323676   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:28.362423   67607 cri.go:89] found id: ""
	I0829 20:29:28.362453   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.362463   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:28.362471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:28.362531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:28.396967   67607 cri.go:89] found id: ""
	I0829 20:29:28.396990   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.396998   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:28.397003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:28.397053   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:28.430714   67607 cri.go:89] found id: ""
	I0829 20:29:28.430744   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.430755   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:28.430762   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:28.430831   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:28.468668   67607 cri.go:89] found id: ""
	I0829 20:29:28.468696   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.468707   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:28.468714   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:28.468777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:28.506678   67607 cri.go:89] found id: ""
	I0829 20:29:28.506705   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.506716   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:28.506727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:28.506741   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:28.545259   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:28.545287   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:28.598249   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:28.598285   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:28.612385   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:28.612429   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:28.685765   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:28.685792   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:28.685806   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:24.851688   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.350456   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:24.708523   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.206094   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:29.207859   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.995492   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:30.494340   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.270074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:31.284357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:31.284417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:31.319530   67607 cri.go:89] found id: ""
	I0829 20:29:31.319558   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.319566   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:31.319571   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:31.319640   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:31.356826   67607 cri.go:89] found id: ""
	I0829 20:29:31.356856   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.356867   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:31.356880   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:31.356934   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:31.390137   67607 cri.go:89] found id: ""
	I0829 20:29:31.390160   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.390167   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:31.390173   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:31.390219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:31.424939   67607 cri.go:89] found id: ""
	I0829 20:29:31.424972   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.424989   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:31.424997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:31.425054   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:31.460896   67607 cri.go:89] found id: ""
	I0829 20:29:31.460921   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.460928   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:31.460935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:31.460985   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:31.498933   67607 cri.go:89] found id: ""
	I0829 20:29:31.498957   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.498967   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:31.498975   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:31.499044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:31.534953   67607 cri.go:89] found id: ""
	I0829 20:29:31.534985   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.534996   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:31.535003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:31.535065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:31.576248   67607 cri.go:89] found id: ""
	I0829 20:29:31.576273   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.576281   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:31.576291   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:31.576307   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:31.628157   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:31.628196   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:31.641564   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:31.641591   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:31.719949   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:31.719973   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:31.719996   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:31.795682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:31.795716   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:29.351248   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.351424   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:33.851397   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.707552   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.207468   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:32.993432   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.993634   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.333468   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:34.347294   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:34.347370   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:34.384885   67607 cri.go:89] found id: ""
	I0829 20:29:34.384910   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.384921   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:34.384928   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:34.384991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:34.422309   67607 cri.go:89] found id: ""
	I0829 20:29:34.422341   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.422351   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:34.422358   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:34.422417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:34.459800   67607 cri.go:89] found id: ""
	I0829 20:29:34.459826   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.459834   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:34.459840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:34.459905   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:34.495600   67607 cri.go:89] found id: ""
	I0829 20:29:34.495624   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.495633   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:34.495647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:34.495708   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:34.531749   67607 cri.go:89] found id: ""
	I0829 20:29:34.531777   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.531788   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:34.531795   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:34.531856   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:34.571057   67607 cri.go:89] found id: ""
	I0829 20:29:34.571088   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.571098   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:34.571105   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:34.571168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:34.609645   67607 cri.go:89] found id: ""
	I0829 20:29:34.609676   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.609687   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:34.609695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:34.609753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:34.647199   67607 cri.go:89] found id: ""
	I0829 20:29:34.647233   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.647244   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:34.647255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:34.647269   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:34.661390   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:34.661420   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:34.737590   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:34.737613   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:34.737625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:34.820682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:34.820721   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:34.861697   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:34.861723   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:37.412384   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:37.426081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:37.426162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:37.461302   67607 cri.go:89] found id: ""
	I0829 20:29:37.461332   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.461342   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:37.461349   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:37.461416   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:37.500869   67607 cri.go:89] found id: ""
	I0829 20:29:37.500898   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.500908   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:37.500915   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:37.500970   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:37.536908   67607 cri.go:89] found id: ""
	I0829 20:29:37.536932   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.536942   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:37.536949   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:37.537010   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:37.571939   67607 cri.go:89] found id: ""
	I0829 20:29:37.571969   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.571979   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:37.571987   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:37.572048   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:37.607834   67607 cri.go:89] found id: ""
	I0829 20:29:37.607864   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.607883   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:37.607891   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:37.607952   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:37.643932   67607 cri.go:89] found id: ""
	I0829 20:29:37.643963   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.643971   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:37.643978   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:37.644037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:37.678148   67607 cri.go:89] found id: ""
	I0829 20:29:37.678177   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.678188   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:37.678195   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:37.678257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:37.713170   67607 cri.go:89] found id: ""
	I0829 20:29:37.713195   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.713209   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:37.713219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:37.713233   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:37.752538   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:37.752567   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:37.802888   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:37.802923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:37.816546   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:37.816585   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:37.891647   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:37.891667   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:37.891680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:35.851668   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:38.351371   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:36.208220   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:38.708523   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:36.994441   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:39.493291   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:40.472354   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:40.486186   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:40.486252   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:40.520935   67607 cri.go:89] found id: ""
	I0829 20:29:40.520963   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.520971   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:40.520977   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:40.521037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:40.561399   67607 cri.go:89] found id: ""
	I0829 20:29:40.561428   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.561440   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:40.561447   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:40.561514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:40.601821   67607 cri.go:89] found id: ""
	I0829 20:29:40.601846   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.601855   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:40.601862   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:40.601918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:40.636429   67607 cri.go:89] found id: ""
	I0829 20:29:40.636454   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.636462   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:40.636468   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:40.636525   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:40.670781   67607 cri.go:89] found id: ""
	I0829 20:29:40.670816   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.670828   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:40.670836   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:40.670912   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:40.706635   67607 cri.go:89] found id: ""
	I0829 20:29:40.706663   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.706674   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:40.706682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:40.706739   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:40.741657   67607 cri.go:89] found id: ""
	I0829 20:29:40.741687   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.741695   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:40.741707   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:40.741770   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:40.777028   67607 cri.go:89] found id: ""
	I0829 20:29:40.777057   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.777066   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:40.777077   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:40.777093   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:40.829387   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:40.829424   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:40.843928   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:40.843956   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:40.917965   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:40.917992   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:40.918008   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:41.001880   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:41.001925   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:43.549007   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:43.563446   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:43.563502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:43.598503   67607 cri.go:89] found id: ""
	I0829 20:29:43.598548   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.598557   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:43.598564   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:43.598614   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:43.634169   67607 cri.go:89] found id: ""
	I0829 20:29:43.634200   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.634210   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:43.634218   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:43.634280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:43.670467   67607 cri.go:89] found id: ""
	I0829 20:29:43.670492   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.670500   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:43.670506   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:43.670580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:43.706812   67607 cri.go:89] found id: ""
	I0829 20:29:43.706839   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.706849   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:43.706857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:43.706922   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:43.741577   67607 cri.go:89] found id: ""
	I0829 20:29:43.741606   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.741612   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:43.741620   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:43.741700   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:43.776552   67607 cri.go:89] found id: ""
	I0829 20:29:43.776595   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.776625   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:43.776635   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:43.776701   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:43.816229   67607 cri.go:89] found id: ""
	I0829 20:29:43.816264   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.816274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:43.816281   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:43.816346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:40.850705   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:42.850904   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:40.709080   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:43.207700   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:41.994216   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:44.492986   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:46.494171   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:43.860726   67607 cri.go:89] found id: ""
	I0829 20:29:43.860753   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.860761   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:43.860768   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:43.860783   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:43.874311   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:43.874340   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:43.952243   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:43.952272   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:43.952288   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:44.032276   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:44.032312   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:44.075537   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:44.075571   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:46.632798   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:46.645878   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:46.645948   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:46.683682   67607 cri.go:89] found id: ""
	I0829 20:29:46.683711   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.683720   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:46.683726   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:46.683775   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:46.727985   67607 cri.go:89] found id: ""
	I0829 20:29:46.728012   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.728024   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:46.728031   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:46.728090   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:46.762142   67607 cri.go:89] found id: ""
	I0829 20:29:46.762166   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.762174   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:46.762180   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:46.762226   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:46.802423   67607 cri.go:89] found id: ""
	I0829 20:29:46.802453   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.802464   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:46.802471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:46.802515   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:46.840382   67607 cri.go:89] found id: ""
	I0829 20:29:46.840411   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.840418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:46.840425   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:46.840473   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:46.878438   67607 cri.go:89] found id: ""
	I0829 20:29:46.878466   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.878476   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:46.878483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:46.878562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:46.913589   67607 cri.go:89] found id: ""
	I0829 20:29:46.913618   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.913625   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:46.913631   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:46.913678   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:46.948894   67607 cri.go:89] found id: ""
	I0829 20:29:46.948922   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.948929   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:46.948938   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:46.948949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:47.005709   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:47.005745   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:47.030316   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:47.030343   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:47.105899   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:47.105920   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:47.105932   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:47.189405   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:47.189442   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:45.352639   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:47.850647   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:45.709140   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:48.207411   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:48.994239   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:51.493287   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:49.727745   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:49.742061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:49.742131   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:49.777428   67607 cri.go:89] found id: ""
	I0829 20:29:49.777456   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.777464   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:49.777471   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:49.777531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:49.811611   67607 cri.go:89] found id: ""
	I0829 20:29:49.811639   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.811646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:49.811653   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:49.811709   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:49.844962   67607 cri.go:89] found id: ""
	I0829 20:29:49.844987   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.844995   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:49.845006   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:49.845062   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:49.880259   67607 cri.go:89] found id: ""
	I0829 20:29:49.880286   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.880297   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:49.880305   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:49.880366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:49.915889   67607 cri.go:89] found id: ""
	I0829 20:29:49.915918   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.915926   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:49.915932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:49.915988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:49.953146   67607 cri.go:89] found id: ""
	I0829 20:29:49.953174   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.953182   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:49.953189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:49.953240   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:49.990689   67607 cri.go:89] found id: ""
	I0829 20:29:49.990721   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.990730   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:49.990738   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:49.990792   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:50.024775   67607 cri.go:89] found id: ""
	I0829 20:29:50.024806   67607 logs.go:276] 0 containers: []
	W0829 20:29:50.024817   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:50.024827   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:50.024842   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:50.079030   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:50.079064   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:50.093178   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:50.093205   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:50.171476   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:50.171499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:50.171512   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:50.252913   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:50.252946   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:52.799818   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:52.812857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:52.812930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:52.850736   67607 cri.go:89] found id: ""
	I0829 20:29:52.850761   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.850770   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:52.850777   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:52.850834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:52.888892   67607 cri.go:89] found id: ""
	I0829 20:29:52.888916   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.888923   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:52.888929   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:52.888975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:52.925390   67607 cri.go:89] found id: ""
	I0829 20:29:52.925418   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.925428   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:52.925435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:52.925501   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:52.960329   67607 cri.go:89] found id: ""
	I0829 20:29:52.960352   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.960360   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:52.960366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:52.960413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:52.994899   67607 cri.go:89] found id: ""
	I0829 20:29:52.994927   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.994935   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:52.994941   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:52.994995   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:53.033028   67607 cri.go:89] found id: ""
	I0829 20:29:53.033057   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.033068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:53.033076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:53.033136   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:53.068353   67607 cri.go:89] found id: ""
	I0829 20:29:53.068381   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.068389   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:53.068394   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:53.068441   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:53.104496   67607 cri.go:89] found id: ""
	I0829 20:29:53.104524   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.104534   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:53.104545   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:53.104560   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:53.175777   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:53.175810   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:53.175827   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:53.257362   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:53.257396   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:53.295822   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:53.295850   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:53.351237   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:53.351263   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
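The block above is one full iteration of minikube's control-plane diagnostics: for each component it expects (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) it runs "sudo crictl ps -a --quiet --name=<component>" over SSH, and since CRI-O reports no matching containers it falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output; the container-status step uses a shell fallback ("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a") so it degrades to docker if crictl is absent. Below is a minimal standalone Go sketch of the same probe, run directly on the node rather than over SSH. It assumes sudo and crictl are available on PATH with a configured CRI socket; it is an illustration, not minikube's actual cri.go.

// probe.go: hedged sketch of the per-component container probe seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Mirrors the logged command: sudo crictl ps -a --quiet --name=<name>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			// Matches the report's "No container was found matching ..." case.
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}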
	I0829 20:29:49.851324   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:52.350768   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:50.707986   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:53.206918   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:53.494828   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.994443   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.864680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:55.879324   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:55.879391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:55.914454   67607 cri.go:89] found id: ""
	I0829 20:29:55.914479   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.914490   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:55.914498   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:55.914592   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:55.953778   67607 cri.go:89] found id: ""
	I0829 20:29:55.953804   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.953814   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:55.953821   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:55.953883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:55.994659   67607 cri.go:89] found id: ""
	I0829 20:29:55.994681   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.994689   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:55.994697   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:55.994768   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:56.031262   67607 cri.go:89] found id: ""
	I0829 20:29:56.031288   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.031299   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:56.031306   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:56.031366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:56.063748   67607 cri.go:89] found id: ""
	I0829 20:29:56.063776   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.063785   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:56.063793   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:56.063883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:56.098024   67607 cri.go:89] found id: ""
	I0829 20:29:56.098060   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.098068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:56.098074   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:56.098127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:56.141340   67607 cri.go:89] found id: ""
	I0829 20:29:56.141364   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.141374   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:56.141381   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:56.141440   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:56.176668   67607 cri.go:89] found id: ""
	I0829 20:29:56.176696   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.176707   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:56.176717   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:56.176731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:56.216294   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:56.216322   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:56.269404   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:56.269440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:56.283134   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:56.283160   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:56.355005   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:56.355023   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:56.355035   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:54.851658   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:57.350247   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.207477   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:57.708007   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:58.493689   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:00.998990   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:58.937406   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:58.950924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:58.950981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:58.986748   67607 cri.go:89] found id: ""
	I0829 20:29:58.986778   67607 logs.go:276] 0 containers: []
	W0829 20:29:58.986788   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:58.986795   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:58.986861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:59.023737   67607 cri.go:89] found id: ""
	I0829 20:29:59.023763   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.023773   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:59.023780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:59.023840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:59.060245   67607 cri.go:89] found id: ""
	I0829 20:29:59.060274   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.060284   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:59.060291   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:59.060352   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:59.102467   67607 cri.go:89] found id: ""
	I0829 20:29:59.102493   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.102501   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:59.102507   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:59.102581   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:59.142601   67607 cri.go:89] found id: ""
	I0829 20:29:59.142625   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.142634   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:59.142647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:59.142717   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:59.186683   67607 cri.go:89] found id: ""
	I0829 20:29:59.186707   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.186715   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:59.186723   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:59.186783   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:59.232104   67607 cri.go:89] found id: ""
	I0829 20:29:59.232136   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.232154   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:59.232162   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:59.232227   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:59.276416   67607 cri.go:89] found id: ""
	I0829 20:29:59.276442   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.276452   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:59.276462   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:59.276479   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:59.341741   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:59.341779   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:59.357312   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:59.357336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:59.425653   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:59.425674   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:59.425689   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:59.505365   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:59.505403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:02.049195   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:02.064558   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:02.064641   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:02.102141   67607 cri.go:89] found id: ""
	I0829 20:30:02.102188   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.102209   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:02.102217   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:02.102282   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:02.138610   67607 cri.go:89] found id: ""
	I0829 20:30:02.138640   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.138650   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:02.138658   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:02.138724   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:02.175391   67607 cri.go:89] found id: ""
	I0829 20:30:02.175423   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.175435   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:02.175442   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:02.175505   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:02.212956   67607 cri.go:89] found id: ""
	I0829 20:30:02.212981   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.212991   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:02.212998   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:02.213059   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:02.254444   67607 cri.go:89] found id: ""
	I0829 20:30:02.254467   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.254475   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:02.254481   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:02.254568   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:02.293232   67607 cri.go:89] found id: ""
	I0829 20:30:02.293260   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.293270   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:02.293277   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:02.293348   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:02.328300   67607 cri.go:89] found id: ""
	I0829 20:30:02.328329   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.328339   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:02.328346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:02.328407   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:02.363467   67607 cri.go:89] found id: ""
	I0829 20:30:02.363495   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.363505   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:02.363514   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:02.363528   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:02.414357   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:02.414394   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:02.428229   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:02.428259   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:02.503640   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:02.503661   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:02.503674   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:02.584052   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:02.584087   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:59.352485   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:01.850334   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:59.717029   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:02.208354   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:03.494326   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:05.494833   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:05.124345   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:05.143530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:05.143594   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:05.195985   67607 cri.go:89] found id: ""
	I0829 20:30:05.196014   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.196024   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:05.196032   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:05.196092   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:05.254315   67607 cri.go:89] found id: ""
	I0829 20:30:05.254343   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.254354   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:05.254362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:05.254432   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:05.306756   67607 cri.go:89] found id: ""
	I0829 20:30:05.306781   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.306788   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:05.306794   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:05.306852   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:05.345200   67607 cri.go:89] found id: ""
	I0829 20:30:05.345225   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.345235   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:05.345242   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:05.345297   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:05.384038   67607 cri.go:89] found id: ""
	I0829 20:30:05.384064   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.384074   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:05.384081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:05.384140   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:05.420177   67607 cri.go:89] found id: ""
	I0829 20:30:05.420201   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.420208   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:05.420214   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:05.420260   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:05.453492   67607 cri.go:89] found id: ""
	I0829 20:30:05.453513   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.453521   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:05.453526   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:05.453573   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:05.491591   67607 cri.go:89] found id: ""
	I0829 20:30:05.491618   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.491628   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:05.491638   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:05.491701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:05.580458   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:05.580503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:05.620137   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:05.620169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:05.672137   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:05.672177   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:05.685946   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:05.685973   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:05.755176   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
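Every "describe nodes" attempt above fails identically: the node's kubeconfig points kubectl at localhost:8443, and because no kube-apiserver container is running, nothing listens on that port and the TCP connection is refused. A quick standalone Go check of that port (the address is taken from the error text above; run it on the node, purely as a sketch):

// portcheck.go: confirm that nothing is listening where kubectl is pointed.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Same condition kubectl reports above as "connection ... refused".
		fmt.Println("connection refused or timed out:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}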
	I0829 20:30:08.256255   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:08.269099   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:08.269160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:08.302552   67607 cri.go:89] found id: ""
	I0829 20:30:08.302578   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.302585   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:08.302591   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:08.302639   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:08.340683   67607 cri.go:89] found id: ""
	I0829 20:30:08.340711   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.340718   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:08.340726   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:08.340778   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:08.387389   67607 cri.go:89] found id: ""
	I0829 20:30:08.387416   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.387424   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:08.387430   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:08.387477   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:08.421303   67607 cri.go:89] found id: ""
	I0829 20:30:08.421330   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.421340   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:08.421348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:08.421409   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:08.458648   67607 cri.go:89] found id: ""
	I0829 20:30:08.458677   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.458688   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:08.458695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:08.458758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:08.498748   67607 cri.go:89] found id: ""
	I0829 20:30:08.498776   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.498784   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:08.498790   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:08.498845   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:08.536859   67607 cri.go:89] found id: ""
	I0829 20:30:08.536889   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.536896   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:08.536902   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:08.536963   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:08.570685   67607 cri.go:89] found id: ""
	I0829 20:30:08.570713   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.570723   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:08.570734   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:08.570748   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:08.621904   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:08.621938   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:08.636367   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:08.636391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:08.703796   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:08.703824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:08.703838   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:08.785084   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:08.785120   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:04.350230   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:06.849598   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:08.850961   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:04.708012   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:07.206604   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:09.207368   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:07.993015   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:09.994043   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:11.326633   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:11.339570   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:11.339637   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:11.374132   67607 cri.go:89] found id: ""
	I0829 20:30:11.374155   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.374163   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:11.374169   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:11.374234   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:11.409004   67607 cri.go:89] found id: ""
	I0829 20:30:11.409036   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.409047   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:11.409054   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:11.409119   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:11.444598   67607 cri.go:89] found id: ""
	I0829 20:30:11.444625   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.444635   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:11.444643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:11.444704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:11.481912   67607 cri.go:89] found id: ""
	I0829 20:30:11.481942   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.481953   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:11.481961   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:11.482025   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:11.516436   67607 cri.go:89] found id: ""
	I0829 20:30:11.516466   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.516477   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:11.516483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:11.516536   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:11.554762   67607 cri.go:89] found id: ""
	I0829 20:30:11.554787   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.554795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:11.554801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:11.554857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:11.588902   67607 cri.go:89] found id: ""
	I0829 20:30:11.588931   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.588942   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:11.588950   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:11.589011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:11.621346   67607 cri.go:89] found id: ""
	I0829 20:30:11.621368   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.621376   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:11.621383   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:11.621395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:11.659671   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:11.659703   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:11.711288   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:11.711315   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:11.725285   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:11.725310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:11.801713   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:11.801735   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:11.801750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:10.851075   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:13.349510   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:11.208203   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:13.706599   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:12.494548   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:14.993188   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:14.382313   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:14.395852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:14.395926   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:14.438735   67607 cri.go:89] found id: ""
	I0829 20:30:14.438762   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.438772   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:14.438778   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:14.438840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:14.477886   67607 cri.go:89] found id: ""
	I0829 20:30:14.477928   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.477937   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:14.477943   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:14.478000   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:14.517627   67607 cri.go:89] found id: ""
	I0829 20:30:14.517654   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.517664   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:14.517670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:14.517734   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:14.557247   67607 cri.go:89] found id: ""
	I0829 20:30:14.557272   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.557280   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:14.557286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:14.557345   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:14.591364   67607 cri.go:89] found id: ""
	I0829 20:30:14.591388   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.591398   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:14.591406   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:14.591468   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:14.627517   67607 cri.go:89] found id: ""
	I0829 20:30:14.627539   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.627546   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:14.627551   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:14.627604   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:14.662388   67607 cri.go:89] found id: ""
	I0829 20:30:14.662409   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.662419   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:14.662432   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:14.662488   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:14.695277   67607 cri.go:89] found id: ""
	I0829 20:30:14.695307   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.695316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:14.695324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:14.695335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:14.735824   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:14.735852   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:14.792607   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:14.792642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:14.808881   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:14.808910   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:14.879804   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:14.879824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:14.879837   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:17.459817   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:17.474813   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:17.474887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:17.509885   67607 cri.go:89] found id: ""
	I0829 20:30:17.509913   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.509923   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:17.509930   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:17.509987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:17.543931   67607 cri.go:89] found id: ""
	I0829 20:30:17.543959   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.543968   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:17.543973   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:17.544021   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:17.580944   67607 cri.go:89] found id: ""
	I0829 20:30:17.580972   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.580980   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:17.580986   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:17.581033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:17.620061   67607 cri.go:89] found id: ""
	I0829 20:30:17.620088   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.620097   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:17.620103   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:17.620148   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:17.658675   67607 cri.go:89] found id: ""
	I0829 20:30:17.658706   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.658717   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:17.658724   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:17.658788   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:17.694424   67607 cri.go:89] found id: ""
	I0829 20:30:17.694453   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.694462   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:17.694467   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:17.694571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:17.727425   67607 cri.go:89] found id: ""
	I0829 20:30:17.727450   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.727456   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:17.727462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:17.727510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:17.767915   67607 cri.go:89] found id: ""
	I0829 20:30:17.767946   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.767956   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:17.767965   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:17.767977   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:17.837556   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:17.837580   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:17.837593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:17.921601   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:17.921638   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:17.960999   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:17.961026   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:18.013654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:18.013691   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:15.351372   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:17.850896   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:16.206810   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:18.207702   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:16.993566   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:18.997786   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:21.493705   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
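Interleaved with the diagnostics, three other test processes (PIDs 66841, 66989, 68084) are polling their metrics-server pods, logging pod_ready.go:103 every few seconds because each pod's Ready condition never turns True. A minimal client-go sketch of that style of readiness poll follows; it assumes a reachable cluster and the default kubeconfig, and the pod name is copied from the log purely for illustration (this is not the test suite's helper).

// podready.go: hedged sketch of polling a pod's Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name copied from the log above; adjust for your own cluster.
	const ns, name = "kube-system", "metrics-server-6867b74b74-668dg"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}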
	I0829 20:30:20.528244   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:20.542116   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:20.542190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:20.578905   67607 cri.go:89] found id: ""
	I0829 20:30:20.578936   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.578947   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:20.578954   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:20.579003   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:20.613543   67607 cri.go:89] found id: ""
	I0829 20:30:20.613567   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.613574   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:20.613579   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:20.613627   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:20.649322   67607 cri.go:89] found id: ""
	I0829 20:30:20.649344   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.649352   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:20.649366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:20.649429   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:20.684851   67607 cri.go:89] found id: ""
	I0829 20:30:20.684878   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.684886   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:20.684892   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:20.684950   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:20.722016   67607 cri.go:89] found id: ""
	I0829 20:30:20.722045   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.722054   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:20.722062   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:20.722125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:20.757594   67607 cri.go:89] found id: ""
	I0829 20:30:20.757626   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.757637   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:20.757644   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:20.757707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:20.793694   67607 cri.go:89] found id: ""
	I0829 20:30:20.793728   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.793738   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:20.793746   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:20.793812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:20.829709   67607 cri.go:89] found id: ""
	I0829 20:30:20.829736   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.829747   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:20.829758   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:20.829782   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:20.888838   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:20.888888   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:20.903530   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:20.903556   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:20.972460   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:20.972488   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:20.972503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:21.055556   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:21.055593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:23.597355   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:23.611091   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:23.611162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:23.649469   67607 cri.go:89] found id: ""
	I0829 20:30:23.649493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.649501   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:23.649510   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:23.649562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:23.684530   67607 cri.go:89] found id: ""
	I0829 20:30:23.684554   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.684561   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:23.684571   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:23.684625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:23.720466   67607 cri.go:89] found id: ""
	I0829 20:30:23.720493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.720503   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:23.720510   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:23.720563   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:23.755013   67607 cri.go:89] found id: ""
	I0829 20:30:23.755042   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.755053   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:23.755061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:23.755127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:23.795212   67607 cri.go:89] found id: ""
	I0829 20:30:23.795243   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.795254   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:23.795263   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:23.795320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:20.349781   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:22.350157   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:20.707723   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.206214   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.994457   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:26.493771   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.832912   67607 cri.go:89] found id: ""
	I0829 20:30:23.832941   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.832951   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:23.832959   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:23.833015   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:23.869896   67607 cri.go:89] found id: ""
	I0829 20:30:23.869930   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.869939   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:23.869947   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:23.870011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:23.908111   67607 cri.go:89] found id: ""
	I0829 20:30:23.908136   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.908145   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:23.908155   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:23.908170   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:23.988489   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:23.988510   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:23.988525   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:24.063246   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:24.063280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:24.102943   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:24.102974   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:24.157255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:24.157294   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
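
Note: the repeated `sudo crictl ps -a --quiet --name=<component>` probes above are how the runner discovers container IDs for each control-plane component; an empty result produces the "No container was found matching ..." warnings, after which log gathering falls back to journalctl, dmesg, and a plain container-status listing. A minimal, hypothetical Go sketch of the same probe (not minikube's actual implementation; it assumes crictl is installed and sudo is available on the target host):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs shells out to crictl the same way the probes above do and
    // returns the (possibly empty) list of matching container IDs.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            switch {
            case err != nil:
                fmt.Printf("probe %s failed: %v\n", c, err)
            case len(ids) == 0:
                fmt.Printf("no container found matching %q\n", c)
            default:
                fmt.Printf("%s: %v\n", c, ids)
            }
        }
    }
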
	I0829 20:30:26.671966   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:26.684755   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:26.684830   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:26.721125   67607 cri.go:89] found id: ""
	I0829 20:30:26.721150   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.721158   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:26.721164   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:26.721219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:26.756328   67607 cri.go:89] found id: ""
	I0829 20:30:26.756349   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.756356   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:26.756362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:26.756420   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:26.791711   67607 cri.go:89] found id: ""
	I0829 20:30:26.791751   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.791763   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:26.791774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:26.791857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:26.827215   67607 cri.go:89] found id: ""
	I0829 20:30:26.827244   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.827254   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:26.827261   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:26.827321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:26.863461   67607 cri.go:89] found id: ""
	I0829 20:30:26.863486   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.863497   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:26.863505   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:26.863569   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:26.900037   67607 cri.go:89] found id: ""
	I0829 20:30:26.900065   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.900075   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:26.900083   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:26.900139   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:26.937236   67607 cri.go:89] found id: ""
	I0829 20:30:26.937263   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.937274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:26.937282   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:26.937340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:26.970281   67607 cri.go:89] found id: ""
	I0829 20:30:26.970312   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.970322   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:26.970332   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:26.970345   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:27.041485   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:27.041511   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:27.041526   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:27.120774   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:27.120807   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:27.159656   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:27.159685   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:27.213322   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:27.213356   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:24.350464   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:26.351419   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:28.850079   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:25.207838   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:27.708107   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:28.993552   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:31.494259   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:29.729066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:29.742044   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:29.742099   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:29.777426   67607 cri.go:89] found id: ""
	I0829 20:30:29.777454   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.777462   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:29.777468   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:29.777529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:29.814353   67607 cri.go:89] found id: ""
	I0829 20:30:29.814381   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.814392   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:29.814401   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:29.814462   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:29.853754   67607 cri.go:89] found id: ""
	I0829 20:30:29.853783   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.853793   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:29.853801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:29.853869   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:29.893966   67607 cri.go:89] found id: ""
	I0829 20:30:29.893991   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.893998   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:29.894003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:29.894057   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:29.929452   67607 cri.go:89] found id: ""
	I0829 20:30:29.929483   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.929492   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:29.929502   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:29.929561   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:29.965880   67607 cri.go:89] found id: ""
	I0829 20:30:29.965906   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.965916   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:29.965924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:29.965986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:30.002192   67607 cri.go:89] found id: ""
	I0829 20:30:30.002226   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.002237   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:30.002245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:30.002320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:30.037603   67607 cri.go:89] found id: ""
	I0829 20:30:30.037640   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.037651   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:30.037662   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:30.037677   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:30.094128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:30.094168   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:30.110667   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:30.110701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:30.188355   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:30.188375   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:30.188388   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:30.270750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:30.270785   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:32.809472   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:32.823099   67607 kubeadm.go:597] duration metric: took 4m3.15684598s to restartPrimaryControlPlane
	W0829 20:30:32.823188   67607 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:30:32.823224   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:30:33.322987   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:30:33.338134   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:30:33.348586   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:30:33.358672   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:30:33.358692   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:30:33.358748   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:30:33.367955   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:30:33.368000   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:30:33.377565   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:30:33.386317   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:30:33.386377   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:30:33.396356   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.406228   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:30:33.406281   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.418323   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:30:33.427595   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:30:33.427657   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
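
Note: the grep-then-remove sequence above keeps a kubeconfig only if it already references the expected control-plane endpoint, and deletes it otherwise so the following `kubeadm init` can regenerate it. A hypothetical local sketch of that decision in Go (not minikube's code; paths and endpoint are taken from the log above):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%s missing or stale, removing\n", f)
                os.Remove(f) // errors ignored, mirroring `rm -f`
            }
        }
    }
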
	I0829 20:30:33.437520   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:30:33.511159   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:30:33.511279   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:30:33.669988   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:30:33.670133   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:30:33.670267   67607 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:30:33.859908   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:30:30.850893   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:32.851574   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:30.207012   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:32.206405   66989 pod_ready.go:82] duration metric: took 4m0.005864609s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	E0829 20:30:32.206426   66989 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0829 20:30:32.206433   66989 pod_ready.go:39] duration metric: took 4m5.570928284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
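
Note: the pod_ready.go lines throughout this run poll each pod's Ready condition until a 4m0s deadline; the metrics-server pods in several profiles never reach Ready, which lines up with the WaitExtra timeouts recorded here and later in the log. A hypothetical client-go sketch of the same condition check (assumes a readable kubeconfig at the path shown; this is an illustration, not minikube's code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True, the same
    // condition the pod_ready.go lines above are waiting on.
    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s Ready=%v\n", p.Name, isReady(&p))
        }
    }
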
	I0829 20:30:32.206448   66989 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:30:32.206482   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:32.206528   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:32.260213   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:32.260242   66989 cri.go:89] found id: ""
	I0829 20:30:32.260252   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:32.260314   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.265201   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:32.265276   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:32.307620   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:32.307648   66989 cri.go:89] found id: ""
	I0829 20:30:32.307656   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:32.307701   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.312372   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:32.312430   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:32.350059   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:32.350092   66989 cri.go:89] found id: ""
	I0829 20:30:32.350102   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:32.350158   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.354624   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:32.354681   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:32.393968   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:32.393988   66989 cri.go:89] found id: ""
	I0829 20:30:32.393995   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:32.394039   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.398674   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:32.398745   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:32.433038   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:32.433064   66989 cri.go:89] found id: ""
	I0829 20:30:32.433074   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:32.433118   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.436969   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:32.437028   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:32.472768   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:32.472786   66989 cri.go:89] found id: ""
	I0829 20:30:32.472793   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:32.472842   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.477466   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:32.477536   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:32.514464   66989 cri.go:89] found id: ""
	I0829 20:30:32.514492   66989 logs.go:276] 0 containers: []
	W0829 20:30:32.514502   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:32.514509   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:32.514591   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:32.551429   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:32.551452   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:32.551456   66989 cri.go:89] found id: ""
	I0829 20:30:32.551463   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:32.551508   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.555697   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.559864   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:32.559883   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:32.609776   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:32.609803   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:32.648419   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:32.648446   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:32.685938   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:32.685969   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:32.728665   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:32.728693   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:32.770030   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:32.770068   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:32.907821   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:32.907850   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:32.923119   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:32.923149   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:32.979819   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:32.979853   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:33.020472   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:33.020496   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:33.074802   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:33.074838   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:33.112043   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:33.112072   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:33.624274   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:33.624316   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:33.861742   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:30:33.861849   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:30:33.861946   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:30:33.862075   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:30:33.862174   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:30:33.862276   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:30:33.862366   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:30:33.862467   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:30:33.862573   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:30:33.862794   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:30:33.863226   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:30:33.863323   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:30:33.863417   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:30:34.065914   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:30:34.235581   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:30:34.660452   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:30:34.724718   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:30:34.743897   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:30:34.746263   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:30:34.746369   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:30:34.893824   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:30:33.494825   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:35.994300   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:34.895805   67607 out.go:235]   - Booting up control plane ...
	I0829 20:30:34.895941   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:30:34.904294   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:30:34.915103   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:30:34.915744   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:30:34.917923   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
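
Note: this "up to 4m0s" wait appears to be the phase that later fails for this process; the [kubelet-check] messages at 20:31:14 below show the kubelet on this node never answering its local healthz probe.
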
	I0829 20:30:35.351975   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:37.352013   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:36.202184   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:36.218838   66989 api_server.go:72] duration metric: took 4m17.334186395s to wait for apiserver process to appear ...
	I0829 20:30:36.218870   66989 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:30:36.218910   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:36.218963   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:36.263205   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:36.263233   66989 cri.go:89] found id: ""
	I0829 20:30:36.263243   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:36.263292   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.267466   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:36.267522   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:36.303894   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:36.303930   66989 cri.go:89] found id: ""
	I0829 20:30:36.303938   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:36.303996   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.308089   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:36.308170   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:36.347320   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:36.347392   66989 cri.go:89] found id: ""
	I0829 20:30:36.347414   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:36.347485   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.352121   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:36.352174   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:36.389760   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:36.389784   66989 cri.go:89] found id: ""
	I0829 20:30:36.389793   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:36.389853   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.394860   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:36.394919   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:36.430562   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:36.430587   66989 cri.go:89] found id: ""
	I0829 20:30:36.430597   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:36.430655   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.435151   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:36.435226   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:36.470714   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:36.470742   66989 cri.go:89] found id: ""
	I0829 20:30:36.470750   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:36.470816   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.475382   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:36.475446   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:36.514853   66989 cri.go:89] found id: ""
	I0829 20:30:36.514888   66989 logs.go:276] 0 containers: []
	W0829 20:30:36.514898   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:36.514910   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:36.514971   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:36.548229   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:36.548252   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:36.548256   66989 cri.go:89] found id: ""
	I0829 20:30:36.548263   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:36.548314   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.552484   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.556661   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:36.556681   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:36.622985   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:36.623019   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:36.678770   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:36.678799   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:36.731822   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:36.731849   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:36.768451   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:36.768482   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:36.803818   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:36.803846   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:37.225805   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:37.225849   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:37.245421   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:37.245458   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:37.358238   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:37.358266   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:37.401876   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:37.401913   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:37.438189   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:37.438223   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:37.475404   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:37.475433   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:37.511876   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:37.511903   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:38.493604   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:40.494396   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:40.054097   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:30:40.058474   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0829 20:30:40.059830   66989 api_server.go:141] control plane version: v1.31.0
	I0829 20:30:40.059850   66989 api_server.go:131] duration metric: took 3.840972907s to wait for apiserver health ...
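
Note: the healthz wait above is an HTTPS GET against the apiserver's /healthz endpoint that succeeds once it returns 200 with body "ok". A minimal, hypothetical sketch of such a poll (TLS verification is skipped here only because this sketch has no access to the cluster CA; minikube itself authenticates with the credentials from the kubeconfig, and the address is the one logged above):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
        client := &http.Client{Transport: tr}
        resp, err := client.Get("https://192.168.61.202:8443/healthz")
        if err != nil {
            fmt.Println("healthz error:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
    }
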
	I0829 20:30:40.059857   66989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:30:40.059877   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:40.059924   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:40.101978   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:40.102003   66989 cri.go:89] found id: ""
	I0829 20:30:40.102013   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:40.102073   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.107429   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:40.107496   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:40.145052   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:40.145078   66989 cri.go:89] found id: ""
	I0829 20:30:40.145086   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:40.145133   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.149329   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:40.149394   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:40.187740   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:40.187769   66989 cri.go:89] found id: ""
	I0829 20:30:40.187778   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:40.187838   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.192085   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:40.192156   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:40.231992   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:40.232010   66989 cri.go:89] found id: ""
	I0829 20:30:40.232017   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:40.232060   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.236275   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:40.236333   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:40.279637   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:40.279660   66989 cri.go:89] found id: ""
	I0829 20:30:40.279669   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:40.279727   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.288800   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:40.288876   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:40.341222   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:40.341248   66989 cri.go:89] found id: ""
	I0829 20:30:40.341258   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:40.341322   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.346013   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:40.346088   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:40.383801   66989 cri.go:89] found id: ""
	I0829 20:30:40.383828   66989 logs.go:276] 0 containers: []
	W0829 20:30:40.383836   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:40.383842   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:40.383896   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:40.421847   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:40.421874   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:40.421879   66989 cri.go:89] found id: ""
	I0829 20:30:40.421889   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:40.421950   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.426229   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.429902   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:40.429931   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:40.471015   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:40.471039   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:40.831575   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:40.831612   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:40.846195   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:40.846230   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:40.905469   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:40.905507   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:40.952303   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:40.952337   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:41.001278   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:41.001309   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:41.071045   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:41.071089   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:41.120024   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:41.120050   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:41.191412   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:41.191445   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:41.321848   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:41.321874   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:41.370807   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:41.370833   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:41.405913   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:41.405939   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:43.948957   66989 system_pods.go:59] 8 kube-system pods found
	I0829 20:30:43.948987   66989 system_pods.go:61] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running
	I0829 20:30:43.948992   66989 system_pods.go:61] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running
	I0829 20:30:43.948996   66989 system_pods.go:61] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running
	I0829 20:30:43.948999   66989 system_pods.go:61] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running
	I0829 20:30:43.949003   66989 system_pods.go:61] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running
	I0829 20:30:43.949006   66989 system_pods.go:61] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running
	I0829 20:30:43.949011   66989 system_pods.go:61] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:30:43.949015   66989 system_pods.go:61] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running
	I0829 20:30:43.949022   66989 system_pods.go:74] duration metric: took 3.889159839s to wait for pod list to return data ...
	I0829 20:30:43.949028   66989 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:30:43.951906   66989 default_sa.go:45] found service account: "default"
	I0829 20:30:43.951932   66989 default_sa.go:55] duration metric: took 2.897769ms for default service account to be created ...
	I0829 20:30:43.951943   66989 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:30:43.959246   66989 system_pods.go:86] 8 kube-system pods found
	I0829 20:30:43.959269   66989 system_pods.go:89] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running
	I0829 20:30:43.959275   66989 system_pods.go:89] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running
	I0829 20:30:43.959279   66989 system_pods.go:89] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running
	I0829 20:30:43.959283   66989 system_pods.go:89] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running
	I0829 20:30:43.959286   66989 system_pods.go:89] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running
	I0829 20:30:43.959290   66989 system_pods.go:89] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running
	I0829 20:30:43.959296   66989 system_pods.go:89] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:30:43.959302   66989 system_pods.go:89] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running
	I0829 20:30:43.959309   66989 system_pods.go:126] duration metric: took 7.361244ms to wait for k8s-apps to be running ...
	I0829 20:30:43.959318   66989 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:30:43.959356   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:30:43.976136   66989 system_svc.go:56] duration metric: took 16.811475ms WaitForService to wait for kubelet
	I0829 20:30:43.976167   66989 kubeadm.go:582] duration metric: took 4m25.091518378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:30:43.976193   66989 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:30:43.979345   66989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:30:43.979376   66989 node_conditions.go:123] node cpu capacity is 2
	I0829 20:30:43.979386   66989 node_conditions.go:105] duration metric: took 3.187489ms to run NodePressure ...
	I0829 20:30:43.979396   66989 start.go:241] waiting for startup goroutines ...
	I0829 20:30:43.979402   66989 start.go:246] waiting for cluster config update ...
	I0829 20:30:43.979414   66989 start.go:255] writing updated cluster config ...
	I0829 20:30:43.979729   66989 ssh_runner.go:195] Run: rm -f paused
	I0829 20:30:44.028715   66989 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:30:44.030675   66989 out.go:177] * Done! kubectl is now configured to use "embed-certs-388383" cluster and "default" namespace by default
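
Note: at this point the embed-certs-388383 cluster is up and set as the default kubectl context; a quick manual verification (not part of the test run) would be `kubectl --context embed-certs-388383 get pods -n kube-system`. The metrics-server pod for this profile was still Pending/not Ready above.
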
	I0829 20:30:39.850811   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:41.850941   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:42.993711   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:45.492729   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:44.351171   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:46.849842   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:48.851125   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:47.494031   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:49.993291   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:51.350926   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:53.850966   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:52.494604   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:54.994054   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:56.350237   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:58.856068   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:56.994483   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:59.494879   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:01.351293   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:03.850415   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:01.994470   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:04.493393   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:05.851663   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:08.350513   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:06.988349   68084 pod_ready.go:82] duration metric: took 4m0.000994859s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" ...
	E0829 20:31:06.988378   68084 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 20:31:06.988396   68084 pod_ready.go:39] duration metric: took 4m13.5587561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:06.988421   68084 kubeadm.go:597] duration metric: took 4m20.63419422s to restartPrimaryControlPlane
	W0829 20:31:06.988470   68084 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:31:06.988492   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
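The `pod_ready` timeout above is a plain poll of the pod's Ready condition; once the 4m0s budget is spent, minikube abandons the control-plane restart and falls back to `kubeadm reset`. Below is a minimal sketch of that readiness poll using client-go — the pod name and timeout come from the log, but the helper is illustrative, not minikube's actual `pod_ready` code, which tracks several label sets at once:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True — the
// check behind the `has status "Ready":"False"` lines above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll every 2s for up to 4m, then give up without retrying —
	// mirroring the WaitExtra timeout reported for metrics-server.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-5kk6q", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println(`timed out waiting 4m0s for pod to be "Ready" (will not retry)`)
}
```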
	I0829 20:31:10.350782   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:12.851120   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:14.919490   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:31:14.920124   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:14.920395   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
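The repeating `[kubelet-check]` block is kubeadm probing the kubelet's local healthz endpoint; the connection-refused errors simply mean the kubelet has not come up yet. A rough sketch of that probe loop, assuming the endpoint and 4m budget shown in the log (the helper name is ours, not kubeadm's):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls the kubelet healthz endpoint the way the
// [kubelet-check] phase does: one GET per interval until it returns
// HTTP 200 or the deadline expires.
func waitForKubelet(url string, interval, timeout time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet is healthy
			}
		}
		// Connection refused lands here until the kubelet binds :10248.
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting %s for a healthy kubelet at %s", timeout, url)
}

func main() {
	err := waitForKubelet("http://localhost:10248/healthz", 5*time.Second, 4*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}
```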
	I0829 20:31:15.350794   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:17.351675   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:19.920740   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:19.920993   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:19.858714   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:22.351208   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:24.851679   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:27.351087   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.177614   68084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.189095849s)
	I0829 20:31:33.177712   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:31:33.202840   68084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:31:33.220648   68084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:31:33.239458   68084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:31:33.239479   68084 kubeadm.go:157] found existing configuration files:
	
	I0829 20:31:33.239519   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 20:31:33.257831   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:31:33.257900   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:31:33.272621   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 20:31:33.287906   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:31:33.287975   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:31:33.302931   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 20:31:33.312359   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:31:33.312411   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:31:33.322850   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 20:31:33.332224   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:31:33.332280   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
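The four grep/rm pairs above are minikube's stale-kubeconfig sweep: any `/etc/kubernetes/*.conf` that cannot be shown to reference the expected control-plane endpoint is deleted so the following `kubeadm init` can regenerate it. A local sketch of the same check-then-remove logic — an illustrative helper, not minikube's actual code; note that grep exits non-zero both when the pattern is absent and when the file is missing, so either case triggers removal:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// sweepStaleConfigs removes any kubeconfig that does not reference the
// expected API endpoint, mirroring the grep-then-rm pairs in the log.
func sweepStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f) // ignore errors: the file may already be gone
		}
	}
}

func main() {
	sweepStaleConfigs("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```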
	I0829 20:31:33.342072   68084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:31:33.388790   68084 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 20:31:33.388844   68084 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:31:33.506108   68084 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:31:33.506263   68084 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:31:33.506403   68084 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:31:33.515467   68084 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:31:29.921355   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:29.921591   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:29.351212   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:31.351683   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.850337   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.517487   68084 out.go:235]   - Generating certificates and keys ...
	I0829 20:31:33.517590   68084 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:31:33.517697   68084 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:31:33.517809   68084 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:31:33.517907   68084 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:31:33.518009   68084 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:31:33.518086   68084 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:31:33.518174   68084 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:31:33.518266   68084 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:31:33.518379   68084 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:31:33.518495   68084 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:31:33.518567   68084 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:31:33.518656   68084 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:31:33.888310   68084 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:31:34.000803   68084 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 20:31:34.103016   68084 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:31:34.461677   68084 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:31:34.617814   68084 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:31:34.618316   68084 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:31:34.622440   68084 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:31:34.624324   68084 out.go:235]   - Booting up control plane ...
	I0829 20:31:34.624428   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:31:34.624527   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:31:34.624882   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:31:34.647388   68084 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:31:34.653776   68084 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:31:34.653864   68084 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:31:34.795338   68084 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 20:31:34.795463   68084 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 20:31:35.797126   68084 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001854627s
	I0829 20:31:35.797253   68084 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 20:31:35.852495   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:37.344608   66841 pod_ready.go:82] duration metric: took 4m0.000461851s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" ...
	E0829 20:31:37.344637   66841 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 20:31:37.344661   66841 pod_ready.go:39] duration metric: took 4m13.033970527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:37.344693   66841 kubeadm.go:597] duration metric: took 4m20.095743839s to restartPrimaryControlPlane
	W0829 20:31:37.344752   66841 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:31:37.344780   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:31:40.799092   68084 kubeadm.go:310] [api-check] The API server is healthy after 5.002121632s
	I0829 20:31:40.813865   68084 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 20:31:40.829677   68084 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 20:31:40.870324   68084 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 20:31:40.870598   68084 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-145096 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 20:31:40.889024   68084 kubeadm.go:310] [bootstrap-token] Using token: gy9sl5.6oyya9sd2gbep67e
	I0829 20:31:40.890947   68084 out.go:235]   - Configuring RBAC rules ...
	I0829 20:31:40.891083   68084 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 20:31:40.898748   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 20:31:40.912914   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 20:31:40.916739   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 20:31:40.923995   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 20:31:40.930447   68084 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 20:31:41.206632   68084 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 20:31:41.679673   68084 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 20:31:42.206707   68084 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 20:31:42.206733   68084 kubeadm.go:310] 
	I0829 20:31:42.206819   68084 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 20:31:42.206830   68084 kubeadm.go:310] 
	I0829 20:31:42.206974   68084 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 20:31:42.206996   68084 kubeadm.go:310] 
	I0829 20:31:42.207018   68084 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 20:31:42.207073   68084 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 20:31:42.207120   68084 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 20:31:42.207127   68084 kubeadm.go:310] 
	I0829 20:31:42.207189   68084 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 20:31:42.207196   68084 kubeadm.go:310] 
	I0829 20:31:42.207234   68084 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 20:31:42.207238   68084 kubeadm.go:310] 
	I0829 20:31:42.207285   68084 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 20:31:42.207382   68084 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 20:31:42.207473   68084 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 20:31:42.207484   68084 kubeadm.go:310] 
	I0829 20:31:42.207611   68084 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 20:31:42.207727   68084 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 20:31:42.207736   68084 kubeadm.go:310] 
	I0829 20:31:42.207854   68084 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token gy9sl5.6oyya9sd2gbep67e \
	I0829 20:31:42.207962   68084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 20:31:42.207983   68084 kubeadm.go:310] 	--control-plane 
	I0829 20:31:42.207986   68084 kubeadm.go:310] 
	I0829 20:31:42.208087   68084 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 20:31:42.208106   68084 kubeadm.go:310] 
	I0829 20:31:42.208214   68084 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token gy9sl5.6oyya9sd2gbep67e \
	I0829 20:31:42.208342   68084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
	I0829 20:31:42.209248   68084 kubeadm.go:310] W0829 20:31:33.349141    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:31:42.209595   68084 kubeadm.go:310] W0829 20:31:33.349919    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:31:42.209769   68084 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:31:42.209803   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:31:42.209817   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:31:42.211545   68084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:31:42.212889   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:31:42.223984   68084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
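The 496-byte payload written to `/etc/cni/net.d/1-k8s.conflist` is the bridge CNI configuration announced on the previous line. The log does not show its contents; a representative bridge conflist of roughly that shape, written the same way, might look like this (the plugin list and subnet are assumptions, not taken from the log):

```go
package main

import "os"

// A representative bridge CNI conflist. Only the destination path and
// approximate size are known from the log; the JSON body is assumed.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Equivalent in effect to the `scp memory --> /etc/cni/net.d/1-k8s.conflist`
	// step above (requires root, like the sudo mkdir that precedes it).
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```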
	I0829 20:31:42.242703   68084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:31:42.242779   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-145096 minikube.k8s.io/updated_at=2024_08_29T20_31_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=default-k8s-diff-port-145096 minikube.k8s.io/primary=true
	I0829 20:31:42.242779   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:42.448824   68084 ops.go:34] apiserver oom_adj: -16
	I0829 20:31:42.453004   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:42.953891   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:43.453922   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:43.953465   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:44.453647   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:44.954035   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:45.453660   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:45.953536   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:46.046900   68084 kubeadm.go:1113] duration metric: took 3.804195127s to wait for elevateKubeSystemPrivileges
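The burst of half-second `kubectl get sa default` retries is `elevateKubeSystemPrivileges` waiting for the default service account to appear, so that the `minikube-rbac` clusterrolebinding created alongside it is meaningful. A sketch of that retry loop under the paths shown in the log (the function name is ours):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries "kubectl get sa default" until the default
// service account exists, matching the half-second cadence of the
// repeated Run lines above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; RBAC grants can take effect
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.0/kubectl",
		"/var/lib/minikube/kubeconfig", time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}
```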
	I0829 20:31:46.046927   68084 kubeadm.go:394] duration metric: took 4m59.74590678s to StartCluster
	I0829 20:31:46.046947   68084 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:31:46.047046   68084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:31:46.048617   68084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:31:46.048876   68084 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:31:46.048979   68084 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:31:46.049063   68084 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049090   68084 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049090   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:31:46.049099   68084 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-145096"
	I0829 20:31:46.049136   68084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-145096"
	W0829 20:31:46.049143   68084 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:31:46.049174   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.049104   68084 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049264   68084 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-145096"
	W0829 20:31:46.049280   68084 addons.go:243] addon metrics-server should already be in state true
	I0829 20:31:46.049335   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.049569   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049574   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049595   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.049599   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.049698   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049722   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.050441   68084 out.go:177] * Verifying Kubernetes components...
	I0829 20:31:46.052039   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:31:46.065735   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39367
	I0829 20:31:46.065909   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32931
	I0829 20:31:46.066241   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.066344   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.066900   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.066918   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.067024   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.067045   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.067438   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.067481   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.067665   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.067902   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.067931   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.069157   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0829 20:31:46.070637   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.070757   68084 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-145096"
	W0829 20:31:46.070771   68084 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:31:46.070803   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.071118   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.071124   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.071132   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.071155   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.071510   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.072052   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.072095   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.085524   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39387
	I0829 20:31:46.085987   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.086553   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.086576   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.086966   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.087138   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.087202   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0829 20:31:46.087621   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.088358   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.088381   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.088708   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.088806   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.089193   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.089363   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.090878   68084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:31:46.091571   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I0829 20:31:46.092208   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.092291   68084 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:31:46.092316   68084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:31:46.092337   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.092660   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.092687   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.093044   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.093230   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.095184   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.096265   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.096792   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.096821   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.097088   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.097274   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.097433   68084 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:31:46.097448   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.097645   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.098681   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:31:46.098697   68084 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:31:46.098715   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.101604   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.101993   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.102014   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.102328   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.102529   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.102687   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.102847   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.108154   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I0829 20:31:46.108627   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.109111   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.109129   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.109446   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.109675   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.111174   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.111440   68084 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:31:46.111452   68084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:31:46.111469   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.114302   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.114805   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.114832   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.114921   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.115102   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.115256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.115400   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.277748   68084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:31:46.297001   68084 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-145096" to be "Ready" ...
	I0829 20:31:46.317473   68084 node_ready.go:49] node "default-k8s-diff-port-145096" has status "Ready":"True"
	I0829 20:31:46.317498   68084 node_ready.go:38] duration metric: took 20.469679ms for node "default-k8s-diff-port-145096" to be "Ready" ...
	I0829 20:31:46.317509   68084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:46.332180   68084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:46.393588   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:31:46.399404   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:31:46.399428   68084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:31:46.453014   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:31:46.460100   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:31:46.460126   68084 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:31:46.541980   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:31:46.542002   68084 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:31:46.607148   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
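Each addon is installed by scp-ing its manifests into `/etc/kubernetes/addons/` and then invoking the cluster-pinned kubectl once with several `-f` flags, as in the Run line above. A sketch of that invocation pattern, with the binary and manifest paths copied from the log (minikube runs this over SSH on the node rather than locally):

```go
package main

import (
	"os"
	"os/exec"
)

// applyAddonManifests shells out to kubectl exactly like the Run line
// above: a single apply invocation carrying one -f flag per manifest.
func applyAddonManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	_ = applyAddonManifests(
		"/var/lib/minikube/binaries/v1.31.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
}
```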
	I0829 20:31:47.296344   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296370   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.296445   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296471   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.296678   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.296722   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.296744   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296764   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.298376   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298379   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298404   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.298412   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298420   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298436   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.298453   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.298464   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.298700   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298726   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298729   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.318720   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.318745   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.319031   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.319053   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.319069   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.870171   68084 pod_ready.go:93] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:47.870198   68084 pod_ready.go:82] duration metric: took 1.537994965s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:47.870208   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.057308   68084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.450120563s)
	I0829 20:31:48.057358   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:48.057371   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:48.057667   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:48.057722   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:48.057734   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:48.057747   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:48.057759   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:48.057989   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:48.058005   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:48.058021   68084 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-145096"
	I0829 20:31:48.059886   68084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0829 20:31:48.061124   68084 addons.go:510] duration metric: took 2.012141801s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0829 20:31:48.875874   68084 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:48.875897   68084 pod_ready.go:82] duration metric: took 1.005682325s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.875912   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.879828   68084 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:48.879846   68084 pod_ready.go:82] duration metric: took 3.928263ms for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.879863   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:50.886764   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:49.922318   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:49.922554   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:52.887708   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:55.387571   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:55.886194   68084 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:55.886217   68084 pod_ready.go:82] duration metric: took 7.006347256s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:55.886225   68084 pod_ready.go:39] duration metric: took 9.568704494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:55.886238   68084 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:31:55.886286   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:31:55.901604   68084 api_server.go:72] duration metric: took 9.852691692s to wait for apiserver process to appear ...
	I0829 20:31:55.901628   68084 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:31:55.901643   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:31:55.905564   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 200:
	ok
	I0829 20:31:55.906387   68084 api_server.go:141] control plane version: v1.31.0
	I0829 20:31:55.906406   68084 api_server.go:131] duration metric: took 4.772472ms to wait for apiserver health ...
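The healthz wait above is a single HTTPS GET against the apiserver on the non-default port 8444; a 200 response with body `ok` ends the wait. A standalone sketch of that probe — certificate verification is skipped here for brevity, whereas minikube itself trusts the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkAPIServerHealthz issues the same GET the log shows and prints
// the status code and body ("ok" on a healthy apiserver).
func checkAPIServerHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification is an assumption for this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkAPIServerHealthz("https://192.168.72.140:8444/healthz"); err != nil {
		fmt.Println(err)
	}
}
```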
	I0829 20:31:55.906413   68084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:31:55.911423   68084 system_pods.go:59] 9 kube-system pods found
	I0829 20:31:55.911444   68084 system_pods.go:61] "coredns-6f6b679f8f-l25kd" [86947930-0d47-407a-b876-b482596fbe8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.911451   68084 system_pods.go:61] "coredns-6f6b679f8f-lnm92" [a6caefe0-e883-4460-87de-25ee97191e1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.911458   68084 system_pods.go:61] "etcd-default-k8s-diff-port-145096" [caba3f17-6544-4fe0-8dd3-0dd95e8df8ce] Running
	I0829 20:31:55.911465   68084 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-145096" [9b1ca00a-613b-414f-81e9-601d53d43207] Running
	I0829 20:31:55.911470   68084 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-145096" [e7145779-85cf-458d-9870-6fda4853d29d] Running
	I0829 20:31:55.911479   68084 system_pods.go:61] "kube-proxy-ptswc" [96c01414-e8e8-4731-824b-11d636285fb3] Running
	I0829 20:31:55.911488   68084 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-145096" [0d2cc607-72ac-4417-8a7c-196bf3ec90d7] Running
	I0829 20:31:55.911495   68084 system_pods.go:61] "metrics-server-6867b74b74-6sdqg" [2c9efadb-89bb-4aa6-b0f0-ddcb3e931674] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:31:55.911503   68084 system_pods.go:61] "storage-provisioner" [81531989-d045-44fb-b1a1-0817af27c804] Running
	I0829 20:31:55.911512   68084 system_pods.go:74] duration metric: took 5.092824ms to wait for pod list to return data ...
	I0829 20:31:55.911523   68084 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:31:55.913794   68084 default_sa.go:45] found service account: "default"
	I0829 20:31:55.913820   68084 default_sa.go:55] duration metric: took 2.286925ms for default service account to be created ...
	I0829 20:31:55.913830   68084 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:31:55.919628   68084 system_pods.go:86] 9 kube-system pods found
	I0829 20:31:55.919666   68084 system_pods.go:89] "coredns-6f6b679f8f-l25kd" [86947930-0d47-407a-b876-b482596fbe8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.919677   68084 system_pods.go:89] "coredns-6f6b679f8f-lnm92" [a6caefe0-e883-4460-87de-25ee97191e1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.919686   68084 system_pods.go:89] "etcd-default-k8s-diff-port-145096" [caba3f17-6544-4fe0-8dd3-0dd95e8df8ce] Running
	I0829 20:31:55.919693   68084 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-145096" [9b1ca00a-613b-414f-81e9-601d53d43207] Running
	I0829 20:31:55.919699   68084 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-145096" [e7145779-85cf-458d-9870-6fda4853d29d] Running
	I0829 20:31:55.919704   68084 system_pods.go:89] "kube-proxy-ptswc" [96c01414-e8e8-4731-824b-11d636285fb3] Running
	I0829 20:31:55.919710   68084 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-145096" [0d2cc607-72ac-4417-8a7c-196bf3ec90d7] Running
	I0829 20:31:55.919718   68084 system_pods.go:89] "metrics-server-6867b74b74-6sdqg" [2c9efadb-89bb-4aa6-b0f0-ddcb3e931674] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:31:55.919725   68084 system_pods.go:89] "storage-provisioner" [81531989-d045-44fb-b1a1-0817af27c804] Running
	I0829 20:31:55.919734   68084 system_pods.go:126] duration metric: took 5.897752ms to wait for k8s-apps to be running ...
	I0829 20:31:55.919745   68084 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:31:55.919800   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:31:55.935429   68084 system_svc.go:56] duration metric: took 15.676316ms WaitForService to wait for kubelet
	I0829 20:31:55.935460   68084 kubeadm.go:582] duration metric: took 9.886551311s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:31:55.935483   68084 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:31:55.938444   68084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:31:55.938466   68084 node_conditions.go:123] node cpu capacity is 2
	I0829 20:31:55.938476   68084 node_conditions.go:105] duration metric: took 2.988434ms to run NodePressure ...
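The NodePressure verification reads capacity straight off the node objects and treats the node as healthy as long as no pressure condition reports True. A client-go sketch of that check, assuming the kubeconfig path from earlier log lines (minikube's own verifier differs in detail):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The capacity figures the log prints come straight off the node object.
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
		// NodePressure passes when every pressure condition is False.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition %s is True\n", c.Type)
				}
			}
		}
	}
}
```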
	I0829 20:31:55.938486   68084 start.go:241] waiting for startup goroutines ...
	I0829 20:31:55.938493   68084 start.go:246] waiting for cluster config update ...
	I0829 20:31:55.938503   68084 start.go:255] writing updated cluster config ...
	I0829 20:31:55.938834   68084 ssh_runner.go:195] Run: rm -f paused
	I0829 20:31:55.987879   68084 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:31:55.989766   68084 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-145096" cluster and "default" namespace by default
	I0829 20:32:03.506190   66841 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.161387814s)
	I0829 20:32:03.506268   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:03.530660   66841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:32:03.550784   66841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:32:03.565054   66841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:32:03.565085   66841 kubeadm.go:157] found existing configuration files:
	
	I0829 20:32:03.565131   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:32:03.586492   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:32:03.586577   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:32:03.605061   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:32:03.617990   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:32:03.618054   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:32:03.635587   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:32:03.645495   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:32:03.645559   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:32:03.655081   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:32:03.664640   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:32:03.664703   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:32:03.674097   66841 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:32:03.721087   66841 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 20:32:03.721155   66841 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:32:03.839829   66841 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:32:03.839985   66841 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:32:03.840079   66841 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:32:03.849047   66841 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:32:03.850883   66841 out.go:235]   - Generating certificates and keys ...
	I0829 20:32:03.850970   66841 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:32:03.851045   66841 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:32:03.851129   66841 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:32:03.851222   66841 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:32:03.851292   66841 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:32:03.851340   66841 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:32:03.851399   66841 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:32:03.851450   66841 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:32:03.851515   66841 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:32:03.851620   66841 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:32:03.851687   66841 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:32:03.851755   66841 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:32:03.968189   66841 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:32:04.253016   66841 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 20:32:04.341190   66841 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:32:04.491607   66841 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:32:04.616753   66841 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:32:04.617354   66841 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:32:04.619961   66841 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:32:04.621690   66841 out.go:235]   - Booting up control plane ...
	I0829 20:32:04.621799   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:32:04.621910   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:32:04.622021   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:32:04.643758   66841 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:32:04.650541   66841 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:32:04.650612   66841 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:32:04.786596   66841 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 20:32:04.786755   66841 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 20:32:05.788381   66841 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001614523s
	I0829 20:32:05.788512   66841 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 20:32:10.789752   66841 kubeadm.go:310] [api-check] The API server is healthy after 5.001571241s
	I0829 20:32:10.803237   66841 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 20:32:10.822640   66841 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 20:32:10.845744   66841 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 20:32:10.846050   66841 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-397724 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 20:32:10.856315   66841 kubeadm.go:310] [bootstrap-token] Using token: 3k2s43.7gy6mzkt91kkied7
	I0829 20:32:10.857834   66841 out.go:235]   - Configuring RBAC rules ...
	I0829 20:32:10.857947   66841 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 20:32:10.867339   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 20:32:10.876522   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 20:32:10.879786   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 20:32:10.885043   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 20:32:10.892077   66841 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 20:32:11.196796   66841 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 20:32:11.630072   66841 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 20:32:12.200197   66841 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 20:32:12.200232   66841 kubeadm.go:310] 
	I0829 20:32:12.200314   66841 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 20:32:12.200326   66841 kubeadm.go:310] 
	I0829 20:32:12.200406   66841 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 20:32:12.200416   66841 kubeadm.go:310] 
	I0829 20:32:12.200450   66841 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 20:32:12.200536   66841 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 20:32:12.200606   66841 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 20:32:12.200616   66841 kubeadm.go:310] 
	I0829 20:32:12.200687   66841 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 20:32:12.200700   66841 kubeadm.go:310] 
	I0829 20:32:12.200744   66841 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 20:32:12.200750   66841 kubeadm.go:310] 
	I0829 20:32:12.200793   66841 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 20:32:12.200861   66841 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 20:32:12.200918   66841 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 20:32:12.200924   66841 kubeadm.go:310] 
	I0829 20:32:12.201048   66841 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 20:32:12.201144   66841 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 20:32:12.201152   66841 kubeadm.go:310] 
	I0829 20:32:12.201255   66841 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3k2s43.7gy6mzkt91kkied7 \
	I0829 20:32:12.201373   66841 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 20:32:12.201400   66841 kubeadm.go:310] 	--control-plane 
	I0829 20:32:12.201411   66841 kubeadm.go:310] 
	I0829 20:32:12.201487   66841 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 20:32:12.201495   66841 kubeadm.go:310] 
	I0829 20:32:12.201574   66841 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3k2s43.7gy6mzkt91kkied7 \
	I0829 20:32:12.201710   66841 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
	I0829 20:32:12.202900   66841 kubeadm.go:310] W0829 20:32:03.691334    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:32:12.203223   66841 kubeadm.go:310] W0829 20:32:03.692151    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:32:12.203339   66841 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:32:12.203366   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:32:12.203381   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:32:12.205733   66841 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:32:12.206905   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:32:12.218121   66841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
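	The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration. A representative conflist of this kind, written by hand (field values are assumptions based on common bridge-plugin defaults; the exact file minikube ships may differ):

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	          "ipMasq": true, "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF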
	I0829 20:32:12.237885   66841 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:32:12.237989   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:12.238006   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-397724 minikube.k8s.io/updated_at=2024_08_29T20_32_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=no-preload-397724 minikube.k8s.io/primary=true
	I0829 20:32:12.282191   66841 ops.go:34] apiserver oom_adj: -16
	I0829 20:32:12.430006   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:12.930327   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:13.430210   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:13.930065   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:14.430163   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:14.930189   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:15.430677   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:15.930670   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:16.430943   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:16.549095   66841 kubeadm.go:1113] duration metric: took 4.311165714s to wait for elevateKubeSystemPrivileges
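	The run of identical "kubectl get sa default" calls above (one every ~500ms) is a poll loop waiting for the default service account to exist so the cluster-admin binding can take effect. The same wait expressed in shell (interval inferred from the log timestamps):

	    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # retry until the controller has created the "default" service account
	    done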
	I0829 20:32:16.549136   66841 kubeadm.go:394] duration metric: took 4m59.355577107s to StartCluster
	I0829 20:32:16.549156   66841 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:32:16.549229   66841 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:32:16.550926   66841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:32:16.551141   66841 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:32:16.551202   66841 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:32:16.551291   66841 addons.go:69] Setting storage-provisioner=true in profile "no-preload-397724"
	I0829 20:32:16.551315   66841 addons.go:69] Setting default-storageclass=true in profile "no-preload-397724"
	I0829 20:32:16.551329   66841 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:32:16.551340   66841 addons.go:69] Setting metrics-server=true in profile "no-preload-397724"
	I0829 20:32:16.551389   66841 addons.go:234] Setting addon metrics-server=true in "no-preload-397724"
	W0829 20:32:16.551404   66841 addons.go:243] addon metrics-server should already be in state true
	I0829 20:32:16.551442   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.551360   66841 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-397724"
	I0829 20:32:16.551324   66841 addons.go:234] Setting addon storage-provisioner=true in "no-preload-397724"
	W0829 20:32:16.551673   66841 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:32:16.551705   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.551872   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.551873   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.551908   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.551929   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.552036   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.552065   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.552634   66841 out.go:177] * Verifying Kubernetes components...
	I0829 20:32:16.553973   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:32:16.567797   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43335
	I0829 20:32:16.568321   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.568884   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.568910   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.569328   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.569941   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.569978   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.573055   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0829 20:32:16.573642   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0829 20:32:16.573770   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.574303   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.574321   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.574394   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.574913   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.574933   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.574935   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.575471   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.575511   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.575724   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.575950   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.579912   66841 addons.go:234] Setting addon default-storageclass=true in "no-preload-397724"
	W0829 20:32:16.579932   66841 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:32:16.579960   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.580281   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.580298   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.591264   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
	I0829 20:32:16.591442   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0829 20:32:16.591777   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.591827   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.592275   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.592289   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.592289   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.592307   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.592702   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.592726   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.592881   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.592882   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.594494   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.594956   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.596431   66841 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:32:16.596433   66841 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:32:16.597503   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:32:16.597524   66841 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:32:16.597547   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.597607   66841 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:32:16.597625   66841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:32:16.597641   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.598780   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32841
	I0829 20:32:16.599272   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.599915   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.599937   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.601210   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.601613   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.601965   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.602159   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.602190   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.602328   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.602867   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.602998   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.603188   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.603234   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.603287   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.603434   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.603487   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.603691   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.603708   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.603857   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.603977   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.619336   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0829 20:32:16.619806   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.620269   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.620286   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.620604   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.620818   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.622348   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.622563   66841 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:32:16.622580   66841 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:32:16.622597   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.625203   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.625542   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.625570   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.625746   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.625934   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.626094   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.626266   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.787525   66841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:32:16.817674   66841 node_ready.go:35] waiting up to 6m0s for node "no-preload-397724" to be "Ready" ...
	I0829 20:32:16.833992   66841 node_ready.go:49] node "no-preload-397724" has status "Ready":"True"
	I0829 20:32:16.834030   66841 node_ready.go:38] duration metric: took 16.322874ms for node "no-preload-397724" to be "Ready" ...
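	The Ready check above can be reproduced by hand; one hedged equivalent (context and node name taken from this log):

	    kubectl --context no-preload-397724 get node no-preload-397724 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # prints "True" once the kubelet has reported a Ready condition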
	I0829 20:32:16.834042   66841 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:32:16.843147   66841 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:16.902589   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:32:16.902613   66841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:32:16.902859   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:32:16.903193   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:32:16.922497   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:32:16.922518   66841 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:32:16.966207   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:32:16.966240   66841 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:32:17.004882   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
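	The four manifests applied above install the metrics-server APIService, Deployment, RBAC, and Service. A quick hedged way to confirm the registration afterwards (resource names are the upstream defaults, which the addon is assumed to use):

	    kubectl --context no-preload-397724 -n kube-system get deploy metrics-server
	    kubectl --context no-preload-397724 get apiservice v1beta1.metrics.k8s.io
	    # the APIService stays unavailable until the pod passes its probes, which
	    # matches the Pending metrics-server pod reported later in this log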
	I0829 20:32:17.204576   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.204613   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.204968   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.204987   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.204995   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.204994   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.205002   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.205261   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.205278   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.211789   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.211811   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.212074   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.212089   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.212119   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.902866   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.902897   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.903218   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.903266   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.903278   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.903286   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.903296   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.903556   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.903572   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.344211   66841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.33928059s)
	I0829 20:32:18.344259   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:18.344274   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:18.344571   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:18.344589   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.344611   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:18.344626   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:18.344948   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:18.344980   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:18.345010   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.345025   66841 addons.go:475] Verifying addon metrics-server=true in "no-preload-397724"
	I0829 20:32:18.346919   66841 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 20:32:18.348704   66841 addons.go:510] duration metric: took 1.797503952s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0829 20:32:18.850832   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:18.850853   66841 pod_ready.go:82] duration metric: took 2.007683093s for pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:18.850862   66841 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.357679   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.357702   66841 pod_ready.go:82] duration metric: took 1.506832539s for pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.357710   66841 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.361830   66841 pod_ready.go:93] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.361854   66841 pod_ready.go:82] duration metric: took 4.136801ms for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.361865   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.365719   66841 pod_ready.go:93] pod "kube-apiserver-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.365733   66841 pod_ready.go:82] duration metric: took 3.861894ms for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.365741   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.369596   66841 pod_ready.go:93] pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.369611   66841 pod_ready.go:82] duration metric: took 3.864669ms for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.369619   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f4x4j" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.447788   66841 pod_ready.go:93] pod "kube-proxy-f4x4j" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.447812   66841 pod_ready.go:82] duration metric: took 78.187574ms for pod "kube-proxy-f4x4j" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.447823   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:22.049084   66841 pod_ready.go:93] pod "kube-scheduler-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:22.049105   66841 pod_ready.go:82] duration metric: took 1.601276793s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:22.049113   66841 pod_ready.go:39] duration metric: took 5.215058301s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
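	The per-pod waits above poll each system-critical pod for the Ready condition. An approximate kubectl equivalent for one of the label selectors (illustrative only; minikube does this in Go against the API rather than via kubectl):

	    kubectl --context no-preload-397724 -n kube-system wait \
	      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m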
	I0829 20:32:22.049125   66841 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:32:22.049172   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:32:22.066060   66841 api_server.go:72] duration metric: took 5.514888299s to wait for apiserver process to appear ...
	I0829 20:32:22.066086   66841 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:32:22.066109   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:32:22.072343   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I0829 20:32:22.073798   66841 api_server.go:141] control plane version: v1.31.0
	I0829 20:32:22.073821   66841 api_server.go:131] duration metric: took 7.728095ms to wait for apiserver health ...
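	The healthz probe above, done by hand (-k skips TLS verification; point curl at the minikube CA instead for a stricter check):

	    curl -k https://192.168.50.214:8443/healthz
	    # expected response body: ok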
	I0829 20:32:22.073828   66841 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:32:22.252273   66841 system_pods.go:59] 9 kube-system pods found
	I0829 20:32:22.252302   66841 system_pods.go:61] "coredns-6f6b679f8f-crgtj" [c48571a8-18ae-4737-a05b-4a77736aee35] Running
	I0829 20:32:22.252309   66841 system_pods.go:61] "coredns-6f6b679f8f-dw2r7" [6edda799-e2d6-402b-b4cd-7e54b2b89ca5] Running
	I0829 20:32:22.252315   66841 system_pods.go:61] "etcd-no-preload-397724" [15473208-a76c-4bc5-810f-e78d59538493] Running
	I0829 20:32:22.252320   66841 system_pods.go:61] "kube-apiserver-no-preload-397724" [521c6041-888f-4145-aabb-54da7382953d] Running
	I0829 20:32:22.252325   66841 system_pods.go:61] "kube-controller-manager-no-preload-397724" [fd5afaf8-898d-4985-8efc-5628709a52cd] Running
	I0829 20:32:22.252329   66841 system_pods.go:61] "kube-proxy-f4x4j" [eb76dc5a-016a-416c-8880-f76fc2d2a9bb] Running
	I0829 20:32:22.252333   66841 system_pods.go:61] "kube-scheduler-no-preload-397724" [77d9e2de-ee8e-4cb2-a7f0-5d9b96bd9691] Running
	I0829 20:32:22.252342   66841 system_pods.go:61] "metrics-server-6867b74b74-nxdc5" [6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:32:22.252348   66841 system_pods.go:61] "storage-provisioner" [8b6c02d6-7a39-4fea-80b4-4ba02904232c] Running
	I0829 20:32:22.252358   66841 system_pods.go:74] duration metric: took 178.523887ms to wait for pod list to return data ...
	I0829 20:32:22.252370   66841 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:32:22.448475   66841 default_sa.go:45] found service account: "default"
	I0829 20:32:22.448499   66841 default_sa.go:55] duration metric: took 196.123693ms for default service account to be created ...
	I0829 20:32:22.448508   66841 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:32:22.650996   66841 system_pods.go:86] 9 kube-system pods found
	I0829 20:32:22.651023   66841 system_pods.go:89] "coredns-6f6b679f8f-crgtj" [c48571a8-18ae-4737-a05b-4a77736aee35] Running
	I0829 20:32:22.651029   66841 system_pods.go:89] "coredns-6f6b679f8f-dw2r7" [6edda799-e2d6-402b-b4cd-7e54b2b89ca5] Running
	I0829 20:32:22.651033   66841 system_pods.go:89] "etcd-no-preload-397724" [15473208-a76c-4bc5-810f-e78d59538493] Running
	I0829 20:32:22.651037   66841 system_pods.go:89] "kube-apiserver-no-preload-397724" [521c6041-888f-4145-aabb-54da7382953d] Running
	I0829 20:32:22.651042   66841 system_pods.go:89] "kube-controller-manager-no-preload-397724" [fd5afaf8-898d-4985-8efc-5628709a52cd] Running
	I0829 20:32:22.651045   66841 system_pods.go:89] "kube-proxy-f4x4j" [eb76dc5a-016a-416c-8880-f76fc2d2a9bb] Running
	I0829 20:32:22.651048   66841 system_pods.go:89] "kube-scheduler-no-preload-397724" [77d9e2de-ee8e-4cb2-a7f0-5d9b96bd9691] Running
	I0829 20:32:22.651054   66841 system_pods.go:89] "metrics-server-6867b74b74-nxdc5" [6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:32:22.651058   66841 system_pods.go:89] "storage-provisioner" [8b6c02d6-7a39-4fea-80b4-4ba02904232c] Running
	I0829 20:32:22.651065   66841 system_pods.go:126] duration metric: took 202.552304ms to wait for k8s-apps to be running ...
	I0829 20:32:22.651071   66841 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:32:22.651111   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:22.666831   66841 system_svc.go:56] duration metric: took 15.753046ms WaitForService to wait for kubelet
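	"systemctl is-active --quiet" communicates only through its exit status, which is why the log records a duration but no output. The same check, interactively:

	    sudo systemctl is-active --quiet kubelet && echo running || echo not-running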
	I0829 20:32:22.666863   66841 kubeadm.go:582] duration metric: took 6.115692499s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:32:22.666888   66841 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:32:22.848742   66841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:32:22.848766   66841 node_conditions.go:123] node cpu capacity is 2
	I0829 20:32:22.848777   66841 node_conditions.go:105] duration metric: took 181.884368ms to run NodePressure ...
	I0829 20:32:22.848787   66841 start.go:241] waiting for startup goroutines ...
	I0829 20:32:22.848794   66841 start.go:246] waiting for cluster config update ...
	I0829 20:32:22.848803   66841 start.go:255] writing updated cluster config ...
	I0829 20:32:22.849030   66841 ssh_runner.go:195] Run: rm -f paused
	I0829 20:32:22.897503   66841 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:32:22.899404   66841 out.go:177] * Done! kubectl is now configured to use "no-preload-397724" cluster and "default" namespace by default
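	At this point the no-preload-397724 context is the kubectl default, so the cluster is directly usable, e.g.:

	    kubectl get pods -A --context no-preload-397724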
	I0829 20:32:29.924469   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:32:29.924707   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:32:29.924729   67607 kubeadm.go:310] 
	I0829 20:32:29.924801   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:32:29.924855   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:32:29.924865   67607 kubeadm.go:310] 
	I0829 20:32:29.924912   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:32:29.924960   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:32:29.925080   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:32:29.925090   67607 kubeadm.go:310] 
	I0829 20:32:29.925207   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:32:29.925256   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:32:29.925316   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:32:29.925342   67607 kubeadm.go:310] 
	I0829 20:32:29.925493   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:32:29.925616   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 20:32:29.925627   67607 kubeadm.go:310] 
	I0829 20:32:29.925776   67607 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:32:29.925909   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:32:29.926016   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:32:29.926134   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:32:29.926154   67607 kubeadm.go:310] 
	I0829 20:32:29.926605   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:32:29.926723   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:32:29.926812   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0829 20:32:29.926935   67607 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0829 20:32:29.926979   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
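	The reset above wipes the kubeconfig files and static-pod manifests under /etc/kubernetes before the retry, which is why the ls check immediately below reports "No such file or directory" for every conf file (the certificates minikube keeps under /var/lib/minikube/certs survive, per the "Using existing ... certificate" lines in the retry further down). Verifying the wipe by hand would look like:

	    sudo ls -la /etc/kubernetes/ /etc/kubernetes/manifests/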
	I0829 20:32:30.389951   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:30.408455   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:32:30.418493   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:32:30.418513   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:32:30.418582   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:32:30.427909   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:32:30.427957   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:32:30.437122   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:32:30.446157   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:32:30.446203   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:32:30.455480   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.464781   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:32:30.464834   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.474607   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:32:30.484537   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:32:30.484601   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:32:30.494170   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:32:30.717349   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:34:26.784436   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:34:26.784518   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 20:34:26.786158   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:34:26.786196   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:34:26.786276   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:34:26.786353   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:34:26.786437   67607 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:34:26.786486   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:34:26.788271   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:34:26.788380   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:34:26.788453   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:34:26.788523   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:34:26.788593   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:34:26.788665   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:34:26.788714   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:34:26.788769   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:34:26.788826   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:34:26.788894   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:34:26.788961   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:34:26.788993   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:34:26.789044   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:34:26.789084   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:34:26.789143   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:34:26.789228   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:34:26.789312   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:34:26.789441   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:34:26.789577   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:34:26.789647   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:34:26.789717   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:34:26.791166   67607 out.go:235]   - Booting up control plane ...
	I0829 20:34:26.791239   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:34:26.791305   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:34:26.791382   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:34:26.791465   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:34:26.791597   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:34:26.791658   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:34:26.791736   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.791926   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792008   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792182   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792254   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792435   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792492   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792725   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792798   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.793026   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.793043   67607 kubeadm.go:310] 
	I0829 20:34:26.793091   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:34:26.793148   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:34:26.793159   67607 kubeadm.go:310] 
	I0829 20:34:26.793188   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:34:26.793219   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:34:26.793305   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:34:26.793314   67607 kubeadm.go:310] 
	I0829 20:34:26.793438   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:34:26.793483   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:34:26.793515   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:34:26.793522   67607 kubeadm.go:310] 
	I0829 20:34:26.793618   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:34:26.793735   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 20:34:26.793748   67607 kubeadm.go:310] 
	I0829 20:34:26.793895   67607 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:34:26.794020   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:34:26.794125   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:34:26.794227   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:34:26.794285   67607 kubeadm.go:310] 
	I0829 20:34:26.794300   67607 kubeadm.go:394] duration metric: took 7m57.183485424s to StartCluster
	I0829 20:34:26.794357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:34:26.794410   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:34:26.837033   67607 cri.go:89] found id: ""
	I0829 20:34:26.837072   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.837083   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:34:26.837091   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:34:26.837153   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:34:26.871177   67607 cri.go:89] found id: ""
	I0829 20:34:26.871203   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.871213   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:34:26.871220   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:34:26.871280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:34:26.905409   67607 cri.go:89] found id: ""
	I0829 20:34:26.905432   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.905442   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:34:26.905450   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:34:26.905509   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:34:26.940119   67607 cri.go:89] found id: ""
	I0829 20:34:26.940150   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.940161   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:34:26.940169   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:34:26.940217   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:34:26.974555   67607 cri.go:89] found id: ""
	I0829 20:34:26.974589   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.974601   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:34:26.974608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:34:26.974674   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:34:27.010586   67607 cri.go:89] found id: ""
	I0829 20:34:27.010616   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.010631   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:34:27.010639   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:34:27.010704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:34:27.044867   67607 cri.go:89] found id: ""
	I0829 20:34:27.044900   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.044913   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:34:27.044921   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:34:27.044979   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:34:27.079282   67607 cri.go:89] found id: ""
	I0829 20:34:27.079308   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.079316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:34:27.079323   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:34:27.079335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:34:27.093455   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:34:27.093485   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:34:27.179256   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:34:27.179280   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:34:27.179292   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:34:27.305873   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:34:27.305906   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:34:27.349676   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:34:27.349702   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 20:34:27.399787   67607 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 20:34:27.399851   67607 out.go:270] * 
	W0829 20:34:27.400631   67607 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 20:34:27.403773   67607 out.go:201] 
	W0829 20:34:27.404902   67607 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	W0829 20:34:27.404953   67607 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 20:34:27.404981   67607 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 20:34:27.406310   67607 out.go:201] 
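	The kubeadm failure above reduces to two checks: whether the kubelet is running at all, and whether a control-plane container crashed under CRI-O. A minimal triage sketch on the node, collecting only commands quoted in the output above (the socket path matches this log; CONTAINERID is a placeholder):
	
	# Is the kubelet unit up, and what killed it?
	systemctl status kubelet
	journalctl -xeu kubelet
	# The health probe kubeadm kept retrying against the kubelet:
	curl -sSL http://localhost:10248/healthz
	# List control-plane containers under CRI-O; a crashed component shows as Exited:
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of a failing container found above:
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID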
	
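	The suggestion above points at a mismatch between the kubelet's cgroup driver and the container runtime's, a common cause of K8S_KUBELET_NOT_RUNNING. A sketch of the retry it implies (the profile name is a placeholder, not taken from this run):
	
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd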
	
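	The CRI-O section that follows is the debug-level unit journal minikube gathered earlier with 'journalctl -u crio -n 400'; the same view can be reproduced on the node directly (command as quoted in the log above):
	
	sudo journalctl -u crio -n 400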
	==> CRI-O <==
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.911380510Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964084911359949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=754a4b96-aba7-4510-b149-4d3d1492f008 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.911858940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2b3618e-2079-4b68-a75e-4e2582b6ef48 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.912131442Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2b3618e-2079-4b68-a75e-4e2582b6ef48 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.912452651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c24672f1f33fab0a363ffe9b15191c033aad911504fe4e0cbbb0c54723fc61d,PodSandboxId:c0f16c27a76d494a7575865408ce9d80ab96b703ee14687cc914a0ad479ebdb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963538528655210,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6c02d6-7a39-4fea-80b4-4ba02904232c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2452b7e2499acfc2dda22691859915ef9b81d7ffa9020ad6044fe06095263,PodSandboxId:b6a71d1267b1556242a109dff2d1c47914b3e44249c44e2e79491bfc580ab454,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537998920202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dw2r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6edda799-e2d6-402b-b4cd-7e54b2b89ca5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17dc0c760cefbdf280368ae8c71df7387ad9525e6132df6f10fc3d87b5febc4,PodSandboxId:9db6339227b53906ea9bd813539fc15515997e0afaa8bffd10f0627a9c54b0d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537921508774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-crgtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4
8571a8-18ae-4737-a05b-4a77736aee35,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c37f561b080360811f0545e64ed61159766fde4836a414f54eb67de63ca057,PodSandboxId:bd7adf539fa384b63c54bfed8c530511359adc9cd0c7c18ae12e7f6227c93a6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724963537480521948,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4x4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb76dc5a-016a-416c-8880-f76fc2d2a9bb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881dd3e8ca191bf84b9cc769c8e846c152f9ad7f469e4427002254e2cb7b68df,PodSandboxId:53daf6f331dc11caec7e319b2e0e1d7d90feb2ea32ead44369bdba115bda9776,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963526065761939,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c22a6e7c8785486b291da7b93159617,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94f96d5c169a8e81bd28b96c8f06b5aad7264b27f5af8cbff1efd443c250d2e,PodSandboxId:46d7177c94efa372b28fab29aa3b74e62e75fa40225a7124f7f226a1ef213c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963526016815685,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bae575fecaf12208b623dd35e7c88b4f7b71a2552366e495103d134160e9a9,PodSandboxId:858897f5bdf7b4100e9cf511241850678dccf7b17f893ffe9ae203017fd7c2e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963526012860464,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ef3e1a8cc24cb25fbef1929ff100cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195ec5617f33e9a305c0aed8715efa8f6a9dff5ccf8e7048450a4deb2876fdc0,PodSandboxId:16813450eb8a8992ef9278b776f981a88ebcec8713d93b3fc5cf2a3a5d561cf6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963525948309522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ae8907a140762fbc0f45d1cffb624,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7a88c36402647c6d49b743bc7067f62fc094cebffedd5a342d34603edb45ae,PodSandboxId:11b530f514ff60ea17a5981f4f7734ea771a8480105aec8407c552dece3f6554,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724963239022892416,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2b3618e-2079-4b68-a75e-4e2582b6ef48 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.949614028Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2291d8c8-ba15-4021-89b5-753032868feb name=/runtime.v1.RuntimeService/Version
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.949687761Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2291d8c8-ba15-4021-89b5-753032868feb name=/runtime.v1.RuntimeService/Version
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.950539898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1aa6cdd-4773-455f-a70f-776374e87e81 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.950858525Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964084950839302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1aa6cdd-4773-455f-a70f-776374e87e81 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.951263777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01ffbc5d-a717-4269-9c36-74e96b83e33b name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.951314091Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01ffbc5d-a717-4269-9c36-74e96b83e33b name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.951711987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c24672f1f33fab0a363ffe9b15191c033aad911504fe4e0cbbb0c54723fc61d,PodSandboxId:c0f16c27a76d494a7575865408ce9d80ab96b703ee14687cc914a0ad479ebdb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963538528655210,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6c02d6-7a39-4fea-80b4-4ba02904232c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2452b7e2499acfc2dda22691859915ef9b81d7ffa9020ad6044fe06095263,PodSandboxId:b6a71d1267b1556242a109dff2d1c47914b3e44249c44e2e79491bfc580ab454,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537998920202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dw2r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6edda799-e2d6-402b-b4cd-7e54b2b89ca5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17dc0c760cefbdf280368ae8c71df7387ad9525e6132df6f10fc3d87b5febc4,PodSandboxId:9db6339227b53906ea9bd813539fc15515997e0afaa8bffd10f0627a9c54b0d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537921508774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-crgtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4
8571a8-18ae-4737-a05b-4a77736aee35,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c37f561b080360811f0545e64ed61159766fde4836a414f54eb67de63ca057,PodSandboxId:bd7adf539fa384b63c54bfed8c530511359adc9cd0c7c18ae12e7f6227c93a6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724963537480521948,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4x4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb76dc5a-016a-416c-8880-f76fc2d2a9bb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881dd3e8ca191bf84b9cc769c8e846c152f9ad7f469e4427002254e2cb7b68df,PodSandboxId:53daf6f331dc11caec7e319b2e0e1d7d90feb2ea32ead44369bdba115bda9776,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963526065761939,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c22a6e7c8785486b291da7b93159617,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94f96d5c169a8e81bd28b96c8f06b5aad7264b27f5af8cbff1efd443c250d2e,PodSandboxId:46d7177c94efa372b28fab29aa3b74e62e75fa40225a7124f7f226a1ef213c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963526016815685,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bae575fecaf12208b623dd35e7c88b4f7b71a2552366e495103d134160e9a9,PodSandboxId:858897f5bdf7b4100e9cf511241850678dccf7b17f893ffe9ae203017fd7c2e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963526012860464,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ef3e1a8cc24cb25fbef1929ff100cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195ec5617f33e9a305c0aed8715efa8f6a9dff5ccf8e7048450a4deb2876fdc0,PodSandboxId:16813450eb8a8992ef9278b776f981a88ebcec8713d93b3fc5cf2a3a5d561cf6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963525948309522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ae8907a140762fbc0f45d1cffb624,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7a88c36402647c6d49b743bc7067f62fc094cebffedd5a342d34603edb45ae,PodSandboxId:11b530f514ff60ea17a5981f4f7734ea771a8480105aec8407c552dece3f6554,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724963239022892416,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01ffbc5d-a717-4269-9c36-74e96b83e33b name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.990250447Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7a4e628-de13-4684-a70d-38aec3132c09 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.990320686Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7a4e628-de13-4684-a70d-38aec3132c09 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.991525037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b2e2117-a8f9-4210-8b81-9a4b23fe9956 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.991860736Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964084991839696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b2e2117-a8f9-4210-8b81-9a4b23fe9956 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.992597868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8506153e-6a86-48d0-addc-aa3fbe974e6b name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.992649805Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8506153e-6a86-48d0-addc-aa3fbe974e6b name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:41:24 no-preload-397724 crio[709]: time="2024-08-29 20:41:24.992890400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c24672f1f33fab0a363ffe9b15191c033aad911504fe4e0cbbb0c54723fc61d,PodSandboxId:c0f16c27a76d494a7575865408ce9d80ab96b703ee14687cc914a0ad479ebdb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963538528655210,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6c02d6-7a39-4fea-80b4-4ba02904232c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2452b7e2499acfc2dda22691859915ef9b81d7ffa9020ad6044fe06095263,PodSandboxId:b6a71d1267b1556242a109dff2d1c47914b3e44249c44e2e79491bfc580ab454,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537998920202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dw2r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6edda799-e2d6-402b-b4cd-7e54b2b89ca5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17dc0c760cefbdf280368ae8c71df7387ad9525e6132df6f10fc3d87b5febc4,PodSandboxId:9db6339227b53906ea9bd813539fc15515997e0afaa8bffd10f0627a9c54b0d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537921508774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-crgtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4
8571a8-18ae-4737-a05b-4a77736aee35,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c37f561b080360811f0545e64ed61159766fde4836a414f54eb67de63ca057,PodSandboxId:bd7adf539fa384b63c54bfed8c530511359adc9cd0c7c18ae12e7f6227c93a6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724963537480521948,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4x4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb76dc5a-016a-416c-8880-f76fc2d2a9bb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881dd3e8ca191bf84b9cc769c8e846c152f9ad7f469e4427002254e2cb7b68df,PodSandboxId:53daf6f331dc11caec7e319b2e0e1d7d90feb2ea32ead44369bdba115bda9776,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963526065761939,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c22a6e7c8785486b291da7b93159617,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94f96d5c169a8e81bd28b96c8f06b5aad7264b27f5af8cbff1efd443c250d2e,PodSandboxId:46d7177c94efa372b28fab29aa3b74e62e75fa40225a7124f7f226a1ef213c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963526016815685,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bae575fecaf12208b623dd35e7c88b4f7b71a2552366e495103d134160e9a9,PodSandboxId:858897f5bdf7b4100e9cf511241850678dccf7b17f893ffe9ae203017fd7c2e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963526012860464,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ef3e1a8cc24cb25fbef1929ff100cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195ec5617f33e9a305c0aed8715efa8f6a9dff5ccf8e7048450a4deb2876fdc0,PodSandboxId:16813450eb8a8992ef9278b776f981a88ebcec8713d93b3fc5cf2a3a5d561cf6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963525948309522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ae8907a140762fbc0f45d1cffb624,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7a88c36402647c6d49b743bc7067f62fc094cebffedd5a342d34603edb45ae,PodSandboxId:11b530f514ff60ea17a5981f4f7734ea771a8480105aec8407c552dece3f6554,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724963239022892416,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8506153e-6a86-48d0-addc-aa3fbe974e6b name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:41:25 no-preload-397724 crio[709]: time="2024-08-29 20:41:25.036883327Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32eb49d4-6503-45f0-8355-093f08db536d name=/runtime.v1.RuntimeService/Version
	Aug 29 20:41:25 no-preload-397724 crio[709]: time="2024-08-29 20:41:25.037039427Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32eb49d4-6503-45f0-8355-093f08db536d name=/runtime.v1.RuntimeService/Version
	Aug 29 20:41:25 no-preload-397724 crio[709]: time="2024-08-29 20:41:25.038188254Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1a865fe-a901-4d0f-b331-5662aaacb9eb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:41:25 no-preload-397724 crio[709]: time="2024-08-29 20:41:25.038518082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964085038498437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1a865fe-a901-4d0f-b331-5662aaacb9eb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:41:25 no-preload-397724 crio[709]: time="2024-08-29 20:41:25.039076989Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=326ec9bb-ff9f-49aa-8a6c-bbbc0c658f6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:41:25 no-preload-397724 crio[709]: time="2024-08-29 20:41:25.039148852Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=326ec9bb-ff9f-49aa-8a6c-bbbc0c658f6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:41:25 no-preload-397724 crio[709]: time="2024-08-29 20:41:25.039424129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c24672f1f33fab0a363ffe9b15191c033aad911504fe4e0cbbb0c54723fc61d,PodSandboxId:c0f16c27a76d494a7575865408ce9d80ab96b703ee14687cc914a0ad479ebdb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963538528655210,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6c02d6-7a39-4fea-80b4-4ba02904232c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2452b7e2499acfc2dda22691859915ef9b81d7ffa9020ad6044fe06095263,PodSandboxId:b6a71d1267b1556242a109dff2d1c47914b3e44249c44e2e79491bfc580ab454,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537998920202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dw2r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6edda799-e2d6-402b-b4cd-7e54b2b89ca5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17dc0c760cefbdf280368ae8c71df7387ad9525e6132df6f10fc3d87b5febc4,PodSandboxId:9db6339227b53906ea9bd813539fc15515997e0afaa8bffd10f0627a9c54b0d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537921508774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-crgtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4
8571a8-18ae-4737-a05b-4a77736aee35,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c37f561b080360811f0545e64ed61159766fde4836a414f54eb67de63ca057,PodSandboxId:bd7adf539fa384b63c54bfed8c530511359adc9cd0c7c18ae12e7f6227c93a6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724963537480521948,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4x4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb76dc5a-016a-416c-8880-f76fc2d2a9bb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881dd3e8ca191bf84b9cc769c8e846c152f9ad7f469e4427002254e2cb7b68df,PodSandboxId:53daf6f331dc11caec7e319b2e0e1d7d90feb2ea32ead44369bdba115bda9776,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963526065761939,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c22a6e7c8785486b291da7b93159617,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94f96d5c169a8e81bd28b96c8f06b5aad7264b27f5af8cbff1efd443c250d2e,PodSandboxId:46d7177c94efa372b28fab29aa3b74e62e75fa40225a7124f7f226a1ef213c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963526016815685,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bae575fecaf12208b623dd35e7c88b4f7b71a2552366e495103d134160e9a9,PodSandboxId:858897f5bdf7b4100e9cf511241850678dccf7b17f893ffe9ae203017fd7c2e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963526012860464,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ef3e1a8cc24cb25fbef1929ff100cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195ec5617f33e9a305c0aed8715efa8f6a9dff5ccf8e7048450a4deb2876fdc0,PodSandboxId:16813450eb8a8992ef9278b776f981a88ebcec8713d93b3fc5cf2a3a5d561cf6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963525948309522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ae8907a140762fbc0f45d1cffb624,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7a88c36402647c6d49b743bc7067f62fc094cebffedd5a342d34603edb45ae,PodSandboxId:11b530f514ff60ea17a5981f4f7734ea771a8480105aec8407c552dece3f6554,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724963239022892416,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=326ec9bb-ff9f-49aa-8a6c-bbbc0c658f6e name=/runtime.v1.RuntimeService/ListContainers
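	Annotation (not part of the captured log): the repeated Version / ImageFsInfo / ListContainers debug entries above are routine CRI polling against crio, not a fault; the same full container list is returned on every cycle. For reference, a minimal Go sketch of the same RuntimeService/ListContainers RPC over crio's default socket (the path also appears in the node's cri-socket annotation later in this log); it assumes google.golang.org/grpc and k8s.io/cri-api are available:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// crio's CRI endpoint on this guest.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		// An empty filter reproduces "No filters were applied, returning full container list".
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}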
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c24672f1f33f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   c0f16c27a76d4       storage-provisioner
	e6f2452b7e249       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   b6a71d1267b15       coredns-6f6b679f8f-dw2r7
	e17dc0c760cef       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   9db6339227b53       coredns-6f6b679f8f-crgtj
	d9c37f561b080       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   bd7adf539fa38       kube-proxy-f4x4j
	881dd3e8ca191       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   53daf6f331dc1       etcd-no-preload-397724
	e94f96d5c169a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   46d7177c94efa       kube-apiserver-no-preload-397724
	c8bae575fecaf       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   858897f5bdf7b       kube-controller-manager-no-preload-397724
	195ec5617f33e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   16813450eb8a8       kube-scheduler-no-preload-397724
	3a7a88c364026       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   11b530f514ff6       kube-apiserver-no-preload-397724
	
	
	==> coredns [e17dc0c760cefbdf280368ae8c71df7387ad9525e6132df6f10fc3d87b5febc4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e6f2452b7e2499acfc2dda22691859915ef9b81d7ffa9020ad6044fe06095263] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-397724
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-397724
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=no-preload-397724
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T20_32_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 20:32:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-397724
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 20:41:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 20:37:28 +0000   Thu, 29 Aug 2024 20:32:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 20:37:28 +0000   Thu, 29 Aug 2024 20:32:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 20:37:28 +0000   Thu, 29 Aug 2024 20:32:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 20:37:28 +0000   Thu, 29 Aug 2024 20:32:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.214
	  Hostname:    no-preload-397724
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d47525907d449e0bf8771dfb0d73935
	  System UUID:                2d475259-07d4-49e0-bf87-71dfb0d73935
	  Boot ID:                    f666bf1d-77bd-4dc3-9631-492300f9bc26
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-crgtj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 coredns-6f6b679f8f-dw2r7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-no-preload-397724                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m14s
	  kube-system                 kube-apiserver-no-preload-397724             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-controller-manager-no-preload-397724    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-proxy-f4x4j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-no-preload-397724             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 metrics-server-6867b74b74-nxdc5              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m20s (x8 over 9m20s)  kubelet          Node no-preload-397724 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s (x8 over 9m20s)  kubelet          Node no-preload-397724 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s (x7 over 9m20s)  kubelet          Node no-preload-397724 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m14s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m14s                  kubelet          Node no-preload-397724 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m14s                  kubelet          Node no-preload-397724 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m14s                  kubelet          Node no-preload-397724 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m10s                  node-controller  Node no-preload-397724 event: Registered Node no-preload-397724 in Controller
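	Annotation: the percentages in the "Allocated resources" block are integer-truncated ratios of the summed requests (and limits) to the node's allocatable figures. A quick check using only the numbers from the table above (Go constant arithmetic, truncating like kubectl does):

	package main

	import "fmt"

	func main() {
		// kubectl prints integer-truncated percentages: requests*100/allocatable.
		fmt.Println(950 * 100 / 2000)                    // cpu: 950m of 2000m        -> 47
		fmt.Println((440 << 20) * 100 / (2164184 << 10)) // memory: 440Mi / 2164184Ki -> 20
		fmt.Println((340 << 20) * 100 / (2164184 << 10)) // limits: 340Mi / 2164184Ki -> 16
	}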
	
	
	==> dmesg <==
	[  +0.054526] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042457] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.120725] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.707964] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.652959] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.838431] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.061804] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056790] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.168554] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.137221] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[Aug29 20:27] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[ +15.599465] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.059865] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.465393] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +4.713121] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.732141] kauditd_printk_skb: 86 callbacks suppressed
	[Aug29 20:32] systemd-fstab-generator[3083]: Ignoring "noauto" option for root device
	[  +0.064825] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.486867] systemd-fstab-generator[3401]: Ignoring "noauto" option for root device
	[  +0.094537] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.314793] systemd-fstab-generator[3533]: Ignoring "noauto" option for root device
	[  +0.110003] kauditd_printk_skb: 12 callbacks suppressed
	[Aug29 20:33] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [881dd3e8ca191bf84b9cc769c8e846c152f9ad7f469e4427002254e2cb7b68df] <==
	{"level":"info","ts":"2024-08-29T20:32:06.559052Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T20:32:06.559296Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.214:2380"}
	{"level":"info","ts":"2024-08-29T20:32:06.559332Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.214:2380"}
	{"level":"info","ts":"2024-08-29T20:32:06.562013Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"3dc9612c0afb3334","initial-advertise-peer-urls":["https://192.168.50.214:2380"],"listen-peer-urls":["https://192.168.50.214:2380"],"advertise-client-urls":["https://192.168.50.214:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.214:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T20:32:06.562085Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T20:32:07.062106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-29T20:32:07.062266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-29T20:32:07.062303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 received MsgPreVoteResp from 3dc9612c0afb3334 at term 1"}
	{"level":"info","ts":"2024-08-29T20:32:07.062343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 became candidate at term 2"}
	{"level":"info","ts":"2024-08-29T20:32:07.062367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 received MsgVoteResp from 3dc9612c0afb3334 at term 2"}
	{"level":"info","ts":"2024-08-29T20:32:07.062442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 became leader at term 2"}
	{"level":"info","ts":"2024-08-29T20:32:07.062492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3dc9612c0afb3334 elected leader 3dc9612c0afb3334 at term 2"}
	{"level":"info","ts":"2024-08-29T20:32:07.069304Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3dc9612c0afb3334","local-member-attributes":"{Name:no-preload-397724 ClientURLs:[https://192.168.50.214:2379]}","request-path":"/0/members/3dc9612c0afb3334/attributes","cluster-id":"6c00e6cf347ec681","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T20:32:07.069589Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T20:32:07.070078Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T20:32:07.070851Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T20:32:07.073871Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T20:32:07.074021Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T20:32:07.077071Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T20:32:07.077595Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T20:32:07.071097Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:32:07.084642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.214:2379"}
	{"level":"info","ts":"2024-08-29T20:32:07.087055Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6c00e6cf347ec681","local-member-id":"3dc9612c0afb3334","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:32:07.087155Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:32:07.087207Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 20:41:25 up 14 min,  0 users,  load average: 0.06, 0.12, 0.11
	Linux no-preload-397724 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3a7a88c36402647c6d49b743bc7067f62fc094cebffedd5a342d34603edb45ae] <==
	W0829 20:31:58.995673       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.006575       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.023061       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.098656       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.133120       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.194039       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.212638       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.256050       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.281570       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.342157       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.345647       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.496642       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.521302       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.580310       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.587199       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.695403       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.696690       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.767406       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:32:00.011051       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:32:00.021672       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:32:00.283029       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:32:00.308398       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:32:02.763200       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:32:03.130892       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:32:03.398040       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
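	Annotation: this exited attempt-1 apiserver was outliving its etcd — every gRPC channel keeps redialing 127.0.0.1:2379 and getting connection refused during 20:31:59–20:32:03, until the attempt-2 etcd above starts serving at 20:32:06. A trivial probe of the same endpoint (illustrative only):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same dial the apiserver's gRPC channels were retrying above.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
		if err != nil {
			fmt.Println("etcd unreachable:", err) // "connection refused" during the window above
			return
		}
		conn.Close()
		fmt.Println("etcd endpoint is accepting connections")
	}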
	
	
	==> kube-apiserver [e94f96d5c169a8e81bd28b96c8f06b5aad7264b27f5af8cbff1efd443c250d2e] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 20:37:09.873149       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:37:09.873416       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 20:37:09.874579       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:37:09.874644       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 20:38:09.875659       1 handler_proxy.go:99] no RequestInfo found in the context
	W0829 20:38:09.875714       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:38:09.875909       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0829 20:38:09.876079       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0829 20:38:09.877309       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:38:09.877386       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 20:40:09.878263       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:40:09.878680       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 20:40:09.878823       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:40:09.878897       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 20:40:09.879883       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:40:09.880043       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
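	Annotation: the periodic repetition above is the OpenAPI aggregation controller re-queueing v1beta1.metrics.k8s.io — the APIService is registered, but its backing metrics-server pod never becomes ready (see the kubelet ImagePullBackOff at the end of this log), so discovery against it returns 503. A sketch that surfaces the APIService conditions through the kube-aggregator clientset; the kubeconfig path is illustrative:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		client, err := aggregator.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		svc, err := client.ApiregistrationV1().APIServices().Get(
			context.Background(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range svc.Status.Conditions {
			// Expect Available=False while the backing metrics-server pod is unready.
			fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}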
	
	
	==> kube-controller-manager [c8bae575fecaf12208b623dd35e7c88b4f7b71a2552366e495103d134160e9a9] <==
	E0829 20:36:15.767865       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:36:16.300334       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:36:45.774407       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:36:46.308924       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:37:15.782447       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:37:16.318272       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 20:37:28.340410       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-397724"
	E0829 20:37:45.790124       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:37:46.327796       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 20:38:08.524603       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="284.437µs"
	E0829 20:38:15.795931       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:38:16.335679       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 20:38:22.520026       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="121.545µs"
	E0829 20:38:45.802743       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:38:46.343457       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:39:15.810227       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:39:16.351528       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:39:45.817550       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:39:46.358883       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:40:15.824671       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:40:16.366664       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:40:45.831620       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:40:46.374867       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:41:15.838245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:41:16.385013       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d9c37f561b080360811f0545e64ed61159766fde4836a414f54eb67de63ca057] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 20:32:18.171233       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 20:32:18.311653       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.214"]
	E0829 20:32:18.311788       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 20:32:18.497065       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 20:32:18.497195       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 20:32:18.497244       1 server_linux.go:169] "Using iptables Proxier"
	I0829 20:32:18.511174       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 20:32:18.511395       1 server.go:483] "Version info" version="v1.31.0"
	I0829 20:32:18.511405       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 20:32:18.513244       1 config.go:197] "Starting service config controller"
	I0829 20:32:18.513274       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 20:32:18.513292       1 config.go:104] "Starting endpoint slice config controller"
	I0829 20:32:18.513296       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 20:32:18.527349       1 config.go:326] "Starting node config controller"
	I0829 20:32:18.527361       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 20:32:18.615385       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 20:32:18.615425       1 shared_informer.go:320] Caches are synced for service config
	I0829 20:32:18.629252       1 shared_informer.go:320] Caches are synced for node config
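	Annotation: the truncated errors at the top of this section are kube-proxy's nftables cleanup — it feeds "add table ip kube-proxy" (and the ip6 variant) to nft via /dev/stdin, the guest kernel rejects it with "Operation not supported", and the proxier falls back to iptables, as the "Using iptables Proxier" line confirms. An equivalent one-shot probe (illustrative; kube-proxy itself drives nft through stdin rather than arguments):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// On this guest kernel nf_tables is unavailable, so this fails the same way.
		out, err := exec.Command("nft", "add", "table", "ip", "kube-proxy").CombinedOutput()
		fmt.Printf("err=%v output=%s\n", err, out)
	}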
	
	
	==> kube-scheduler [195ec5617f33e9a305c0aed8715efa8f6a9dff5ccf8e7048450a4deb2876fdc0] <==
	W0829 20:32:08.895194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 20:32:08.895584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:08.895208       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 20:32:08.895650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:08.895779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 20:32:08.895900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:08.896104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0829 20:32:08.896135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:08.896315       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 20:32:08.896344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:09.719664       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 20:32:09.719726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:09.808753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 20:32:09.808802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:09.957135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 20:32:09.957185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:09.974709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0829 20:32:09.974774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:10.001155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 20:32:10.001203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:10.005297       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 20:32:10.005384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:10.173290       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 20:32:10.173441       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 20:32:12.773291       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
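
The burst of "forbidden" reflector warnings above is the scheduler starting before its RBAC grants were visible to the API server; once the informer caches sync (last line) the warnings stop, so on their own they are startup noise rather than a failure cause. A quick permission check after startup, as a sketch (assumes kubectl is pointed at this cluster):

	kubectl auth can-i list pods --as=system:kube-scheduler
	kubectl auth can-i list csistoragecapacities.storage.k8s.io --as=system:kube-scheduler

Both should print "yes" once RBAC has settled.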
	
	
	==> kubelet <==
	Aug 29 20:40:11 no-preload-397724 kubelet[3408]: E0829 20:40:11.683279    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964011682507160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:20 no-preload-397724 kubelet[3408]: E0829 20:40:20.504897    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nxdc5" podUID="6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a"
	Aug 29 20:40:21 no-preload-397724 kubelet[3408]: E0829 20:40:21.685243    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964021684495681,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:21 no-preload-397724 kubelet[3408]: E0829 20:40:21.685601    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964021684495681,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:31 no-preload-397724 kubelet[3408]: E0829 20:40:31.687013    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964031686718828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:31 no-preload-397724 kubelet[3408]: E0829 20:40:31.687095    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964031686718828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:35 no-preload-397724 kubelet[3408]: E0829 20:40:35.506455    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nxdc5" podUID="6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a"
	Aug 29 20:40:41 no-preload-397724 kubelet[3408]: E0829 20:40:41.691027    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964041689893967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:41 no-preload-397724 kubelet[3408]: E0829 20:40:41.691706    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964041689893967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:47 no-preload-397724 kubelet[3408]: E0829 20:40:47.505137    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nxdc5" podUID="6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a"
	Aug 29 20:40:51 no-preload-397724 kubelet[3408]: E0829 20:40:51.693824    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964051693605537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:40:51 no-preload-397724 kubelet[3408]: E0829 20:40:51.693880    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964051693605537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:41:01 no-preload-397724 kubelet[3408]: E0829 20:41:01.505755    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nxdc5" podUID="6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a"
	Aug 29 20:41:01 no-preload-397724 kubelet[3408]: E0829 20:41:01.695767    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964061695300581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:41:01 no-preload-397724 kubelet[3408]: E0829 20:41:01.695842    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964061695300581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:41:11 no-preload-397724 kubelet[3408]: E0829 20:41:11.530034    3408 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 20:41:11 no-preload-397724 kubelet[3408]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 20:41:11 no-preload-397724 kubelet[3408]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 20:41:11 no-preload-397724 kubelet[3408]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 20:41:11 no-preload-397724 kubelet[3408]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 20:41:11 no-preload-397724 kubelet[3408]: E0829 20:41:11.698232    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964071697688319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:41:11 no-preload-397724 kubelet[3408]: E0829 20:41:11.698337    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964071697688319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:41:12 no-preload-397724 kubelet[3408]: E0829 20:41:12.505891    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nxdc5" podUID="6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a"
	Aug 29 20:41:21 no-preload-397724 kubelet[3408]: E0829 20:41:21.700744    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964081700014408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:41:21 no-preload-397724 kubelet[3408]: E0829 20:41:21.701101    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964081700014408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
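
Two errors recur through this kubelet log. The metrics-server pod backs off because its image reference points at fake.domain, a placeholder domain that does not resolve, so the pull can never succeed. The eviction manager cannot compute HasDedicatedImageFs because CRI-O's ImageFsInfo reply carries no ContainerFilesystems entry, as the quoted response shows. Both can be inspected directly; a sketch (the pod name is only valid while that replica still exists):

	minikube ssh -p no-preload-397724 -- sudo crictl imagefsinfo
	kubectl --context no-preload-397724 -n kube-system describe pod metrics-server-6867b74b74-nxdc5

The ip6tables canary failure suggests the guest kernel lacks the nat table for ip6tables; if the ip6table_nat module is built for the guest kernel, `minikube ssh -p no-preload-397724 -- sudo modprobe ip6table_nat` would confirm that.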
	
	
	==> storage-provisioner [8c24672f1f33fab0a363ffe9b15191c033aad911504fe4e0cbbb0c54723fc61d] <==
	I0829 20:32:18.658603       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 20:32:18.689186       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 20:32:18.689581       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 20:32:18.717028       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 20:32:18.717478       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-397724_b7275b61-9d3a-4e91-9372-220d5ce9c8ee!
	I0829 20:32:18.717707       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"866499d7-741c-49ff-95ed-e7e5f962ef68", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-397724_b7275b61-9d3a-4e91-9372-220d5ce9c8ee became leader
	I0829 20:32:18.821054       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-397724_b7275b61-9d3a-4e91-9372-220d5ce9c8ee!
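
The provisioner takes its lock through the kube-system/k8s.io-minikube-hostpath Endpoints object referenced in the event above; client-go releases of this vintage typically record the holder in a leader-election annotation on that object. To see the current holder, a sketch:

	kubectl --context no-preload-397724 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml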
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-397724 -n no-preload-397724
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-397724 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-nxdc5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-397724 describe pod metrics-server-6867b74b74-nxdc5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-397724 describe pod metrics-server-6867b74b74-nxdc5: exit status 1 (61.136279ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-nxdc5" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-397724 describe pod metrics-server-6867b74b74-nxdc5: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.20s)
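
The NotFound above is a race in the post-mortem itself: the field-selector list at helpers_test.go:272 saw metrics-server-6867b74b74-nxdc5 as non-running, but the pod was deleted (or replaced by its Deployment) before the describe at helpers_test.go:277 ran. A race-free capture would dump the objects in the same call that selects them, for example:

	kubectl --context no-preload-397724 get po -A --field-selector=status.phase!=Running -o yaml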

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
[... the identical "connection refused" warning repeated on every poll between the first and last occurrences shown; the API server at 192.168.39.116:8443 remained unreachable throughout ...]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
E0829 20:36:37.943265   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
    [previous line repeated 10 more times]
E0829 20:36:49.047286   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
    [previous line repeated 116 more times]
E0829 20:38:45.975311   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
E0829 20:41:37.942916   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
[the identical warning above recurred 117 times in total during the 9m0s wait; the apiserver at 192.168.39.116:8443 refused every connection]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-032002 -n old-k8s-version-032002
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-032002 -n old-k8s-version-032002: exit status 2 (221.404537ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-032002" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
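The 9m0s wait that produced the warning flood above is, in essence, a label-selector pod list retried until the context deadline. A minimal sketch of that kind of poll with client-go follows; the kubeconfig path, poll interval, and output strings are illustrative assumptions, not the test helper's actual code:

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative kubeconfig path; the real tests use the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Same overall shape as the wait above: list pods by label, warn on
		// transient errors, give up when the context deadline expires.
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()
		for {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// While the apiserver is down, this is the
				// "connect: connection refused" seen in the warnings above.
				fmt.Println("WARNING: pod list returned:", err)
			} else if len(pods.Items) > 0 {
				fmt.Println("pod found:", pods.Items[0].Name)
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("failed to start within 9m0s:", ctx.Err())
				return
			case <-time.After(3 * time.Second): // illustrative poll interval
			}
		}
	}

Once the deadline expires mid-request, client-go's rate limiter surfaces it as the "client rate limiter Wait returned an error: context deadline exceeded" message that appears just before the failure line above.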
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002: exit status 2 (219.804723ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
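Note the split state in the two status probes: {{.APIServer}} reports Stopped while {{.Host}} reports Running, which is why the harness skips kubectl commands. Since --format is rendered as a Go template over minikube's status struct, both fields should be readable in a single probe (assuming the same field names used in the commands above):

	out/minikube-linux-amd64 status --format='{{.Host}}/{{.APIServer}}' -p old-k8s-version-032002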
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-032002 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-032002 logs -n 25: (1.535165676s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-397724                                   | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-388383            | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC | 29 Aug 24 20:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-695305             | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-695305                  | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-695305 --memory=2200 --alsologtostderr   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:20 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-695305 image list                           | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:21 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-032002        | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-397724                  | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-397724                                   | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-388383                 | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-145096  | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-032002             | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-145096       | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC | 29 Aug 24 20:31 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 20:24:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 20:24:16.618808   68084 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:24:16.619043   68084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:24:16.619051   68084 out.go:358] Setting ErrFile to fd 2...
	I0829 20:24:16.619055   68084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:24:16.619206   68084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:24:16.619741   68084 out.go:352] Setting JSON to false
	I0829 20:24:16.620649   68084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7604,"bootTime":1724955453,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 20:24:16.620702   68084 start.go:139] virtualization: kvm guest
	I0829 20:24:16.622891   68084 out.go:177] * [default-k8s-diff-port-145096] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 20:24:16.624228   68084 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 20:24:16.624256   68084 notify.go:220] Checking for updates...
	I0829 20:24:16.627123   68084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 20:24:16.628611   68084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:24:16.629858   68084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:24:16.631013   68084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 20:24:16.632116   68084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 20:24:16.633630   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:24:16.634042   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:24:16.634080   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:24:16.648879   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0829 20:24:16.649315   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:24:16.649875   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:24:16.649893   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:24:16.650274   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:24:16.650504   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:24:16.650776   68084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 20:24:16.651053   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:24:16.651111   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:24:16.665964   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0829 20:24:16.666402   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:24:16.666918   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:24:16.666937   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:24:16.667250   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:24:16.667435   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:24:16.698712   68084 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 20:24:16.700010   68084 start.go:297] selected driver: kvm2
	I0829 20:24:16.700023   68084 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:24:16.700131   68084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 20:24:16.700915   68084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:24:16.700998   68084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 20:24:16.715940   68084 install.go:137] /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0829 20:24:16.716321   68084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:24:16.716388   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:24:16.716405   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:24:16.716452   68084 start.go:340] cluster config:
	{Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:24:16.716563   68084 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:24:16.718175   68084 out.go:177] * Starting "default-k8s-diff-port-145096" primary control-plane node in "default-k8s-diff-port-145096" cluster
	I0829 20:24:16.258820   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:16.719204   68084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:24:16.719231   68084 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 20:24:16.719237   68084 cache.go:56] Caching tarball of preloaded images
	I0829 20:24:16.719296   68084 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 20:24:16.719305   68084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 20:24:16.719385   68084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/config.json ...
	I0829 20:24:16.719549   68084 start.go:360] acquireMachinesLock for default-k8s-diff-port-145096: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
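
The acquireMachinesLock entry above logs a lock spec with Delay:500ms and Timeout:13m0s, and the "duration metric: took ..." lines later in this log show other processes blocking on the same lock for minutes. As a rough illustration of that pattern only (not minikube's actual implementation; the lock path and helper name here are hypothetical), a polling exclusive-file lock in Go could look like:

// Hypothetical sketch of a delay/timeout machines lock like the one logged above.
package main

import (
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file every delay until timeout.
func acquire(lockPath string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(lockPath) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", lockPath)
		}
		time.Sleep(delay) // matches the Delay:500ms in the logged spec
	}
}

func main() {
	release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held")
}
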
	I0829 20:24:22.338805   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:25.410778   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:31.490844   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:34.562885   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:40.642793   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:43.714939   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:49.794765   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:52.866858   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:58.946771   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:02.018832   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:08.098829   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:11.170833   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:17.250794   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:20.322926   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:26.402827   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:29.474844   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:35.554771   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:38.626850   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:41.630257   66989 start.go:364] duration metric: took 4m26.950412835s to acquireMachinesLock for "embed-certs-388383"
	I0829 20:25:41.630308   66989 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:25:41.630316   66989 fix.go:54] fixHost starting: 
	I0829 20:25:41.630791   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:25:41.630828   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:25:41.646005   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32873
	I0829 20:25:41.646405   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:25:41.646932   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:25:41.646959   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:25:41.647308   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:25:41.647525   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:25:41.647686   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:25:41.649457   66989 fix.go:112] recreateIfNeeded on embed-certs-388383: state=Stopped err=<nil>
	I0829 20:25:41.649491   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	W0829 20:25:41.649639   66989 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:25:41.651109   66989 out.go:177] * Restarting existing kvm2 VM for "embed-certs-388383" ...
	I0829 20:25:41.627651   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:25:41.627705   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:25:41.628067   66841 buildroot.go:166] provisioning hostname "no-preload-397724"
	I0829 20:25:41.628089   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:25:41.628259   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:25:41.630106   66841 machine.go:96] duration metric: took 4m35.46951337s to provisionDockerMachine
	I0829 20:25:41.630148   66841 fix.go:56] duration metric: took 4m35.494271139s for fixHost
	I0829 20:25:41.630159   66841 start.go:83] releasing machines lock for "no-preload-397724", held for 4m35.494325078s
	W0829 20:25:41.630182   66841 start.go:714] error starting host: provision: host is not running
	W0829 20:25:41.630284   66841 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0829 20:25:41.630295   66841 start.go:729] Will try again in 5 seconds ...
	I0829 20:25:41.652159   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Start
	I0829 20:25:41.652318   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring networks are active...
	I0829 20:25:41.653011   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring network default is active
	I0829 20:25:41.653426   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring network mk-embed-certs-388383 is active
	I0829 20:25:41.653824   66989 main.go:141] libmachine: (embed-certs-388383) Getting domain xml...
	I0829 20:25:41.654765   66989 main.go:141] libmachine: (embed-certs-388383) Creating domain...
	I0829 20:25:42.860512   66989 main.go:141] libmachine: (embed-certs-388383) Waiting to get IP...
	I0829 20:25:42.861297   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:42.861661   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:42.861739   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:42.861649   68412 retry.go:31] will retry after 207.172422ms: waiting for machine to come up
	I0829 20:25:43.070026   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.070414   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.070445   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.070368   68412 retry.go:31] will retry after 336.815982ms: waiting for machine to come up
	I0829 20:25:43.408817   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.409144   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.409182   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.409117   68412 retry.go:31] will retry after 330.159156ms: waiting for machine to come up
	I0829 20:25:43.740518   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.741039   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.741065   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.741002   68412 retry.go:31] will retry after 528.906592ms: waiting for machine to come up
	I0829 20:25:44.271695   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:44.272286   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:44.272344   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:44.272280   68412 retry.go:31] will retry after 616.92568ms: waiting for machine to come up
	I0829 20:25:46.631383   66841 start.go:360] acquireMachinesLock for no-preload-397724: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:25:44.891133   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:44.891535   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:44.891566   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:44.891499   68412 retry.go:31] will retry after 907.330558ms: waiting for machine to come up
	I0829 20:25:45.800480   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:45.800858   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:45.800885   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:45.800840   68412 retry.go:31] will retry after 1.189775318s: waiting for machine to come up
	I0829 20:25:46.992687   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:46.993155   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:46.993189   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:46.993142   68412 retry.go:31] will retry after 1.467244635s: waiting for machine to come up
	I0829 20:25:48.462770   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:48.463201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:48.463226   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:48.463173   68412 retry.go:31] will retry after 1.602764839s: waiting for machine to come up
	I0829 20:25:50.067082   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:50.067608   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:50.067638   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:50.067543   68412 retry.go:31] will retry after 1.562244323s: waiting for machine to come up
	I0829 20:25:51.632201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:51.632705   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:51.632731   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:51.632650   68412 retry.go:31] will retry after 1.747220365s: waiting for machine to come up
	I0829 20:25:53.382010   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:53.382463   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:53.382527   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:53.382454   68412 retry.go:31] will retry after 3.446054845s: waiting for machine to come up
	I0829 20:25:56.830511   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:56.830954   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:56.830988   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:56.830908   68412 retry.go:31] will retry after 4.53995219s: waiting for machine to come up
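
The retry.go lines above poll the libvirt network for the restarted VM's DHCP lease with intervals that grow and carry jitter (207ms, 336ms, 330ms, 528ms, ... up to several seconds). A minimal sketch of that wait-for-IP loop under those assumptions (lookupIP is a hypothetical stand-in for querying the DHCP leases):

// Sketch of a jittered, growing backoff while waiting for a VM to get an IP.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a placeholder; the real code asks libvirt for the lease by MAC.
func lookupIP(mac string) (string, error) { return "", errNoLease }

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		// Jitter keeps concurrent waiters from polling in lockstep;
		// the base interval grows, capped at a few seconds.
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
		if backoff < 4*time.Second {
			backoff += backoff / 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP on %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:6c:5a:0c", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
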
	I0829 20:26:02.603329   67607 start.go:364] duration metric: took 3m23.680319578s to acquireMachinesLock for "old-k8s-version-032002"
	I0829 20:26:02.603393   67607 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:02.603404   67607 fix.go:54] fixHost starting: 
	I0829 20:26:02.603837   67607 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:02.603884   67607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:02.621398   67607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35977
	I0829 20:26:02.621840   67607 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:02.622425   67607 main.go:141] libmachine: Using API Version  1
	I0829 20:26:02.622460   67607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:02.622810   67607 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:02.623040   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:02.623201   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetState
	I0829 20:26:02.624854   67607 fix.go:112] recreateIfNeeded on old-k8s-version-032002: state=Stopped err=<nil>
	I0829 20:26:02.624880   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	W0829 20:26:02.625020   67607 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:02.627161   67607 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-032002" ...
	I0829 20:26:02.628419   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .Start
	I0829 20:26:02.628578   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring networks are active...
	I0829 20:26:02.629339   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network default is active
	I0829 20:26:02.629732   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network mk-old-k8s-version-032002 is active
	I0829 20:26:02.630188   67607 main.go:141] libmachine: (old-k8s-version-032002) Getting domain xml...
	I0829 20:26:02.630924   67607 main.go:141] libmachine: (old-k8s-version-032002) Creating domain...
	I0829 20:26:01.375542   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.375928   66989 main.go:141] libmachine: (embed-certs-388383) Found IP for machine: 192.168.61.202
	I0829 20:26:01.375951   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has current primary IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.375974   66989 main.go:141] libmachine: (embed-certs-388383) Reserving static IP address...
	I0829 20:26:01.376364   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "embed-certs-388383", mac: "52:54:00:6c:5a:0c", ip: "192.168.61.202"} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.376398   66989 main.go:141] libmachine: (embed-certs-388383) DBG | skip adding static IP to network mk-embed-certs-388383 - found existing host DHCP lease matching {name: "embed-certs-388383", mac: "52:54:00:6c:5a:0c", ip: "192.168.61.202"}
	I0829 20:26:01.376411   66989 main.go:141] libmachine: (embed-certs-388383) Reserved static IP address: 192.168.61.202
	I0829 20:26:01.376428   66989 main.go:141] libmachine: (embed-certs-388383) Waiting for SSH to be available...
	I0829 20:26:01.376445   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Getting to WaitForSSH function...
	I0829 20:26:01.378600   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.378899   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.378937   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.379065   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Using SSH client type: external
	I0829 20:26:01.379088   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa (-rw-------)
	I0829 20:26:01.379118   66989 main.go:141] libmachine: (embed-certs-388383) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:01.379132   66989 main.go:141] libmachine: (embed-certs-388383) DBG | About to run SSH command:
	I0829 20:26:01.379141   66989 main.go:141] libmachine: (embed-certs-388383) DBG | exit 0
	I0829 20:26:01.498736   66989 main.go:141] libmachine: (embed-certs-388383) DBG | SSH cmd err, output: <nil>: 
	I0829 20:26:01.499103   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetConfigRaw
	I0829 20:26:01.499700   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:01.502022   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.502332   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.502362   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.502586   66989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/config.json ...
	I0829 20:26:01.502778   66989 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:01.502795   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:01.502980   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.505156   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.505452   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.505473   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.505590   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.505739   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.505902   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.506038   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.506183   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.506366   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.506376   66989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:01.602691   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:01.602721   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.603002   66989 buildroot.go:166] provisioning hostname "embed-certs-388383"
	I0829 20:26:01.603033   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.603232   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.605841   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.606170   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.606201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.606333   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.606505   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.606672   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.606786   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.606950   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.607121   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.607144   66989 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-388383 && echo "embed-certs-388383" | sudo tee /etc/hostname
	I0829 20:26:01.717669   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-388383
	
	I0829 20:26:01.717709   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.720400   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.720705   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.720733   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.720863   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.721097   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.721280   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.721446   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.721585   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.721811   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.721842   66989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-388383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-388383/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-388383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:01.827800   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:01.827835   66989 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:01.827869   66989 buildroot.go:174] setting up certificates
	I0829 20:26:01.827882   66989 provision.go:84] configureAuth start
	I0829 20:26:01.827894   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.828214   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:01.830619   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.831150   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.831184   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.831339   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.833642   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.833961   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.833987   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.834161   66989 provision.go:143] copyHostCerts
	I0829 20:26:01.834217   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:01.834241   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:01.834322   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:01.834445   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:01.834457   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:01.834491   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:01.834608   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:01.834621   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:01.834660   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:01.834726   66989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.embed-certs-388383 san=[127.0.0.1 192.168.61.202 embed-certs-388383 localhost minikube]
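
provision.go:117 above mints server.pem from the local CA with SAN entries for the loopback address, the VM IP, the profile name, localhost, and minikube. A self-contained Go sketch of that step (a throwaway in-memory CA stands in for ca.pem/ca-key.pem, which the real code loads from disk):

// Sketch: sign a server certificate carrying the SANs seen in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048) // stand-in for ca-key.pem
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-388383"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the logged san=[...] list.
		DNSNames:    []string{"embed-certs-388383", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.202")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caTmpl, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
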
	I0829 20:26:01.992735   66989 provision.go:177] copyRemoteCerts
	I0829 20:26:01.992794   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:01.992819   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.995463   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.995835   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.995862   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.996006   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.996179   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.996333   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.996460   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.077017   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:02.105498   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 20:26:02.133974   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 20:26:02.161330   66989 provision.go:87] duration metric: took 333.435119ms to configureAuth
	I0829 20:26:02.161362   66989 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:02.161579   66989 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:02.161707   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.164373   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.164696   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.164724   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.164909   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.165111   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.165276   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.165402   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.165535   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:02.165697   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:02.165711   66989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:02.377994   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:02.378022   66989 machine.go:96] duration metric: took 875.231112ms to provisionDockerMachine
	I0829 20:26:02.378037   66989 start.go:293] postStartSetup for "embed-certs-388383" (driver="kvm2")
	I0829 20:26:02.378053   66989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:02.378078   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.378404   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:02.378432   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.380920   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.381329   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.381358   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.381564   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.381797   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.381975   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.382124   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.461053   66989 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:02.465391   66989 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:02.465417   66989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:02.465479   66989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:02.465550   66989 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:02.465635   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:02.474909   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:02.500025   66989 start.go:296] duration metric: took 121.973853ms for postStartSetup
	I0829 20:26:02.500064   66989 fix.go:56] duration metric: took 20.86974885s for fixHost
	I0829 20:26:02.500082   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.502976   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.503380   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.503411   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.503599   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.503808   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.503976   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.504126   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.504283   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:02.504459   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:02.504469   66989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:02.603161   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963162.568310162
	
	I0829 20:26:02.603181   66989 fix.go:216] guest clock: 1724963162.568310162
	I0829 20:26:02.603187   66989 fix.go:229] Guest: 2024-08-29 20:26:02.568310162 +0000 UTC Remote: 2024-08-29 20:26:02.500067292 +0000 UTC m=+288.185978445 (delta=68.24287ms)
	I0829 20:26:02.603210   66989 fix.go:200] guest clock delta is within tolerance: 68.24287ms
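
fix.go:216-229 above read the guest clock by running date +%s.%N over SSH, then accept the drift when the delta against the host clock stays inside a tolerance (68.24ms here). A sketch of that check, with runRemote as a hypothetical stand-in for the SSH runner:

// Sketch of the guest-clock check: parse `date +%s.%N` output, compare to local time.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// runRemote fakes the SSH call by answering with the local clock.
func runRemote(cmd string) (string, error) {
	now := time.Now()
	return fmt.Sprintf("%d.%09d", now.Unix(), now.Nanosecond()), nil
}

func guestClockDelta(tolerance time.Duration) (time.Duration, bool, error) {
	out, err := runRemote("date +%s.%N")
	if err != nil {
		return 0, false, err
	}
	secs, nanos, _ := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(secs, 10, 64)
	if err != nil {
		return 0, false, err
	}
	n, err := strconv.ParseInt(nanos, 10, 64)
	if err != nil {
		return 0, false, err
	}
	delta := time.Since(time.Unix(s, n))
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	delta, ok, err := guestClockDelta(2 * time.Second)
	fmt.Println(delta, ok, err)
}
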
	I0829 20:26:02.603216   66989 start.go:83] releasing machines lock for "embed-certs-388383", held for 20.972921408s
	I0829 20:26:02.603248   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.603532   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:02.606426   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.606804   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.606834   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.607021   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607527   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607694   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607770   66989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:02.607809   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.607878   66989 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:02.607896   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.610239   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610264   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610657   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.610685   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610723   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.610742   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610844   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.611014   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.611014   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.611145   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.611208   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.611268   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.611341   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.611399   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.712435   66989 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:02.718614   66989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:02.865138   66989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:02.871510   66989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:02.871593   66989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:02.887316   66989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:02.887340   66989 start.go:495] detecting cgroup driver to use...
	I0829 20:26:02.887394   66989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:02.905024   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:02.918922   66989 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:02.918986   66989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:02.932660   66989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:02.946679   66989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:03.056273   66989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:03.216885   66989 docker.go:233] disabling docker service ...
	I0829 20:26:03.216959   66989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:03.231363   66989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:03.245609   66989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:03.368087   66989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:03.493947   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:03.508803   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:03.527542   66989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:26:03.527607   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.538301   66989 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:03.538370   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.549672   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.562203   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.573572   66989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:03.585031   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.596778   66989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.619405   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
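Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image, switching the cgroup manager to cgroupfs with conmon in the pod cgroup, and opening unprivileged low ports. A representative reconstruction of the resulting drop-in (illustrative; the exact file on the VM was not captured):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]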
	I0829 20:26:03.630337   66989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:03.640492   66989 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:03.640568   66989 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:03.657931   66989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
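The status-255 failure above is expected on a fresh VM: /proc/sys/net/bridge/* only exists once the br_netfilter module is loaded, so the tooling falls back to modprobe and then enables IPv4 forwarding. A self-contained sketch of that fallback (assumes root):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            // The sysctl isn't exposed until br_netfilter is loaded
            // (hence the status 255 above); load the module first.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                log.Fatalf("modprobe br_netfilter: %v\n%s", err, out)
            }
        }
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            log.Fatal(err)
        }
    }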
	I0829 20:26:03.673756   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:03.792856   66989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:03.880493   66989 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:03.880551   66989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:03.885793   66989 start.go:563] Will wait 60s for crictl version
	I0829 20:26:03.885850   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:26:03.889835   66989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:03.928633   66989 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:03.928702   66989 ssh_runner.go:195] Run: crio --version
	I0829 20:26:03.958861   66989 ssh_runner.go:195] Run: crio --version
	I0829 20:26:03.987724   66989 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:26:03.989009   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:03.991889   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:03.992308   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:03.992334   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:03.992567   66989 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:03.996945   66989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
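The bash pipeline above refreshes the host.minikube.internal mapping idempotently: strip any stale line, append the new one, and copy the temp file back over /etc/hosts. The same idea in Go (hypothetical helper, not minikube's own code):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // setHostsEntry mirrors the grep -v / echo / cp pipeline above: remove any
    // stale line ending in "\t<host>", then append the fresh "ip\thost" mapping.
    func setHostsEntry(ip, host string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := setHostsEntry("192.168.61.1", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }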
	I0829 20:26:04.009353   66989 kubeadm.go:883] updating cluster {Name:embed-certs-388383 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:04.009462   66989 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:26:04.009501   66989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:04.051583   66989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:26:04.051643   66989 ssh_runner.go:195] Run: which lz4
	I0829 20:26:04.055929   66989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:04.060214   66989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:04.060240   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 20:26:03.867691   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting to get IP...
	I0829 20:26:03.868798   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:03.869246   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:03.869318   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:03.869235   68552 retry.go:31] will retry after 220.928648ms: waiting for machine to come up
	I0829 20:26:04.091675   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.092057   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.092084   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.092020   68552 retry.go:31] will retry after 352.781755ms: waiting for machine to come up
	I0829 20:26:04.446766   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.447277   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.447301   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.447224   68552 retry.go:31] will retry after 480.96031ms: waiting for machine to come up
	I0829 20:26:04.929561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.930149   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.930181   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.930051   68552 retry.go:31] will retry after 415.057247ms: waiting for machine to come up
	I0829 20:26:05.346757   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.347224   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.347258   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.347196   68552 retry.go:31] will retry after 609.958508ms: waiting for machine to come up
	I0829 20:26:05.959227   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.959774   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.959825   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.959702   68552 retry.go:31] will retry after 680.801337ms: waiting for machine to come up
	I0829 20:26:06.642811   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:06.643312   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:06.643343   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:06.643269   68552 retry.go:31] will retry after 995.561322ms: waiting for machine to come up
	I0829 20:26:07.640147   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:07.640617   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:07.640652   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:07.640588   68552 retry.go:31] will retry after 1.22436043s: waiting for machine to come up
	I0829 20:26:05.472272   66989 crio.go:462] duration metric: took 1.416373513s to copy over tarball
	I0829 20:26:05.472355   66989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:07.583560   66989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.111164398s)
	I0829 20:26:07.583595   66989 crio.go:469] duration metric: took 2.111297179s to extract the tarball
	I0829 20:26:07.583605   66989 ssh_runner.go:146] rm: /preloaded.tar.lz4
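Because no preload was baked into the image, the ~389 MB image cache was copied up and unpacked into /var, after which the tarball is deleted. The unpack step is the exact tar invocation from the log; as a standalone sketch:

    // Sketch of the preload unpack step: the same tar invocation the log runs
    // over SSH, extracting the lz4-compressed image cache into /var for
    // cri-o to find.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }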
	I0829 20:26:07.622447   66989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:07.671704   66989 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:26:07.671732   66989 cache_images.go:84] Images are preloaded, skipping loading
	I0829 20:26:07.671742   66989 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.31.0 crio true true} ...
	I0829 20:26:07.671869   66989 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-388383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:07.671958   66989 ssh_runner.go:195] Run: crio config
	I0829 20:26:07.717217   66989 cni.go:84] Creating CNI manager for ""
	I0829 20:26:07.717242   66989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:07.717263   66989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:07.717290   66989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-388383 NodeName:embed-certs-388383 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:26:07.717465   66989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-388383"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:07.717549   66989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:26:07.727174   66989 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:07.727258   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:07.736512   66989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0829 20:26:07.752727   66989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:07.772430   66989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0829 20:26:07.793343   66989 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:07.798214   66989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:07.811285   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:07.927025   66989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:07.943741   66989 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383 for IP: 192.168.61.202
	I0829 20:26:07.943765   66989 certs.go:194] generating shared ca certs ...
	I0829 20:26:07.943784   66989 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:07.943984   66989 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:07.944047   66989 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:07.944061   66989 certs.go:256] generating profile certs ...
	I0829 20:26:07.944177   66989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/client.key
	I0829 20:26:07.944254   66989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.key.03b29390
	I0829 20:26:07.944317   66989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.key
	I0829 20:26:07.944494   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:07.944538   66989 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:07.944551   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:07.944581   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:07.944605   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:07.944628   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:07.944670   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:07.945252   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:07.971277   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:08.012892   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:08.042038   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:08.067708   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0829 20:26:08.095930   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:26:08.127171   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:08.151287   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:26:08.175525   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:08.199076   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:08.222783   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:08.245783   66989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:08.261839   66989 ssh_runner.go:195] Run: openssl version
	I0829 20:26:08.267545   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:08.278347   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.284232   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.284283   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.292024   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:08.306831   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:08.320607   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.325027   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.325070   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.330808   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:08.341457   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:08.352323   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.356822   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.356891   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.362617   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
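Each CA landing in /usr/share/ca-certificates also gets a symlink in /etc/ssl/certs named after its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients look up trust anchors. A sketch of that hash-and-link step (hypothetical helper mirroring the ln -fs commands above):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkByHash computes the subject hash with openssl and links <hash>.0
    // at the certificate, mirroring the test's ln -fs commands.
    func linkByHash(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }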
	I0829 20:26:08.373755   66989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:08.378153   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:08.384225   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:08.390136   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:08.396002   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:08.401713   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:08.407437   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
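The -checkend 86400 runs above ask one question per certificate: will it still be valid 24 hours from now? A non-zero exit flags the cert for regeneration before the control plane is restarted. Minimal illustration:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // openssl exits non-zero if the cert expires within 86400s (24h).
        err := exec.Command("openssl", "x509", "-noout",
            "-in", "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "-checkend", "86400").Run()
        fmt.Println("valid for 24h:", err == nil)
    }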
	I0829 20:26:08.413033   66989 kubeadm.go:392] StartCluster: {Name:embed-certs-388383 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:08.413119   66989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:08.413173   66989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:08.450685   66989 cri.go:89] found id: ""
	I0829 20:26:08.450757   66989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:08.460787   66989 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:08.460809   66989 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:08.460853   66989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:08.470179   66989 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:08.471673   66989 kubeconfig.go:125] found "embed-certs-388383" server: "https://192.168.61.202:8443"
	I0829 20:26:08.474839   66989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:08.483951   66989 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0829 20:26:08.483992   66989 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:08.484007   66989 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:08.484085   66989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:08.525947   66989 cri.go:89] found id: ""
	I0829 20:26:08.526013   66989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:08.541862   66989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:08.551179   66989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:08.551200   66989 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:08.551249   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:26:08.559897   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:08.559970   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:08.569317   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:26:08.577858   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:08.577905   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:08.587113   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:26:08.595645   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:08.595705   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:08.604803   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:26:08.613070   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:08.613125   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:26:08.622037   66989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:08.631330   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:08.742682   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:08.866518   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:08.866954   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:08.866985   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:08.866896   68552 retry.go:31] will retry after 1.707701085s: waiting for machine to come up
	I0829 20:26:10.576676   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:10.577094   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:10.577124   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:10.577047   68552 retry.go:31] will retry after 1.496799212s: waiting for machine to come up
	I0829 20:26:12.075964   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:12.076412   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:12.076451   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:12.076377   68552 retry.go:31] will retry after 2.246779697s: waiting for machine to come up
	I0829 20:26:09.809078   66989 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.066360218s)
	I0829 20:26:09.809118   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.027517   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.095959   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.199656   66989 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:10.199745   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:10.700569   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:11.200798   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:11.700664   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.200052   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.700839   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.715319   66989 api_server.go:72] duration metric: took 2.515661322s to wait for apiserver process to appear ...
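The repeated pgrep runs above are a simple poll: retry roughly every 500ms until a kube-apiserver process with minikube's command line shows up, bounded by a deadline. A sketch of the loop (timings are illustrative):

    // Sketch of the process wait: poll pgrep for the kube-apiserver command
    // line until it exits 0, as the repeated Run lines above show.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("apiserver process is up")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }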
	I0829 20:26:12.715351   66989 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:26:12.715374   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.687527   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:15.687558   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:15.687572   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.716339   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:15.716365   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:15.716378   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.750700   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:15.750732   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:16.216255   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:16.224376   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:16.224401   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:16.715457   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:16.723983   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:16.724004   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:17.215562   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:17.219605   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0829 20:26:17.225473   66989 api_server.go:141] control plane version: v1.31.0
	I0829 20:26:17.225496   66989 api_server.go:131] duration metric: took 4.510137186s to wait for apiserver health ...
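The healthz wait above walks through the apiserver's startup states: first 403 (RBAC bootstrap roles are not yet in place for anonymous requests), then 500 while individual post-start hooks still report failures, and finally 200 "ok". A self-contained sketch of the polling client (endpoint taken from the log; TLS verification is skipped for brevity, whereas the test authenticates with the cluster's client certs):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.61.202:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // "ok"
                    return
                }
                // 403s and 500s like the ones above are expected while
                // post-start hooks finish; fall through and retry.
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for a 200 from /healthz")
    }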
	I0829 20:26:17.225504   66989 cni.go:84] Creating CNI manager for ""
	I0829 20:26:17.225509   66989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:17.227379   66989 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:26:14.324452   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:14.324770   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:14.324808   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:14.324748   68552 retry.go:31] will retry after 3.172592587s: waiting for machine to come up
	I0829 20:26:17.500203   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:17.500540   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:17.500573   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:17.500485   68552 retry.go:31] will retry after 2.81386002s: waiting for machine to come up
	I0829 20:26:17.228505   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:26:17.238762   66989 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
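The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI chain matching the 10.244.0.0/16 pod CIDR chosen earlier. A representative bridge conflist of that shape (illustrative; not the exact bytes minikube writes):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }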
	I0829 20:26:17.264380   66989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:26:17.274981   66989 system_pods.go:59] 8 kube-system pods found
	I0829 20:26:17.275009   66989 system_pods.go:61] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:26:17.275016   66989 system_pods.go:61] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:26:17.275023   66989 system_pods.go:61] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:26:17.275028   66989 system_pods.go:61] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:26:17.275033   66989 system_pods.go:61] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:26:17.275038   66989 system_pods.go:61] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:26:17.275043   66989 system_pods.go:61] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:26:17.275048   66989 system_pods.go:61] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:26:17.275056   66989 system_pods.go:74] duration metric: took 10.656426ms to wait for pod list to return data ...
	I0829 20:26:17.275074   66989 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:26:17.279480   66989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:26:17.279504   66989 node_conditions.go:123] node cpu capacity is 2
	I0829 20:26:17.279519   66989 node_conditions.go:105] duration metric: took 4.439469ms to run NodePressure ...
	I0829 20:26:17.279537   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:17.561282   66989 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:26:17.565287   66989 kubeadm.go:739] kubelet initialised
	I0829 20:26:17.565307   66989 kubeadm.go:740] duration metric: took 4.002605ms waiting for restarted kubelet to initialise ...
	I0829 20:26:17.565314   66989 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:17.570104   66989 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.576425   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.576454   66989 pod_ready.go:82] duration metric: took 6.324083ms for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.576464   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.576474   66989 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.582501   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "etcd-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.582523   66989 pod_ready.go:82] duration metric: took 6.040325ms for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.582547   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "etcd-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.582556   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.588534   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.588554   66989 pod_ready.go:82] duration metric: took 5.988678ms for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.588562   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.588568   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.668334   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.668365   66989 pod_ready.go:82] duration metric: took 79.787211ms for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.668378   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.668386   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.068248   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-proxy-fcxs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.068286   66989 pod_ready.go:82] duration metric: took 399.880238ms for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.068299   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-proxy-fcxs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.068308   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.468096   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.468126   66989 pod_ready.go:82] duration metric: took 399.810823ms for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.468134   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.468141   66989 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.868444   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.868478   66989 pod_ready.go:82] duration metric: took 400.329102ms for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.868490   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.868499   66989 pod_ready.go:39] duration metric: took 1.303176044s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
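
The pod_ready loop above re-checks each system pod roughly every 400ms and skips pods whose node is not yet "Ready". A stripped-down sketch of that wait-for-Ready pattern with client-go; the wiring is an assumption for illustration, not minikube's pod_ready helper:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	c := kubernetes.NewForConfigOrDie(cfg)
    	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
    	for time.Now().Before(deadline) {
    		ok, err := podReady(context.TODO(), c, "kube-system", "coredns-6f6b679f8f-dg6t6")
    		if err == nil && ok {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(400 * time.Millisecond) // the log shows ~400ms re-checks
    	}
    	fmt.Println("timed out waiting for Ready")
    }
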
	I0829 20:26:18.868519   66989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:26:18.880892   66989 ops.go:34] apiserver oom_adj: -16
	I0829 20:26:18.880916   66989 kubeadm.go:597] duration metric: took 10.42010114s to restartPrimaryControlPlane
	I0829 20:26:18.880925   66989 kubeadm.go:394] duration metric: took 10.467899141s to StartCluster
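
The ops.go line above sanity-checks that the restarted apiserver keeps a strongly negative OOM score. A tiny local sketch of the same read; the pgrep handling here is an assumption, since the real check sends the whole pipeline through ssh_runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	// pgrep can print several PIDs; take the first one.
    	pid := strings.Fields(string(out))[0]
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj) // the run above reported -16
    }
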
	I0829 20:26:18.880946   66989 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:18.881032   66989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:26:18.884130   66989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:18.884619   66989 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:26:18.884674   66989 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:26:18.884749   66989 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-388383"
	I0829 20:26:18.884765   66989 addons.go:69] Setting default-storageclass=true in profile "embed-certs-388383"
	I0829 20:26:18.884783   66989 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-388383"
	W0829 20:26:18.884792   66989 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:26:18.884804   66989 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-388383"
	I0829 20:26:18.884816   66989 addons.go:69] Setting metrics-server=true in profile "embed-certs-388383"
	I0829 20:26:18.884828   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.884856   66989 addons.go:234] Setting addon metrics-server=true in "embed-certs-388383"
	W0829 20:26:18.884877   66989 addons.go:243] addon metrics-server should already be in state true
	I0829 20:26:18.884884   66989 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:18.884912   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.885134   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885176   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.885216   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885249   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.885291   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885338   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.886484   66989 out.go:177] * Verifying Kubernetes components...
	I0829 20:26:18.887938   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:18.900910   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I0829 20:26:18.901377   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.901917   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.901938   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.902300   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.903062   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.903110   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.903810   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0829 20:26:18.903824   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0829 20:26:18.904282   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.904303   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.904673   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.904691   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.904829   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.904845   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.905017   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.905428   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.905462   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.905664   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.905860   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.909388   66989 addons.go:234] Setting addon default-storageclass=true in "embed-certs-388383"
	W0829 20:26:18.909408   66989 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:26:18.909437   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.909793   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.909839   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.921180   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
	I0829 20:26:18.921597   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.922074   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.922087   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.922470   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.922697   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.922725   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I0829 20:26:18.923052   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.923592   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.923610   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.923919   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.924057   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.924063   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45681
	I0829 20:26:18.924461   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.924519   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.924984   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.925002   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.925632   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.925682   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.926152   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.926194   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.926494   66989 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:18.927266   66989 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:26:18.928130   66989 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:26:18.928141   66989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:26:18.928155   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.928843   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:26:18.928863   66989 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:26:18.928888   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.931716   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932273   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.932296   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932424   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932456   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.932644   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.932810   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.932869   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.932891   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.933050   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.933100   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:18.933271   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.933426   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.933598   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:18.942718   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38109
	I0829 20:26:18.943150   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.943532   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.943553   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.943908   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.944027   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.945304   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.945498   66989 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:26:18.945510   66989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:26:18.945522   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.948108   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.948469   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.948494   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.948730   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.948889   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.949085   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.949222   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:19.111953   66989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:19.131195   66989 node_ready.go:35] waiting up to 6m0s for node "embed-certs-388383" to be "Ready" ...
	I0829 20:26:19.246857   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:26:19.269511   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:26:19.269670   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:26:19.269691   66989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:26:19.346200   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:26:19.346234   66989 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:26:19.374530   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:26:19.374566   66989 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:26:19.418474   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:26:20.495022   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.225476769s)
	I0829 20:26:20.495077   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495090   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495185   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.248286753s)
	I0829 20:26:20.495232   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495249   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495572   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.495600   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.495611   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495619   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495634   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.495663   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.495664   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.495678   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495688   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.496014   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.496029   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.496061   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.496097   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.496111   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.504149   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.504182   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.504419   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.504436   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.519341   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.100829284s)
	I0829 20:26:20.519396   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.519422   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.519670   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.519716   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.519734   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.519746   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.519755   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.520040   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.520055   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.520072   66989 addons.go:475] Verifying addon metrics-server=true in "embed-certs-388383"
	I0829 20:26:20.523102   66989 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
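
The addon flow above boils down to: scp each manifest into /etc/kubernetes/addons, then apply the batch with the guest's pinned kubectl under the minikube kubeconfig. A local stand-in for that apply step; the exec wiring is an assumption, as minikube drives the same command over SSH:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	args := []string{
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.31.0/kubectl", "apply",
    	}
    	for _, m := range manifests {
    		args = append(args, "-f", m) // one apply for the whole batch, as in the log
    	}
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	fmt.Printf("%s", out)
    	if err != nil {
    		panic(err)
    	}
    }
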
	I0829 20:26:21.515365   68084 start.go:364] duration metric: took 2m4.795762476s to acquireMachinesLock for "default-k8s-diff-port-145096"
	I0829 20:26:21.515428   68084 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:21.515439   68084 fix.go:54] fixHost starting: 
	I0829 20:26:21.515864   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:21.515904   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:21.535441   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0829 20:26:21.535886   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:21.536390   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:26:21.536414   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:21.536819   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:21.537035   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:21.537203   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:26:21.538735   68084 fix.go:112] recreateIfNeeded on default-k8s-diff-port-145096: state=Stopped err=<nil>
	I0829 20:26:21.538762   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	W0829 20:26:21.538901   68084 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:21.540852   68084 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-145096" ...
	I0829 20:26:21.542258   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Start
	I0829 20:26:21.542429   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring networks are active...
	I0829 20:26:21.543181   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring network default is active
	I0829 20:26:21.543522   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring network mk-default-k8s-diff-port-145096 is active
	I0829 20:26:21.543872   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Getting domain xml...
	I0829 20:26:21.544627   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Creating domain...
	I0829 20:26:20.317138   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317672   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has current primary IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317700   67607 main.go:141] libmachine: (old-k8s-version-032002) Found IP for machine: 192.168.39.116
	I0829 20:26:20.317716   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserving static IP address...
	I0829 20:26:20.318143   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.318169   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserved static IP address: 192.168.39.116
	I0829 20:26:20.318189   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | skip adding static IP to network mk-old-k8s-version-032002 - found existing host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"}
	I0829 20:26:20.318208   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Getting to WaitForSSH function...
	I0829 20:26:20.318217   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting for SSH to be available...
	I0829 20:26:20.320598   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.320961   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.320989   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.321082   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH client type: external
	I0829 20:26:20.321121   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa (-rw-------)
	I0829 20:26:20.321156   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:20.321171   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | About to run SSH command:
	I0829 20:26:20.321185   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | exit 0
	I0829 20:26:20.446805   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | SSH cmd err, output: <nil>: 
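
"Waiting for SSH to be available" above is implemented by repeatedly running `exit 0` through an external ssh client until it succeeds. A simplified probe that only retries a TCP dial to port 22; the retry count and intervals are assumptions:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	addr := "192.168.39.116:22" // the IP reserved for the machine in the log
    	for attempt := 0; attempt < 60; attempt++ {
    		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Println("SSH port is reachable")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }
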
	I0829 20:26:20.447204   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetConfigRaw
	I0829 20:26:20.447944   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.450726   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.451160   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451464   67607 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/config.json ...
	I0829 20:26:20.451670   67607 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:20.451690   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:20.451886   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.454120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454496   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.454566   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454648   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.454808   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.454975   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.455123   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.455282   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.455520   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.455533   67607 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:20.555074   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:20.555100   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555331   67607 buildroot.go:166] provisioning hostname "old-k8s-version-032002"
	I0829 20:26:20.555353   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555540   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.558576   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559058   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.559086   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559273   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.559490   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559661   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559834   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.560026   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.560189   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.560201   67607 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-032002 && echo "old-k8s-version-032002" | sudo tee /etc/hostname
	I0829 20:26:20.675352   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-032002
	
	I0829 20:26:20.675400   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.678472   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.678908   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.678944   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.679139   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.679341   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679533   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679710   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.679884   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.680090   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.680108   67607 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-032002' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-032002/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-032002' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:20.789673   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
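
The shell block above idempotently maps 127.0.1.1 to the machine name: skip if a matching line already exists, rewrite an existing 127.0.1.1 entry in place, otherwise append one. A sketch of rendering that script for an arbitrary hostname; the Sprintf templating is an assumption, not minikube's code:

    package main

    import "fmt"

    func hostsFixup(name string) string {
    	return fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    	else
    		echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    	fi
    fi`, name)
    }

    func main() {
    	fmt.Println(hostsFixup("old-k8s-version-032002"))
    }
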
	I0829 20:26:20.789713   67607 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:20.789744   67607 buildroot.go:174] setting up certificates
	I0829 20:26:20.789753   67607 provision.go:84] configureAuth start
	I0829 20:26:20.789761   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.790067   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.792822   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793152   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.793173   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793338   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.795624   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.795948   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.795974   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.796080   67607 provision.go:143] copyHostCerts
	I0829 20:26:20.796148   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:20.796168   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:20.796236   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:20.796344   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:20.796355   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:20.796387   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:20.796467   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:20.796476   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:20.796503   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
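
copyHostCerts above deliberately removes any stale ca.pem/cert.pem/key.pem before copying fresh ones, so a re-run never appends to or partially overwrites old material. A bare-bones remove-then-copy helper in the same spirit; error handling and permissions are trimmed for brevity:

    package main

    import (
    	"io"
    	"os"
    )

    // copyFresh removes dst if present, then copies src to dst, matching the
    // "found ..., removing ..." then "cp: ..." sequence in the log.
    func copyFresh(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		if err := os.Remove(dst); err != nil {
    			return err
    		}
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	if err := copyFresh(
    		"/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem",
    		"/home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem",
    	); err != nil {
    		panic(err)
    	}
    }
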
	I0829 20:26:20.796573   67607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-032002 san=[127.0.0.1 192.168.39.116 localhost minikube old-k8s-version-032002]
	I0829 20:26:20.906382   67607 provision.go:177] copyRemoteCerts
	I0829 20:26:20.906436   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:20.906466   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.909180   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909488   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.909519   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909666   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.909831   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.909963   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.910062   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:20.989017   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:21.018571   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 20:26:21.043015   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:21.067288   67607 provision.go:87] duration metric: took 277.522292ms to configureAuth
	I0829 20:26:21.067322   67607 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:21.067527   67607 config.go:182] Loaded profile config "old-k8s-version-032002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 20:26:21.067607   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.070264   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070642   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.070679   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070881   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.071088   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071288   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071465   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.071661   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.071886   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.071923   67607 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:21.290979   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:21.291003   67607 machine.go:96] duration metric: took 839.319831ms to provisionDockerMachine
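
Setting the container-runtime options above reduces to one remote command: write CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube, then restart crio. A local reconstruction of assembling and running it; the bash -c wrapping is an assumption, as minikube sends the same string over SSH:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	opts := "--insecure-registry 10.96.0.0/12" // value from the log
    	script := fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
    CRIO_MINIKUBE_OPTIONS='%s '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
    	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
    	fmt.Printf("%s", out)
    	if err != nil {
    		panic(err)
    	}
    }
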
	I0829 20:26:21.291014   67607 start.go:293] postStartSetup for "old-k8s-version-032002" (driver="kvm2")
	I0829 20:26:21.291026   67607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:21.291046   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.291342   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:21.291366   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.293946   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294245   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.294273   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294464   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.294686   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.294840   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.294964   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.373592   67607 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:21.377797   67607 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:21.377826   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:21.377892   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:21.377966   67607 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:21.378054   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:21.387886   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:21.413456   67607 start.go:296] duration metric: took 122.429334ms for postStartSetup
	I0829 20:26:21.413497   67607 fix.go:56] duration metric: took 18.810093949s for fixHost
	I0829 20:26:21.413522   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.416095   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416391   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.416418   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416594   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.416803   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.416970   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.417115   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.417272   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.417474   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.417489   67607 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:21.515167   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963181.486447470
	
	I0829 20:26:21.515190   67607 fix.go:216] guest clock: 1724963181.486447470
	I0829 20:26:21.515200   67607 fix.go:229] Guest: 2024-08-29 20:26:21.48644747 +0000 UTC Remote: 2024-08-29 20:26:21.413502498 +0000 UTC m=+222.629982255 (delta=72.944972ms)
	I0829 20:26:21.515225   67607 fix.go:200] guest clock delta is within tolerance: 72.944972ms
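
The fix.go lines above compare the guest's `date +%s.%N` against the host clock and accept the restarted VM when the drift is small (72.9ms here). A sketch of that delta computation; the one-second tolerance is an assumption, since the log only states the delta was "within tolerance":

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    func main() {
    	guestRaw := "1724963181.486447470" // guest's `date +%s.%N` from the log
    	secs, err := strconv.ParseFloat(guestRaw, 64)
    	if err != nil {
    		panic(err)
    	}
    	// float64 rounds away sub-microsecond precision; fine for a drift check.
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	host := time.Date(2024, 8, 29, 20, 26, 21, 413502498, time.UTC) // "Remote" in the log
    	delta := guest.Sub(host)
    	const tolerance = time.Second // assumed bound
    	fmt.Printf("delta=%v within=%v\n", delta, math.Abs(delta.Seconds()) < tolerance.Seconds())
    }
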
	I0829 20:26:21.515232   67607 start.go:83] releasing machines lock for "old-k8s-version-032002", held for 18.911866017s
	I0829 20:26:21.515278   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.515596   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:21.518247   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518682   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.518710   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518835   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519589   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519680   67607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:21.519736   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.519843   67607 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:21.519869   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.522261   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522614   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.522643   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522763   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.522919   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523044   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.523071   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523073   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.523241   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.523240   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.523413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523560   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523712   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.599524   67607 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:21.629122   67607 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:21.778437   67607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:21.784642   67607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:21.784714   67607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:21.802019   67607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:21.802043   67607 start.go:495] detecting cgroup driver to use...
	I0829 20:26:21.802100   67607 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:21.817407   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:21.831514   67607 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:21.831578   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:21.845224   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:21.858522   67607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:21.972769   67607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:22.115154   67607 docker.go:233] disabling docker service ...
	I0829 20:26:22.115240   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:22.130015   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:22.143186   67607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:22.294113   67607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:22.432373   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:22.446427   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:22.465151   67607 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 20:26:22.465218   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.476104   67607 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:22.476177   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.486627   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.497782   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.509869   67607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:22.521347   67607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:22.531406   67607 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:22.531455   67607 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:22.544949   67607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
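
The lines above show the netfilter fallback: the sysctl probe fails with status 255 because /proc/sys/net/bridge does not exist yet, so br_netfilter is loaded and IPv4 forwarding is forced on. The same sequence as a plain os/exec sketch (commands copied from the log, error handling simplified):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Probe first; a missing /proc/sys/net/bridge tree means the
	// br_netfilter module is not loaded yet.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}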
	I0829 20:26:22.554918   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:22.687909   67607 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:22.808522   67607 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:22.808595   67607 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:22.814348   67607 start.go:563] Will wait 60s for crictl version
	I0829 20:26:22.814411   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:22.818348   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:22.863797   67607 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
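
crictl version prints the "Key:  value" block shown above; a tiny parser for that shape (field names taken from the log output; sketch only):

package main

import (
	"fmt"
	"strings"
)

func main() {
	out := `Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.29.1
RuntimeApiVersion:  v1`
	fields := map[string]string{}
	for _, line := range strings.Split(out, "\n") {
		if k, v, ok := strings.Cut(line, ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	fmt.Println(fields["RuntimeName"], fields["RuntimeVersion"])
}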
	I0829 20:26:22.863883   67607 ssh_runner.go:195] Run: crio --version
	I0829 20:26:22.893173   67607 ssh_runner.go:195] Run: crio --version
	I0829 20:26:22.923146   67607 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 20:26:22.924299   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:22.927222   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927564   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:22.927589   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927772   67607 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:22.932100   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:22.945139   67607 kubeadm.go:883] updating cluster {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:22.945274   67607 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 20:26:22.945334   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:22.990592   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:22.990668   67607 ssh_runner.go:195] Run: which lz4
	I0829 20:26:22.995104   67607 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:22.999667   67607 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:22.999703   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0829 20:26:20.524280   66989 addons.go:510] duration metric: took 1.639608208s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0829 20:26:21.135090   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:23.136839   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:22.825998   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting to get IP...
	I0829 20:26:22.827278   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:22.827766   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:22.827883   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:22.827750   68757 retry.go:31] will retry after 212.207753ms: waiting for machine to come up
	I0829 20:26:23.041113   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.041553   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.041588   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.041508   68757 retry.go:31] will retry after 291.9464ms: waiting for machine to come up
	I0829 20:26:23.335081   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.336072   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.336121   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.336041   68757 retry.go:31] will retry after 478.578755ms: waiting for machine to come up
	I0829 20:26:23.816669   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.817178   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.817233   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.817087   68757 retry.go:31] will retry after 501.093836ms: waiting for machine to come up
	I0829 20:26:24.319836   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.320392   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.320418   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:24.320343   68757 retry.go:31] will retry after 524.430407ms: waiting for machine to come up
	I0829 20:26:24.846908   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.847388   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.847418   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:24.847361   68757 retry.go:31] will retry after 701.573237ms: waiting for machine to come up
	I0829 20:26:25.550328   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:25.550786   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:25.550811   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:25.550727   68757 retry.go:31] will retry after 916.084079ms: waiting for machine to come up
	I0829 20:26:26.468529   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:26.468981   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:26.469012   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:26.468921   68757 retry.go:31] will retry after 1.216322833s: waiting for machine to come up
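
The retry.go entries above trace the wait-for-IP loop: each failed DHCP-lease lookup schedules another attempt after a growing, jittered delay. A generic sketch of that retry shape, with illustrative delays (not the actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts run out, sleeping a
// jittered, growing delay between tries, like the retry.go lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
		time.Sleep(d)
	}
	return errors.New("machine never reported an IP")
}

func main() {
	tries := 0
	_ = retry(5, 200*time.Millisecond, func() error {
		tries++
		if tries < 3 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
	fmt.Println("got IP after", tries, "attempts")
}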
	I0829 20:26:24.727216   67607 crio.go:462] duration metric: took 1.732148589s to copy over tarball
	I0829 20:26:24.727294   67607 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:27.715640   67607 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.988318238s)
	I0829 20:26:27.715664   67607 crio.go:469] duration metric: took 2.988419957s to extract the tarball
	I0829 20:26:27.715672   67607 ssh_runner.go:146] rm: /preloaded.tar.lz4
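
Taken together, the preload steps above are: stat /preloaded.tar.lz4 (absent on a fresh guest), scp the cached tarball over, extract it into /var with lz4-aware tar, then delete it. Sketched locally with os/exec, with the remote copy replaced by a local path (the tar flags are the ones in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // pushed over scp in the real flow
	if _, err := os.Stat(tarball); err != nil {
		// In the real flow a failed stat is what triggers the scp of the
		// cached tarball; this local sketch just stops instead.
		fmt.Println("preload tarball not present:", err)
		return
	}
	// Same extraction flags as the log: keep xattrs, decompress via lz4.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		return
	}
	_ = os.Remove(tarball) // the runner deletes the tarball afterwards
}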
	I0829 20:26:27.764192   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:27.797388   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:27.797422   67607 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:26:27.797501   67607 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.797536   67607 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.797549   67607 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.797557   67607 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 20:26:27.797511   67607 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:27.797629   67607 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.797637   67607 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.797519   67607 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799128   67607 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799208   67607 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.799251   67607 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 20:26:27.799361   67607 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.799386   67607 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.799463   67607 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.799697   67607 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.799830   67607 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:27.978022   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.978296   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.981616   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.998987   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.001078   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.004185   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.004672   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 20:26:28.103885   67607 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 20:26:28.103953   67607 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.104013   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.122203   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:28.129983   67607 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 20:26:28.130028   67607 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.130076   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.165427   67607 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 20:26:28.165470   67607 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.165521   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.199971   67607 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 20:26:28.199990   67607 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 20:26:28.200015   67607 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.200021   67607 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200105   67607 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 20:26:28.200155   67607 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.200199   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200204   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200113   67607 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 20:26:28.200325   67607 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 20:26:28.200356   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.329091   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.329139   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.329187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.329260   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.329362   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.484805   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.484857   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.484888   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.484943   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.484963   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.485009   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.487351   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.615121   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.615187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.645371   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.645433   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.645524   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.645573   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.645638   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 20:26:28.729141   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 20:26:28.762530   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 20:26:28.762592   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 20:26:28.782117   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 20:26:28.782155   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 20:26:28.782195   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 20:26:28.782229   67607 cache_images.go:92] duration metric: took 984.791099ms to LoadCachedImages
	W0829 20:26:28.782293   67607 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
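
The "needs transfer" decisions above come from comparing the runtime's image ID against the hash minikube expects; on a mismatch (or a missing image) the image is removed and reloaded from the on-disk cache. A sketch of that check via podman/crictl (image and hash copied from the log; not the real cache_images.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime's copy of image differs from
// the expected ID, mirroring the "needs transfer" lines above.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	img := "registry.k8s.io/pause:3.2"
	want := "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	if needsTransfer(img, want) {
		fmt.Println(img, "needs transfer: remove it, then load from the cache dir")
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", img).Run()
		// ...followed by copying the cached tarball over and loading it.
	}
}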
	I0829 20:26:28.782310   67607 kubeadm.go:934] updating node { 192.168.39.116 8443 v1.20.0 crio true true} ...
	I0829 20:26:28.782452   67607 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-032002 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:28.782518   67607 ssh_runner.go:195] Run: crio config
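
The kubelet drop-in above is rendered from cluster values (version, node name, node IP). A sketch of that templating with text/template, using a trimmed flag set (the field names here are illustrative, not minikube's kubeadm.go types):

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.20.0",
		"NodeName":          "old-k8s-version-032002",
		"NodeIP":            "192.168.39.116",
	})
}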
	I0829 20:26:25.635616   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:26.635463   66989 node_ready.go:49] node "embed-certs-388383" has status "Ready":"True"
	I0829 20:26:26.635488   66989 node_ready.go:38] duration metric: took 7.504259002s for node "embed-certs-388383" to be "Ready" ...
	I0829 20:26:26.635497   66989 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:26.641316   66989 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:26.649602   66989 pod_ready.go:93] pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:26.649634   66989 pod_ready.go:82] duration metric: took 8.284428ms for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:26.649656   66989 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:28.658281   66989 pod_ready.go:103] pod "etcd-embed-certs-388383" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:27.686642   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:27.687071   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:27.687097   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:27.687030   68757 retry.go:31] will retry after 1.410599528s: waiting for machine to come up
	I0829 20:26:29.099622   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:29.100175   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:29.100207   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:29.100083   68757 retry.go:31] will retry after 1.929618787s: waiting for machine to come up
	I0829 20:26:31.031864   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:31.032434   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:31.032467   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:31.032367   68757 retry.go:31] will retry after 1.926271655s: waiting for machine to come up
	I0829 20:26:28.832785   67607 cni.go:84] Creating CNI manager for ""
	I0829 20:26:28.832807   67607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:28.832824   67607 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:28.832843   67607 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-032002 NodeName:old-k8s-version-032002 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 20:26:28.832982   67607 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-032002"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:28.833059   67607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 20:26:28.843483   67607 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:28.843566   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:28.853276   67607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 20:26:28.870579   67607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:28.888053   67607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0829 20:26:28.905988   67607 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:28.910048   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
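
The bash one-liner above makes the hosts entry idempotent: strip any existing control-plane.minikube.internal line, append the fresh mapping, and write through a temp file. The same transformation in Go (printed rather than written back, as a safe sketch; blank lines are dropped for brevity):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites hosts content so exactly one line maps name -> ip,
// mirroring the grep -v / echo / cp pattern in the log.
func upsertHost(content, ip, name string) string {
	var keep []string
	for _, line := range strings.Split(content, "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	return strings.Join(keep, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(upsertHost(string(data), "192.168.39.116", "control-plane.minikube.internal"))
}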
	I0829 20:26:28.924996   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:29.075015   67607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:29.095381   67607 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002 for IP: 192.168.39.116
	I0829 20:26:29.095411   67607 certs.go:194] generating shared ca certs ...
	I0829 20:26:29.095430   67607 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.095605   67607 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:29.095686   67607 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:29.095706   67607 certs.go:256] generating profile certs ...
	I0829 20:26:29.095847   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.key
	I0829 20:26:29.095928   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key.a1a2aebb
	I0829 20:26:29.095984   67607 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key
	I0829 20:26:29.096135   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:29.096184   67607 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:29.096198   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:29.096227   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:29.096259   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:29.096299   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:29.096378   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:29.097276   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:29.144259   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:29.171420   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:29.198554   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:29.230750   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 20:26:29.269978   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:26:29.299839   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:29.333742   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:26:29.358352   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:29.382648   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:29.406773   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:29.434106   67607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:29.451913   67607 ssh_runner.go:195] Run: openssl version
	I0829 20:26:29.457722   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:29.469147   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474048   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474094   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.480082   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:29.491083   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:29.501994   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508594   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508643   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.516331   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:29.531067   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:29.543998   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548781   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548845   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.555052   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:29.567902   67607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:29.572879   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:29.579506   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:29.585887   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:29.592262   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:29.598566   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:29.604672   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
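
Each "openssl x509 ... -checkend 86400" run above asks whether a cert will still be valid 24 hours from now. The equivalent check in pure Go with crypto/x509 (path taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires inside d,
// the same question `openssl x509 -checkend <seconds>` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}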
	I0829 20:26:29.610830   67607 kubeadm.go:392] StartCluster: {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:29.612915   67607 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:29.613015   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.655224   67607 cri.go:89] found id: ""
	I0829 20:26:29.655314   67607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:29.666216   67607 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:29.666241   67607 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:29.666292   67607 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:29.676908   67607 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:29.678276   67607 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-032002" does not appear in /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:26:29.679313   67607 kubeconfig.go:62] /home/jenkins/minikube-integration/19530-11185/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-032002" cluster setting kubeconfig missing "old-k8s-version-032002" context setting]
	I0829 20:26:29.680756   67607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.764872   67607 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:29.776873   67607 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.116
	I0829 20:26:29.776914   67607 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:29.776926   67607 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:29.776987   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.819268   67607 cri.go:89] found id: ""
	I0829 20:26:29.819347   67607 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:29.840386   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:29.851624   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:29.851650   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:29.851710   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:26:29.861439   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:29.861504   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:29.871594   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:26:29.881126   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:29.881199   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:29.890984   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.900838   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:29.900913   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.910677   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:26:29.920008   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:29.920073   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
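
The four grep/rm pairs above implement one rule: a kubeconfig that does not point at https://control-plane.minikube.internal:8443 is considered stale and is removed before kubeadm regenerates it. As a loop (file list and endpoint copied from the log):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // missing file: nothing to clean up
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			fmt.Println("stale, removing:", f)
			_ = os.Remove(f) // the log uses `sudo rm -f`
		}
	}
}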
	I0829 20:26:29.929631   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:29.939864   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.096029   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.816696   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.043310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.139291   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
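
The restart path runs the kubeadm init phases one by one, in the order shown above, rather than a full "kubeadm init". A sketch of that sequencing (binary and config paths from the log; the real runner also prefixes env PATH=..., omitted here):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			return
		}
	}
}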
	I0829 20:26:31.248095   67607 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:31.248190   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:31.749101   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.248718   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.748783   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.248254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.748557   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
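
The half-second pgrep cadence above (and continued below) is a simple poll for the apiserver process. Sketched with a deadline (the two-minute timeout here is illustrative; the pgrep pattern is the one in the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		// Same probe as the log: newest process whose full command line
		// matches a kube-apiserver started by minikube.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}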
	I0829 20:26:30.180025   66989 pod_ready.go:93] pod "etcd-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:30.180056   66989 pod_ready.go:82] duration metric: took 3.530390258s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:30.180069   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.187272   66989 pod_ready.go:93] pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.187300   66989 pod_ready.go:82] duration metric: took 2.007222016s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.187313   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.192038   66989 pod_ready.go:93] pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.192062   66989 pod_ready.go:82] duration metric: took 4.740656ms for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.192075   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.196712   66989 pod_ready.go:93] pod "kube-proxy-fcxs4" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.196736   66989 pod_ready.go:82] duration metric: took 4.653538ms for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.196748   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.200491   66989 pod_ready.go:93] pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.200517   66989 pod_ready.go:82] duration metric: took 3.758002ms for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.200528   66989 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:34.207857   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:32.960872   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:32.961256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:32.961284   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:32.961208   68757 retry.go:31] will retry after 2.304628323s: waiting for machine to come up
	I0829 20:26:35.267593   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:35.268009   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:35.268041   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:35.267970   68757 retry.go:31] will retry after 3.753063387s: waiting for machine to come up
	I0829 20:26:34.249231   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:34.748279   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.249171   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.748943   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.249181   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.748307   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.248484   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.748261   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.248332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.748423   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.705814   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:38.708205   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:40.175557   66841 start.go:364] duration metric: took 53.54411059s to acquireMachinesLock for "no-preload-397724"
	I0829 20:26:40.175617   66841 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:40.175626   66841 fix.go:54] fixHost starting: 
	I0829 20:26:40.176060   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:40.176098   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:40.193828   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45897
	I0829 20:26:40.194231   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:40.194840   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:26:40.194867   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:40.195175   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:40.195364   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:40.195528   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:26:40.197109   66841 fix.go:112] recreateIfNeeded on no-preload-397724: state=Stopped err=<nil>
	I0829 20:26:40.197128   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	W0829 20:26:40.197278   66841 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:40.199263   66841 out.go:177] * Restarting existing kvm2 VM for "no-preload-397724" ...
	I0829 20:26:39.023902   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.024374   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Found IP for machine: 192.168.72.140
	I0829 20:26:39.024399   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has current primary IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.024413   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Reserving static IP address...
	I0829 20:26:39.024832   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Reserved static IP address: 192.168.72.140
	I0829 20:26:39.024856   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for SSH to be available...
	I0829 20:26:39.024894   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-145096", mac: "52:54:00:36:fe:e0", ip: "192.168.72.140"} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.024925   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | skip adding static IP to network mk-default-k8s-diff-port-145096 - found existing host DHCP lease matching {name: "default-k8s-diff-port-145096", mac: "52:54:00:36:fe:e0", ip: "192.168.72.140"}
	I0829 20:26:39.024947   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Getting to WaitForSSH function...
	I0829 20:26:39.026796   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.027100   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.027129   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.027265   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Using SSH client type: external
	I0829 20:26:39.027288   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa (-rw-------)
	I0829 20:26:39.027318   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:39.027333   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | About to run SSH command:
	I0829 20:26:39.027346   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | exit 0
	I0829 20:26:39.146830   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | SSH cmd err, output: <nil>: 
	I0829 20:26:39.147242   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetConfigRaw
	I0829 20:26:39.147931   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:39.150652   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.151055   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.151084   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.151395   68084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/config.json ...
	I0829 20:26:39.151581   68084 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:39.151601   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:39.151814   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.153861   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.154189   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.154222   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.154351   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.154575   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.154746   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.154875   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.155010   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.155219   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.155235   68084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:39.258973   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:39.259006   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.259261   68084 buildroot.go:166] provisioning hostname "default-k8s-diff-port-145096"
	I0829 20:26:39.259292   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.259467   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.262018   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.262472   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.262501   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.262707   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.262886   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.263034   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.263185   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.263344   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.263530   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.263547   68084 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-145096 && echo "default-k8s-diff-port-145096" | sudo tee /etc/hostname
	I0829 20:26:39.379437   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-145096
	
	I0829 20:26:39.379479   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.382263   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.382682   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.382704   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.382913   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.383128   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.383280   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.383389   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.383520   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.383675   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.383692   68084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-145096' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-145096/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-145096' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:39.491756   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:39.491790   68084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:39.491855   68084 buildroot.go:174] setting up certificates
	I0829 20:26:39.491869   68084 provision.go:84] configureAuth start
	I0829 20:26:39.491883   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.492150   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:39.494882   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.495241   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.495269   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.495452   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.497708   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.497980   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.498013   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.498097   68084 provision.go:143] copyHostCerts
	I0829 20:26:39.498157   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:39.498179   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:39.498249   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:39.498347   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:39.498356   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:39.498377   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:39.498430   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:39.498437   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:39.498455   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:39.498507   68084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-145096 san=[127.0.0.1 192.168.72.140 default-k8s-diff-port-145096 localhost minikube]
	I0829 20:26:39.584313   68084 provision.go:177] copyRemoteCerts
	I0829 20:26:39.584372   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:39.584398   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.587054   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.587377   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.587400   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.587630   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.587823   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.587952   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.588087   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:39.664394   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:39.688852   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0829 20:26:39.714653   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:39.737662   68084 provision.go:87] duration metric: took 245.781265ms to configureAuth
	I0829 20:26:39.737687   68084 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:39.737844   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:39.737911   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.740391   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.740659   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.740688   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.740911   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.741107   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.741256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.741434   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.741612   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.741777   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.741794   68084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:39.954811   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:39.954846   68084 machine.go:96] duration metric: took 803.251945ms to provisionDockerMachine
	I0829 20:26:39.954862   68084 start.go:293] postStartSetup for "default-k8s-diff-port-145096" (driver="kvm2")
	I0829 20:26:39.954877   68084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:39.954898   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:39.955237   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:39.955267   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.958071   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.958575   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.958605   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.958772   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.958969   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.959126   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.959287   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.037153   68084 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:40.041150   68084 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:40.041176   68084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:40.041235   68084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:40.041325   68084 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:40.041415   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:40.050654   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:40.073789   68084 start.go:296] duration metric: took 118.907407ms for postStartSetup
	I0829 20:26:40.073826   68084 fix.go:56] duration metric: took 18.558388385s for fixHost
	I0829 20:26:40.073846   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.076397   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.076749   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.076789   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.076999   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.077200   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.077374   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.077480   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.077598   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:40.077754   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:40.077765   68084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:40.175410   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963200.123461148
	
	I0829 20:26:40.175431   68084 fix.go:216] guest clock: 1724963200.123461148
	I0829 20:26:40.175437   68084 fix.go:229] Guest: 2024-08-29 20:26:40.123461148 +0000 UTC Remote: 2024-08-29 20:26:40.073830105 +0000 UTC m=+143.488576066 (delta=49.631043ms)
	I0829 20:26:40.175456   68084 fix.go:200] guest clock delta is within tolerance: 49.631043ms
	I0829 20:26:40.175463   68084 start.go:83] releasing machines lock for "default-k8s-diff-port-145096", held for 18.660059953s
	I0829 20:26:40.175497   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.175781   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:40.179031   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.179457   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.179495   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.179695   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180444   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180528   68084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:40.180581   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.180706   68084 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:40.180729   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.183580   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.183819   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.183963   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.183989   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.184172   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.184174   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.184213   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.184345   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.184416   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.184511   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.184624   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.184626   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.184794   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.184896   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.259854   68084 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:40.290102   68084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:40.439112   68084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:40.449465   68084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:40.449546   68084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:40.471182   68084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:40.471209   68084 start.go:495] detecting cgroup driver to use...
	I0829 20:26:40.471276   68084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:40.492605   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:40.508500   68084 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:40.508561   68084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:40.527534   68084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:40.542013   68084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:40.663843   68084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:40.837228   68084 docker.go:233] disabling docker service ...
	I0829 20:26:40.837293   68084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:40.854285   68084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:40.870148   68084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:41.017156   68084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:41.150436   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:41.165239   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:41.184783   68084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:26:41.184847   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.197358   68084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:41.197417   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.211222   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.225297   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.237205   68084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:41.249875   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.261928   68084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.286145   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.299119   68084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:41.313001   68084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:41.313062   68084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:41.335390   68084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 20:26:41.348803   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:41.464387   68084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:41.564675   68084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:41.564746   68084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:41.569620   68084 start.go:563] Will wait 60s for crictl version
	I0829 20:26:41.569680   68084 ssh_runner.go:195] Run: which crictl
	I0829 20:26:41.573519   68084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:41.615105   68084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:41.615190   68084 ssh_runner.go:195] Run: crio --version
	I0829 20:26:41.644597   68084 ssh_runner.go:195] Run: crio --version
	I0829 20:26:41.678211   68084 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:26:39.248306   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:39.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.248975   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.748948   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.249144   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.749013   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.248363   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.748624   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.248833   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.748535   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.200748   66841 main.go:141] libmachine: (no-preload-397724) Calling .Start
	I0829 20:26:40.200955   66841 main.go:141] libmachine: (no-preload-397724) Ensuring networks are active...
	I0829 20:26:40.201793   66841 main.go:141] libmachine: (no-preload-397724) Ensuring network default is active
	I0829 20:26:40.202128   66841 main.go:141] libmachine: (no-preload-397724) Ensuring network mk-no-preload-397724 is active
	I0829 20:26:40.202729   66841 main.go:141] libmachine: (no-preload-397724) Getting domain xml...
	I0829 20:26:40.203538   66841 main.go:141] libmachine: (no-preload-397724) Creating domain...
	I0829 20:26:41.516739   66841 main.go:141] libmachine: (no-preload-397724) Waiting to get IP...
	I0829 20:26:41.517840   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:41.518273   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:41.518353   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:41.518262   68926 retry.go:31] will retry after 295.070588ms: waiting for machine to come up
	I0829 20:26:41.814782   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:41.815346   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:41.815369   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:41.815291   68926 retry.go:31] will retry after 239.48527ms: waiting for machine to come up
	I0829 20:26:42.056957   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:42.057459   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:42.057509   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:42.057436   68926 retry.go:31] will retry after 452.012872ms: waiting for machine to come up
	I0829 20:26:42.511068   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:42.511551   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:42.511590   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:42.511520   68926 retry.go:31] will retry after 552.227159ms: waiting for machine to come up
	I0829 20:26:43.066096   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:43.066642   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:43.066673   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:43.066605   68926 retry.go:31] will retry after 666.699647ms: waiting for machine to come up
	I0829 20:26:43.734695   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:43.735402   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:43.735430   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:43.735309   68926 retry.go:31] will retry after 770.756485ms: waiting for machine to come up
	I0829 20:26:40.709553   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:42.712799   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:41.679441   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:41.682807   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:41.683205   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:41.683236   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:41.683489   68084 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:41.688766   68084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:41.705764   68084 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:41.705918   68084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:26:41.705977   68084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:41.752884   68084 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:26:41.752955   68084 ssh_runner.go:195] Run: which lz4
	I0829 20:26:41.757600   68084 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:41.762158   68084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:41.762188   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 20:26:43.201094   68084 crio.go:462] duration metric: took 1.443534343s to copy over tarball
	I0829 20:26:43.201176   68084 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:45.400911   68084 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199703125s)
	I0829 20:26:45.400942   68084 crio.go:469] duration metric: took 2.199820098s to extract the tarball
	I0829 20:26:45.400948   68084 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 20:26:45.439120   68084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:45.482658   68084 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:26:45.482679   68084 cache_images.go:84] Images are preloaded, skipping loading
	I0829 20:26:45.482687   68084 kubeadm.go:934] updating node { 192.168.72.140 8444 v1.31.0 crio true true} ...
	I0829 20:26:45.482801   68084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-145096 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:45.482873   68084 ssh_runner.go:195] Run: crio config
	I0829 20:26:45.532108   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:26:45.532132   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:45.532146   68084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:45.532169   68084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.140 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-145096 NodeName:default-k8s-diff-port-145096 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:26:45.532310   68084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.140
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-145096"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
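
The four YAML documents above are the kubeadm configuration minikube renders and copies to /var/tmp/minikube/kubeadm.yaml.new (see the 2172-byte scp line below). A quick way to sanity-check that such a multi-document stream parses cleanly is a small Go program; this is only a sketch, and it assumes gopkg.in/yaml.v3 is available — minikube itself does not run this:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Decode each "---"-separated document and report its type.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "parse error:", err)
			os.Exit(1)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}

Against the file above this should print four pairs: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration.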
	
	I0829 20:26:45.532367   68084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:26:45.542670   68084 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:45.542744   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:45.552622   68084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0829 20:26:45.569765   68084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:45.590972   68084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0829 20:26:45.611421   68084 ssh_runner.go:195] Run: grep 192.168.72.140	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:45.615585   68084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:45.627911   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:45.757504   68084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:45.776103   68084 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096 for IP: 192.168.72.140
	I0829 20:26:45.776128   68084 certs.go:194] generating shared ca certs ...
	I0829 20:26:45.776159   68084 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:45.776337   68084 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:45.776388   68084 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:45.776400   68084 certs.go:256] generating profile certs ...
	I0829 20:26:45.776511   68084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/client.key
	I0829 20:26:45.776600   68084 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.key.5a49b6b2
	I0829 20:26:45.776650   68084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.key
	I0829 20:26:45.776788   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:45.776827   68084 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:45.776840   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:45.776869   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:45.776940   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:45.776977   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:45.777035   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:45.777916   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:45.823419   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:45.868291   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:45.905178   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:45.934956   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 20:26:45.967570   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 20:26:45.994332   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:46.019268   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 20:26:46.044075   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:46.067906   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:46.092513   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:46.117686   68084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:46.137048   68084 ssh_runner.go:195] Run: openssl version
	I0829 20:26:46.143203   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:46.156407   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.161397   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.161461   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.167587   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:46.179034   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:46.190204   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.194953   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.195010   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.203121   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:46.218606   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:46.233586   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.240100   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.240155   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.247473   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:46.259417   68084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:46.264875   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:46.270914   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:46.277211   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:46.283138   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:46.289137   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:46.295044   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
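
The six openssl invocations above all pass -checkend 86400, i.e. they fail if the certificate expires within the next 24 hours, which is how this code path decides whether certs need regeneration. A minimal Go equivalent of one such check (stdlib only; the path is taken from the log, everything else is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	// -checkend 86400: non-zero exit if the cert expires within the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}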
	I0829 20:26:46.301027   68084 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:46.301120   68084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:46.301177   68084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:46.342913   68084 cri.go:89] found id: ""
	I0829 20:26:46.342988   68084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:46.354198   68084 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:46.354221   68084 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:46.354269   68084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:46.364173   68084 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:46.365182   68084 kubeconfig.go:125] found "default-k8s-diff-port-145096" server: "https://192.168.72.140:8444"
	I0829 20:26:46.367560   68084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:46.377550   68084 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.140
	I0829 20:26:46.377584   68084 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:46.377596   68084 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:46.377647   68084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:46.419141   68084 cri.go:89] found id: ""
	I0829 20:26:46.419215   68084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:46.438037   68084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:46.449021   68084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:46.449041   68084 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:46.449093   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 20:26:46.459396   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:46.459445   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:46.469964   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 20:26:46.479604   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:46.479655   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:46.492672   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 20:26:46.504656   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:46.504714   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:46.520206   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 20:26:46.532067   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:46.532137   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:26:46.541931   68084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:46.551973   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:44.248615   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:44.748528   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.748453   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.248927   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.748628   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.248556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.748332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.248373   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.749111   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:44.507808   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:44.508340   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:44.508375   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:44.508288   68926 retry.go:31] will retry after 754.614285ms: waiting for machine to come up
	I0829 20:26:45.264587   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:45.265039   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:45.265065   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:45.265003   68926 retry.go:31] will retry after 1.3758308s: waiting for machine to come up
	I0829 20:26:46.642139   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:46.642666   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:46.642690   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:46.642612   68926 retry.go:31] will retry after 1.255043608s: waiting for machine to come up
	I0829 20:26:47.899849   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:47.900330   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:47.900360   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:47.900291   68926 retry.go:31] will retry after 1.517293529s: waiting for machine to come up
	I0829 20:26:45.208067   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:48.177040   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:46.668397   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.497182   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.725573   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.785427   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
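
With the stale kubeconfigs removed, the restart path above re-runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed /var/tmp/minikube/kubeadm.yaml rather than doing a full kubeadm init. A rough sketch of driving that same sequence from Go — assuming the binary path shown in the log and root privileges; this is not minikube's actual implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.0/kubeadm" // path from the log above
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		// e.g. kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
		args := append([]string{"init", "phase"}, strings.Fields(p)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}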
	I0829 20:26:47.850878   68084 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:47.850972   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.351404   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.852023   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.351402   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.367249   68084 api_server.go:72] duration metric: took 1.516370766s to wait for apiserver process to appear ...
	I0829 20:26:49.367283   68084 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:26:49.367312   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.595653   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:51.595683   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:51.595698   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.609883   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:51.609989   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:51.867454   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.872297   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:51.872328   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:52.367462   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:52.375300   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:52.375333   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:52.867827   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:52.872814   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 200:
	ok
	I0829 20:26:52.881061   68084 api_server.go:141] control plane version: v1.31.0
	I0829 20:26:52.881092   68084 api_server.go:131] duration metric: took 3.513801329s to wait for apiserver health ...
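
The healthz progression above is the expected bootstrap sequence: 403 while the anonymous user is still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are pending, then 200 once bootstrapping completes. A minimal polling probe in Go, certificate verification skipped for the ad-hoc check (the endpoint is taken from the log; the retry cadence is illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// The apiserver's serving cert is signed by the minikube CA, which the
	// host does not trust, so skip verification for this one-off probe.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	for attempt := 0; attempt < 40; attempt++ {
		resp, err := client.Get("https://192.168.72.140:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // body is "ok", matching the 200 above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	os.Exit(1) // never became healthy
}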
	I0829 20:26:52.881102   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:26:52.881111   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:52.882993   68084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:26:49.248291   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.748360   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.248427   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.749087   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.248381   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.748488   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.249250   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.748715   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.748915   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.419781   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:49.420286   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:49.420314   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:49.420244   68926 retry.go:31] will retry after 2.638145598s: waiting for machine to come up
	I0829 20:26:52.059935   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:52.060367   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:52.060411   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:52.060341   68926 retry.go:31] will retry after 2.696474949s: waiting for machine to come up
	I0829 20:26:50.207945   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:52.709407   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:52.884310   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:26:52.901134   68084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 20:26:52.931390   68084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:26:52.952109   68084 system_pods.go:59] 8 kube-system pods found
	I0829 20:26:52.952154   68084 system_pods.go:61] "coredns-6f6b679f8f-5mkxp" [1d3c3a01-1fa6-4d1d-8750-deef4475ba96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:26:52.952166   68084 system_pods.go:61] "etcd-default-k8s-diff-port-145096" [03096d69-48af-4372-9fa0-5a45dcb9603c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:26:52.952177   68084 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-145096" [4be8793a-7934-4c89-a840-49e769673f5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:26:52.952188   68084 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-145096" [a3bec7f8-8163-4afa-af53-282ad755b788] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:26:52.952202   68084 system_pods.go:61] "kube-proxy-b4ffx" [d97e74d5-21d4-4c96-9d94-77767fc4e609] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:26:52.952210   68084 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-145096" [c416b52b-ebf4-4714-bed6-3d25bfaa373c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:26:52.952217   68084 system_pods.go:61] "metrics-server-6867b74b74-5kk6q" [e74224b1-8242-4f7f-b8d6-7d9d4839be53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:26:52.952224   68084 system_pods.go:61] "storage-provisioner" [4e97da7c-af4b-40b3-83fb-82b6c2a2adef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:26:52.952236   68084 system_pods.go:74] duration metric: took 20.81979ms to wait for pod list to return data ...
	I0829 20:26:52.952245   68084 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:26:52.961169   68084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:26:52.961202   68084 node_conditions.go:123] node cpu capacity is 2
	I0829 20:26:52.961214   68084 node_conditions.go:105] duration metric: took 8.963546ms to run NodePressure ...
	I0829 20:26:52.961234   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:53.425201   68084 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:26:53.429605   68084 kubeadm.go:739] kubelet initialised
	I0829 20:26:53.429625   68084 kubeadm.go:740] duration metric: took 4.401784ms waiting for restarted kubelet to initialise ...
	I0829 20:26:53.429632   68084 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:53.434501   68084 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:55.442290   68084 pod_ready.go:103] pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:54.248998   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:54.748438   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.249066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.749293   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.248457   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.748509   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.248949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.748228   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.248717   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.748412   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:54.760175   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:54.760689   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:54.760736   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:54.760667   68926 retry.go:31] will retry after 3.651969786s: waiting for machine to come up
	I0829 20:26:58.415601   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.416019   66841 main.go:141] libmachine: (no-preload-397724) Found IP for machine: 192.168.50.214
	I0829 20:26:58.416045   66841 main.go:141] libmachine: (no-preload-397724) Reserving static IP address...
	I0829 20:26:58.416063   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has current primary IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.416507   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "no-preload-397724", mac: "52:54:00:e9:bf:ac", ip: "192.168.50.214"} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.416533   66841 main.go:141] libmachine: (no-preload-397724) DBG | skip adding static IP to network mk-no-preload-397724 - found existing host DHCP lease matching {name: "no-preload-397724", mac: "52:54:00:e9:bf:ac", ip: "192.168.50.214"}
	I0829 20:26:58.416543   66841 main.go:141] libmachine: (no-preload-397724) Reserved static IP address: 192.168.50.214
	I0829 20:26:58.416552   66841 main.go:141] libmachine: (no-preload-397724) Waiting for SSH to be available...
	I0829 20:26:58.416562   66841 main.go:141] libmachine: (no-preload-397724) DBG | Getting to WaitForSSH function...
	I0829 20:26:58.418849   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.419170   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.419199   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.419312   66841 main.go:141] libmachine: (no-preload-397724) DBG | Using SSH client type: external
	I0829 20:26:58.419351   66841 main.go:141] libmachine: (no-preload-397724) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa (-rw-------)
	I0829 20:26:58.419397   66841 main.go:141] libmachine: (no-preload-397724) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:58.419414   66841 main.go:141] libmachine: (no-preload-397724) DBG | About to run SSH command:
	I0829 20:26:58.419444   66841 main.go:141] libmachine: (no-preload-397724) DBG | exit 0
	I0829 20:26:58.542594   66841 main.go:141] libmachine: (no-preload-397724) DBG | SSH cmd err, output: <nil>: 
	I0829 20:26:58.542925   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetConfigRaw
	I0829 20:26:58.543582   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:58.546057   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.546384   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.546422   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.546691   66841 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/config.json ...
	I0829 20:26:58.546871   66841 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:58.546890   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:58.547113   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.549493   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.549816   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.549854   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.549972   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.550140   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.550260   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.550388   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.550581   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.550805   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.550822   66841 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:58.658784   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:58.658827   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.659063   66841 buildroot.go:166] provisioning hostname "no-preload-397724"
	I0829 20:26:58.659083   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.659220   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.661932   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.662294   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.662320   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.662485   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.662695   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.662880   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.663011   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.663168   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.663343   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.663356   66841 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-397724 && echo "no-preload-397724" | sudo tee /etc/hostname
	I0829 20:26:58.790591   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-397724
	
	I0829 20:26:58.790618   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.793294   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.793612   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.793639   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.793849   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.794035   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.794192   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.794289   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.794430   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.794656   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.794678   66841 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-397724' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-397724/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-397724' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:58.915925   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:58.915958   66841 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:58.915981   66841 buildroot.go:174] setting up certificates
	I0829 20:26:58.915991   66841 provision.go:84] configureAuth start
	I0829 20:26:58.916000   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.916279   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:58.919034   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.919385   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.919415   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.919523   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.921483   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.921805   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.921831   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.922015   66841 provision.go:143] copyHostCerts
	I0829 20:26:58.922062   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:58.922079   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:58.922135   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:58.922242   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:58.922256   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:58.922288   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:58.922365   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:58.922375   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:58.922400   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:58.922491   66841 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.no-preload-397724 san=[127.0.0.1 192.168.50.214 localhost minikube no-preload-397724]
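
The server cert generated above is signed by the machine CA with the SAN list [127.0.0.1 192.168.50.214 localhost minikube no-preload-397724]. A self-contained Go sketch of issuing a certificate with such SANs — self-signed here for brevity, whereas the log shows signing against ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-397724"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line above: IPs and DNS names the server answers to.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.214")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-397724"},
	}
	// Self-signed: template doubles as parent. With a real CA, pass the CA
	// cert as parent and sign with the CA's private key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}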
	I0829 20:26:55.206462   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:57.207175   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.207454   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.264390   66841 provision.go:177] copyRemoteCerts
	I0829 20:26:59.264446   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:59.264467   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.267259   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.267603   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.267626   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.267794   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.268014   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.268190   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.268367   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.353746   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:59.378289   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0829 20:26:59.402330   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:59.425412   66841 provision.go:87] duration metric: took 509.408381ms to configureAuth
	I0829 20:26:59.425442   66841 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:59.425616   66841 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:59.425679   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.428148   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.428503   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.428545   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.428698   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.428906   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.429077   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.429227   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.429365   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:59.429511   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:59.429524   66841 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:59.666382   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:59.666408   66841 machine.go:96] duration metric: took 1.11952301s to provisionDockerMachine
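The container-runtime step above writes a sysconfig environment file that CRI-O's systemd unit picks up, then restarts the daemon. Reproduced as a standalone snippet (same content as the logged SSH command):

	sudo mkdir -p /etc/sysconfig
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" |
	  sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio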
	I0829 20:26:59.666422   66841 start.go:293] postStartSetup for "no-preload-397724" (driver="kvm2")
	I0829 20:26:59.666436   66841 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:59.666458   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.666833   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:59.666881   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.669407   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.669725   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.669751   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.669888   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.670073   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.670214   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.670316   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.753440   66841 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:59.758408   66841 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:59.758431   66841 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:59.758509   66841 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:59.758632   66841 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:59.758753   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:59.768355   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:59.792742   66841 start.go:296] duration metric: took 126.308201ms for postStartSetup
	I0829 20:26:59.792782   66841 fix.go:56] duration metric: took 19.617155195s for fixHost
	I0829 20:26:59.792806   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.795380   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.795744   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.795781   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.795917   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.796124   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.796237   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.796376   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.796488   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:59.796668   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:59.796680   66841 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:59.903539   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963219.868600963
	
	I0829 20:26:59.903564   66841 fix.go:216] guest clock: 1724963219.868600963
	I0829 20:26:59.903574   66841 fix.go:229] Guest: 2024-08-29 20:26:59.868600963 +0000 UTC Remote: 2024-08-29 20:26:59.792787483 +0000 UTC m=+355.719318860 (delta=75.81348ms)
	I0829 20:26:59.903623   66841 fix.go:200] guest clock delta is within tolerance: 75.81348ms
	I0829 20:26:59.903632   66841 start.go:83] releasing machines lock for "no-preload-397724", held for 19.728042303s
	I0829 20:26:59.903676   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.903967   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:59.906798   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.907183   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.907212   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.907378   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.907804   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.907970   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.908038   66841 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:59.908072   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.908324   66841 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:59.908346   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.910843   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911025   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911187   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.911215   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911325   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.911415   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.911437   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911485   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.911640   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.911649   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.911847   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.911848   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.911978   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.912119   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:27:00.023116   66841 ssh_runner.go:195] Run: systemctl --version
	I0829 20:27:00.029346   66841 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:27:00.169122   66841 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:27:00.176823   66841 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:27:00.176913   66841 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:27:00.194795   66841 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
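minikube sidelines competing bridge/podman CNI configs by renaming them with a .mk_disabled suffix. A shell-safe equivalent of the logged find command (the log prints the parentheses and globs unquoted):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;   # exec already runs as root under sudo find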
	I0829 20:27:00.194836   66841 start.go:495] detecting cgroup driver to use...
	I0829 20:27:00.194906   66841 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:27:00.212145   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:27:00.226584   66841 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:27:00.226656   66841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:27:00.240525   66841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:27:00.256847   66841 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:27:00.371938   66841 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:27:00.516891   66841 docker.go:233] disabling docker service ...
	I0829 20:27:00.516964   66841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:27:00.531127   66841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:27:00.543483   66841 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:27:00.672033   66841 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:27:00.794828   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:27:00.809204   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:27:00.828484   66841 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:27:00.828547   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.839273   66841 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:27:00.839344   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.850336   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.860980   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.871661   66841 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:27:00.884343   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.895190   66841 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.912700   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
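After the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should contain roughly the following keys (reconstructed from the commands; the file itself is not printed in this log, and the section placement is assumed from CRI-O's default config layout):

	# /etc/crio/crio.conf.d/02-crio.conf -- reconstructed, illustrative only
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]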
	I0829 20:27:00.923383   66841 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:27:00.934168   66841 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:27:00.934231   66841 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:27:00.948181   66841 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
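The status-255 sysctl failure above just means br_netfilter was not loaded yet; loading the module creates the /proc/sys/net/bridge entries, and IPv4 forwarding is then enabled directly. Equivalent manual steps:

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables      # resolves once the module is loaded
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'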
	I0829 20:27:00.959121   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:27:01.072055   66841 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:27:01.163024   66841 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:27:01.163104   66841 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:27:01.167949   66841 start.go:563] Will wait 60s for crictl version
	I0829 20:27:01.168011   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.171707   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:27:01.212950   66841 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:27:01.213031   66841 ssh_runner.go:195] Run: crio --version
	I0829 20:27:01.242181   66841 ssh_runner.go:195] Run: crio --version
	I0829 20:27:01.276389   66841 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:26:57.441729   68084 pod_ready.go:93] pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:57.441753   68084 pod_ready.go:82] duration metric: took 4.007206558s for pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:57.441762   68084 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:59.448210   68084 pod_ready.go:103] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.248692   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:59.748815   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.748264   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.249241   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.748894   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.249045   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.748765   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.248902   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.748333   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.277829   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:27:01.280762   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:27:01.281144   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:27:01.281171   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:27:01.281367   66841 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 20:27:01.285714   66841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
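The /etc/hosts edit is idempotent: strip any existing host.minikube.internal entry, append the current mapping, and copy the result back with sudo. Generalized:

	# Same pattern as the logged command; the $'\t' quoting needs bash.
	ip=192.168.50.1 name=host.minikube.internal
	{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$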
	I0829 20:27:01.297903   66841 kubeadm.go:883] updating cluster {Name:no-preload-397724 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:27:01.298010   66841 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:27:01.298041   66841 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:27:01.331474   66841 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:27:01.331498   66841 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:27:01.331566   66841 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.331572   66841 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.331609   66841 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.331632   66841 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.331643   66841 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.331615   66841 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0829 20:27:01.331737   66841 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.331758   66841 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.333182   66841 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.333233   66841 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.333206   66841 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.333195   66841 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.333191   66841 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.333278   66841 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.333191   66841 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.333333   66841 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0829 20:27:01.507028   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.514096   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.526653   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.530292   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.531828   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.534432   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.550465   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0829 20:27:01.613161   66841 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0829 20:27:01.613209   66841 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.613287   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.631193   66841 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0829 20:27:01.631236   66841 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.631285   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.687868   66841 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0829 20:27:01.687911   66841 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.687967   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.700369   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.713036   66841 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0829 20:27:01.713102   66841 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.713159   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.722934   66841 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0829 20:27:01.722991   66841 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.723042   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.722941   66841 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0829 20:27:01.723130   66841 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.723159   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.785242   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.785246   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.785342   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.785391   66841 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0829 20:27:01.785438   66841 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.785450   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.785474   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.785479   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.785534   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.925322   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.925371   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.925374   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.925474   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.925518   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.925569   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.925593   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.072628   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:02.072690   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:02.072744   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:02.072822   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:02.072867   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.176999   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0829 20:27:02.177031   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:02.177503   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:02.177507   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.177572   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0829 20:27:02.177581   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0829 20:27:02.177678   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:02.177682   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:02.185515   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0829 20:27:02.185585   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.185624   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:02.259015   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0829 20:27:02.259076   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0829 20:27:02.259087   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0829 20:27:02.259106   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.259113   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0829 20:27:02.259138   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0829 20:27:02.259147   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.259155   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:02.259152   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 20:27:02.259139   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0829 20:27:02.259157   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:02.259240   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:01.208076   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:03.208339   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:01.954153   68084 pod_ready.go:103] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:03.454991   68084 pod_ready.go:93] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:03.455023   68084 pod_ready.go:82] duration metric: took 6.013253793s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:03.455036   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:05.461938   68084 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:04.249082   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:04.748738   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.248398   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.749056   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.248693   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.748904   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.249145   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.749131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.248774   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.748444   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:04.630344   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.371149915s)
	I0829 20:27:04.630373   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.371188324s)
	I0829 20:27:04.630410   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.371191825s)
	I0829 20:27:04.630432   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0829 20:27:04.630413   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0829 20:27:04.630379   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0829 20:27:04.630465   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.371187188s)
	I0829 20:27:04.630478   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:04.630481   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0829 20:27:04.630561   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:06.684986   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.054398317s)
	I0829 20:27:06.685019   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0829 20:27:06.685047   66841 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:06.685098   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:05.707657   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:07.708034   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:06.965873   68084 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.965904   68084 pod_ready.go:82] duration metric: took 3.51085868s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.965918   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.976464   68084 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.976489   68084 pod_ready.go:82] duration metric: took 10.562771ms for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.976502   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b4ffx" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.982178   68084 pod_ready.go:93] pod "kube-proxy-b4ffx" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.982197   68084 pod_ready.go:82] duration metric: took 5.687889ms for pod "kube-proxy-b4ffx" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.982205   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.987316   68084 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.987333   68084 pod_ready.go:82] duration metric: took 5.122275ms for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.987342   68084 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:08.994794   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:11.493940   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:09.248746   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:09.748722   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.249074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.748647   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.248236   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.749057   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.249227   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.748688   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.749298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.365120   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.679993065s)
	I0829 20:27:10.365150   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0829 20:27:10.365182   66841 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:10.365256   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:12.122371   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.757087653s)
	I0829 20:27:12.122409   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0829 20:27:12.122434   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:12.122564   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:13.575108   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.45251018s)
	I0829 20:27:13.575137   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0829 20:27:13.575165   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:13.575210   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:09.708364   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:11.708491   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:14.207383   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:13.494124   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:15.993564   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:14.249254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:14.748957   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.249229   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.749137   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.248967   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.748254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.248929   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.748339   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.248666   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.748712   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.742286   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.16705417s)
	I0829 20:27:15.742320   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0829 20:27:15.742348   66841 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:15.742398   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:16.391977   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0829 20:27:16.392017   66841 cache_images.go:123] Successfully loaded all cached images
	I0829 20:27:16.392022   66841 cache_images.go:92] duration metric: took 15.060512795s to LoadCachedImages
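The LoadCachedImages flow above is, per image: remove any stale copy with crictl, stat the tarball under /var/lib/minikube/images (scp it from the host cache only if missing), then import it with podman load, one image at a time. Condensed for a single image:

	# Sketch of one iteration; paths match the "copy: skipping ... (exists)" lines above.
	img=/var/lib/minikube/images/kube-proxy_v1.31.0
	sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0 || true   # drop mismatched copy
	stat -c '%s %y' "$img"                                                # confirm tarball is present
	sudo podman load -i "$img"                                            # shared storage, visible to CRI-O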
	I0829 20:27:16.392034   66841 kubeadm.go:934] updating node { 192.168.50.214 8443 v1.31.0 crio true true} ...
	I0829 20:27:16.392139   66841 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-397724 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:27:16.392203   66841 ssh_runner.go:195] Run: crio config
	I0829 20:27:16.445382   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:27:16.445406   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:27:16.445420   66841 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:27:16.445448   66841 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.214 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-397724 NodeName:no-preload-397724 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:27:16.445612   66841 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-397724"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:27:16.445671   66841 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:27:16.456505   66841 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:27:16.456560   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:27:16.467361   66841 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0829 20:27:16.484700   66841 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:27:16.503026   66841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
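The generated manifest lands at /var/tmp/minikube/kubeadm.yaml.new; on recent kubeadm releases (v1.26 and later) it can be sanity-checked with kubeadm's own validator, e.g.:

	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new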
	I0829 20:27:16.519867   66841 ssh_runner.go:195] Run: grep 192.168.50.214	control-plane.minikube.internal$ /etc/hosts
	I0829 20:27:16.523648   66841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:27:16.535642   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:27:16.671027   66841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:27:16.688692   66841 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724 for IP: 192.168.50.214
	I0829 20:27:16.688712   66841 certs.go:194] generating shared ca certs ...
	I0829 20:27:16.688727   66841 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:27:16.688883   66841 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:27:16.688944   66841 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:27:16.688957   66841 certs.go:256] generating profile certs ...
	I0829 20:27:16.689053   66841 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.key
	I0829 20:27:16.689132   66841 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.key.1f535ae9
	I0829 20:27:16.689182   66841 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.key
	I0829 20:27:16.689360   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:27:16.689400   66841 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:27:16.689415   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:27:16.689450   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:27:16.689504   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:27:16.689540   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:27:16.689596   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:27:16.690277   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:27:16.747582   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:27:16.782064   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:27:16.816382   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:27:16.851548   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 20:27:16.882919   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:27:16.907439   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:27:16.932392   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:27:16.957451   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:27:16.982482   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:27:17.006032   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:27:17.030052   66841 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:27:17.047792   66841 ssh_runner.go:195] Run: openssl version
	I0829 20:27:17.053922   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:27:17.065219   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.069592   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.069647   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.075853   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:27:17.086727   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:27:17.097935   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.102198   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.102252   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.108031   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:27:17.119868   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:27:17.131513   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.136434   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.136497   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.142219   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:27:17.153448   66841 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:27:17.158375   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:27:17.165156   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:27:17.170927   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:27:17.176669   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:27:17.182293   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:27:17.187936   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 20:27:17.193572   66841 kubeadm.go:392] StartCluster: {Name:no-preload-397724 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:27:17.193682   66841 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:27:17.193754   66841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:27:17.238327   66841 cri.go:89] found id: ""
	I0829 20:27:17.238392   66841 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:27:17.248923   66841 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:27:17.248943   66841 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:27:17.248984   66841 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:27:17.263143   66841 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:27:17.264260   66841 kubeconfig.go:125] found "no-preload-397724" server: "https://192.168.50.214:8443"
	I0829 20:27:17.266448   66841 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:27:17.276347   66841 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.214
	I0829 20:27:17.276378   66841 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:27:17.276389   66841 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:27:17.276440   66841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:27:17.311409   66841 cri.go:89] found id: ""
	I0829 20:27:17.311476   66841 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:27:17.329204   66841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:27:17.339063   66841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:27:17.339079   66841 kubeadm.go:157] found existing configuration files:
	
	I0829 20:27:17.339118   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:27:17.348268   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:27:17.348324   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:27:17.357596   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:27:17.366504   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:27:17.366575   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:27:17.376068   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:27:17.385156   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:27:17.385220   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:27:17.394890   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:27:17.404213   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:27:17.404283   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:27:17.413669   66841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:27:17.423307   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:17.536003   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:17.990605   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.217809   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.297100   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.421185   66841 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:27:18.421283   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.922043   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.209618   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:18.707544   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:17.993609   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:19.994469   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:19.248924   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.248851   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.748547   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.248298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.748802   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.248680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.748271   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.248491   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.748803   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.422030   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.442023   66841 api_server.go:72] duration metric: took 1.020839747s to wait for apiserver process to appear ...
	I0829 20:27:19.442047   66841 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:27:19.442070   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.444156   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:27:22.444192   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:27:22.444211   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.466228   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:27:22.466258   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:27:22.942835   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.949338   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:27:22.949360   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:27:23.443069   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:23.447845   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:27:23.447876   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:27:23.942372   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:23.946517   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I0829 20:27:23.953497   66841 api_server.go:141] control plane version: v1.31.0
	I0829 20:27:23.953522   66841 api_server.go:131] duration metric: took 4.511467637s to wait for apiserver health ...
	I0829 20:27:23.953530   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:27:23.953536   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:27:23.955180   66841 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:27:23.956396   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:27:23.969429   66841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 20:27:24.000989   66841 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:27:24.014200   66841 system_pods.go:59] 8 kube-system pods found
	I0829 20:27:24.014233   66841 system_pods.go:61] "coredns-6f6b679f8f-g7xxs" [f0148527-2146-4153-aa20-5ac97b664027] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:27:24.014240   66841 system_pods.go:61] "etcd-no-preload-397724" [f04b5ee4-f439-470a-b298-1a9ed569db70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:27:24.014248   66841 system_pods.go:61] "kube-apiserver-no-preload-397724" [2328f327-1744-4785-9266-3f992b977ef8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:27:24.014254   66841 system_pods.go:61] "kube-controller-manager-no-preload-397724" [0e63f04d-8627-45e9-ac80-70a0fe63f5db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:27:24.014260   66841 system_pods.go:61] "kube-proxy-57kbt" [9f85ce17-85a0-4a52-bdaf-4e3aee4d1a98] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:27:24.014267   66841 system_pods.go:61] "kube-scheduler-no-preload-397724" [106821c6-2444-470a-bac1-78838c0b1982] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:27:24.014273   66841 system_pods.go:61] "metrics-server-6867b74b74-668dg" [e3f3ab24-7777-40b0-a54c-00a294e7e68e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:27:24.014280   66841 system_pods.go:61] "storage-provisioner" [146bd02a-8f50-4d19-a188-4adc2bcc0a43] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:27:24.014288   66841 system_pods.go:74] duration metric: took 13.275941ms to wait for pod list to return data ...
	I0829 20:27:24.014298   66841 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:27:24.018932   66841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:27:24.018956   66841 node_conditions.go:123] node cpu capacity is 2
	I0829 20:27:24.018966   66841 node_conditions.go:105] duration metric: took 4.661993ms to run NodePressure ...
	I0829 20:27:24.018981   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:21.207144   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:23.208728   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:22.493988   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:24.494152   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:24.248456   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:24.748347   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.248337   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.748905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.248912   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.749302   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.249058   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.749105   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.248548   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.748298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:24.305237   66841 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:27:24.310640   66841 kubeadm.go:739] kubelet initialised
	I0829 20:27:24.310666   66841 kubeadm.go:740] duration metric: took 5.402212ms waiting for restarted kubelet to initialise ...
	I0829 20:27:24.310679   66841 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:27:24.316568   66841 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:26.325035   66841 pod_ready.go:103] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:28.336627   66841 pod_ready.go:103] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:25.706496   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:27.708228   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:26.992949   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:28.993682   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:30.993877   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:29.248994   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:29.749020   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.248983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.748247   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:31.249052   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:31.249133   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:31.293442   67607 cri.go:89] found id: ""
	I0829 20:27:31.293466   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.293473   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:31.293479   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:31.293527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:31.333976   67607 cri.go:89] found id: ""
	I0829 20:27:31.333999   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.334006   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:31.334011   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:31.334055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:31.373680   67607 cri.go:89] found id: ""
	I0829 20:27:31.373707   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.373715   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:31.373720   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:31.373766   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:31.407798   67607 cri.go:89] found id: ""
	I0829 20:27:31.407824   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.407832   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:31.407837   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:31.407893   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:31.444409   67607 cri.go:89] found id: ""
	I0829 20:27:31.444437   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.444445   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:31.444451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:31.444512   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:31.479313   67607 cri.go:89] found id: ""
	I0829 20:27:31.479333   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.479341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:31.479347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:31.479403   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:31.516056   67607 cri.go:89] found id: ""
	I0829 20:27:31.516089   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.516100   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:31.516108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:31.516168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:31.555324   67607 cri.go:89] found id: ""
	I0829 20:27:31.555349   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.555357   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:31.555365   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:31.555375   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:31.626397   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:31.626434   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:31.672006   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:31.672038   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:31.724691   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:31.724727   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:31.740283   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:31.740324   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:31.874007   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:29.824509   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:29.824530   66841 pod_ready.go:82] duration metric: took 5.507939145s for pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:29.824547   66841 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:31.833646   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:30.207213   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:32.706352   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:32.993932   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:35.494511   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:34.374203   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:34.387817   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:34.387888   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:34.423254   67607 cri.go:89] found id: ""
	I0829 20:27:34.423279   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.423286   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:34.423296   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:34.423343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:34.457741   67607 cri.go:89] found id: ""
	I0829 20:27:34.457768   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.457775   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:34.457781   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:34.457827   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:34.498432   67607 cri.go:89] found id: ""
	I0829 20:27:34.498457   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.498464   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:34.498469   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:34.498523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:34.534290   67607 cri.go:89] found id: ""
	I0829 20:27:34.534317   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.534324   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:34.534330   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:34.534380   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:34.570878   67607 cri.go:89] found id: ""
	I0829 20:27:34.570909   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.570919   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:34.570928   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:34.570986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:34.615735   67607 cri.go:89] found id: ""
	I0829 20:27:34.615762   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.615769   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:34.615775   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:34.615824   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:34.656667   67607 cri.go:89] found id: ""
	I0829 20:27:34.656706   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.656721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:34.656730   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:34.656779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:34.708906   67607 cri.go:89] found id: ""
	I0829 20:27:34.708928   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.708937   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:34.708947   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:34.708962   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:34.767382   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:34.767417   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:34.786523   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:34.786574   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:34.872832   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:34.872857   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:34.872871   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:34.954581   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:34.954620   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:37.497810   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:37.511479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:37.511539   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:37.547930   67607 cri.go:89] found id: ""
	I0829 20:27:37.547962   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.547972   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:37.547980   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:37.548035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:37.585281   67607 cri.go:89] found id: ""
	I0829 20:27:37.585304   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.585312   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:37.585318   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:37.585365   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:37.622201   67607 cri.go:89] found id: ""
	I0829 20:27:37.622229   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.622241   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:37.622246   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:37.622295   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:37.657248   67607 cri.go:89] found id: ""
	I0829 20:27:37.657274   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.657281   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:37.657289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:37.657335   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:37.691674   67607 cri.go:89] found id: ""
	I0829 20:27:37.691703   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.691711   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:37.691716   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:37.691764   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:37.729523   67607 cri.go:89] found id: ""
	I0829 20:27:37.729548   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.729557   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:37.729562   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:37.729609   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:37.764601   67607 cri.go:89] found id: ""
	I0829 20:27:37.764629   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.764637   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:37.764643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:37.764705   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:37.799228   67607 cri.go:89] found id: ""
	I0829 20:27:37.799259   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.799270   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:37.799281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:37.799301   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:37.848128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:37.848158   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:37.862610   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:37.862640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:37.936859   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:37.936888   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:37.936903   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:38.013647   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:38.013681   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:34.331889   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:36.332334   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.329545   66841 pod_ready.go:93] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.329566   66841 pod_ready.go:82] duration metric: took 7.50501178s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.329576   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.333442   66841 pod_ready.go:93] pod "kube-apiserver-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.333458   66841 pod_ready.go:82] duration metric: took 3.876755ms for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.333467   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.336952   66841 pod_ready.go:93] pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.336968   66841 pod_ready.go:82] duration metric: took 3.49531ms for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.336976   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-57kbt" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.340368   66841 pod_ready.go:93] pod "kube-proxy-57kbt" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.340383   66841 pod_ready.go:82] duration metric: took 3.401844ms for pod "kube-proxy-57kbt" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.340396   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.344111   66841 pod_ready.go:93] pod "kube-scheduler-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.344125   66841 pod_ready.go:82] duration metric: took 3.723924ms for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.344132   66841 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:34.708682   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.206876   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.997827   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:40.494840   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:40.551395   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:40.568100   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:40.568181   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:40.616582   67607 cri.go:89] found id: ""
	I0829 20:27:40.616611   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.616623   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:40.616631   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:40.616695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:40.690580   67607 cri.go:89] found id: ""
	I0829 20:27:40.690620   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.690631   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:40.690638   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:40.690695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:40.733624   67607 cri.go:89] found id: ""
	I0829 20:27:40.733653   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.733662   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:40.733670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:40.733733   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:40.767499   67607 cri.go:89] found id: ""
	I0829 20:27:40.767528   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.767538   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:40.767546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:40.767619   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:40.806973   67607 cri.go:89] found id: ""
	I0829 20:27:40.807002   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.807009   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:40.807015   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:40.807079   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:40.842311   67607 cri.go:89] found id: ""
	I0829 20:27:40.842334   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.842341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:40.842347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:40.842401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:40.880208   67607 cri.go:89] found id: ""
	I0829 20:27:40.880238   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.880248   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:40.880255   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:40.880309   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:40.918395   67607 cri.go:89] found id: ""
	I0829 20:27:40.918424   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.918435   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:40.918445   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:40.918459   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:40.972396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:40.972437   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:40.986136   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:40.986169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:41.064600   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:41.064623   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:41.064634   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:41.146653   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:41.146687   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:43.687773   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:43.701576   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:43.701645   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:43.737259   67607 cri.go:89] found id: ""
	I0829 20:27:43.737282   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.737289   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:43.737299   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:43.737346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:43.772678   67607 cri.go:89] found id: ""
	I0829 20:27:43.772702   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.772709   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:43.772714   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:43.772776   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:43.806788   67607 cri.go:89] found id: ""
	I0829 20:27:43.806821   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.806831   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:43.806839   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:43.806900   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:39.350484   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:41.352279   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:43.850564   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:39.707977   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:42.207630   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:42.993571   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:44.994696   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:43.841738   67607 cri.go:89] found id: ""
	I0829 20:27:43.841759   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.841767   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:43.841772   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:43.841829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:43.878420   67607 cri.go:89] found id: ""
	I0829 20:27:43.878449   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.878459   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:43.878466   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:43.878527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:43.914307   67607 cri.go:89] found id: ""
	I0829 20:27:43.914335   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.914345   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:43.914352   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:43.914413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:43.958827   67607 cri.go:89] found id: ""
	I0829 20:27:43.958853   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.958865   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:43.958871   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:43.958935   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:43.997397   67607 cri.go:89] found id: ""
	I0829 20:27:43.997423   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.997432   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:43.997442   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:43.997455   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:44.049245   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:44.049280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:44.063473   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:44.063511   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:44.131628   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:44.131651   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:44.131666   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:44.210826   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:44.210854   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:46.754905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:46.769531   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:46.769588   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:46.805245   67607 cri.go:89] found id: ""
	I0829 20:27:46.805272   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.805280   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:46.805285   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:46.805338   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:46.843606   67607 cri.go:89] found id: ""
	I0829 20:27:46.843637   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.843646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:46.843654   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:46.843710   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:46.880300   67607 cri.go:89] found id: ""
	I0829 20:27:46.880326   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.880333   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:46.880338   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:46.880387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:46.923537   67607 cri.go:89] found id: ""
	I0829 20:27:46.923562   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.923569   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:46.923574   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:46.923620   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:46.957774   67607 cri.go:89] found id: ""
	I0829 20:27:46.957806   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.957817   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:46.957826   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:46.957887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:46.996972   67607 cri.go:89] found id: ""
	I0829 20:27:46.996995   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.997005   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:46.997013   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:46.997056   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:47.030560   67607 cri.go:89] found id: ""
	I0829 20:27:47.030588   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.030606   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:47.030612   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:47.030665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:47.068654   67607 cri.go:89] found id: ""
	I0829 20:27:47.068678   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.068686   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:47.068694   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:47.068706   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:47.082335   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:47.082367   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:47.162792   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
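[Editor's note] Every "describe nodes" attempt in this run fails identically: kubectl cannot reach the apiserver on localhost:8443, which is consistent with the crictl checks finding zero kube-apiserver containers. A quick way to confirm this is a plain connection refusal (nothing listening) rather than a TLS or auth problem is a raw TCP dial; this probe is a hypothetical helper, not part of the test suite:

// apiserver_dial.go: distinguishes "connection refused" (nothing listening
// on 8443, as in the failures above) from a reachable but unhealthy
// apiserver. The address is the one shown in the kubectl stderr.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With the apiserver down this prints a "connection refused" error,
		// matching the kubectl stderr in the report.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("port 8443 is accepting connections; apiserver may be up")
}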
	I0829 20:27:47.162817   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:47.162829   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:47.241456   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:47.241491   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:47.282249   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:47.282274   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:45.850673   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:47.850836   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:44.707198   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:46.707222   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.207556   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:46.995302   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.498812   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.836268   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:49.850415   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:49.850491   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:49.887816   67607 cri.go:89] found id: ""
	I0829 20:27:49.887843   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.887851   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:49.887856   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:49.887916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:49.923701   67607 cri.go:89] found id: ""
	I0829 20:27:49.923735   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.923745   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:49.923755   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:49.923818   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:49.958197   67607 cri.go:89] found id: ""
	I0829 20:27:49.958225   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.958236   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:49.958244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:49.958313   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:49.995333   67607 cri.go:89] found id: ""
	I0829 20:27:49.995361   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.995373   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:49.995380   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:49.995439   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:50.034345   67607 cri.go:89] found id: ""
	I0829 20:27:50.034375   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.034382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:50.034387   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:50.034438   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:50.070324   67607 cri.go:89] found id: ""
	I0829 20:27:50.070355   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.070365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:50.070374   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:50.070434   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:50.107301   67607 cri.go:89] found id: ""
	I0829 20:27:50.107326   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.107334   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:50.107340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:50.107400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:50.144748   67607 cri.go:89] found id: ""
	I0829 20:27:50.144778   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.144788   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:50.144800   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:50.144816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:50.183576   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:50.183606   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:50.236716   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:50.236750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:50.251589   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:50.251612   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:50.317816   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:50.317840   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:50.317855   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:52.894572   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:52.908081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:52.908149   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:52.945272   67607 cri.go:89] found id: ""
	I0829 20:27:52.945299   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.945309   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:52.945317   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:52.945377   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:52.980237   67607 cri.go:89] found id: ""
	I0829 20:27:52.980262   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.980270   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:52.980275   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:52.980325   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:53.017894   67607 cri.go:89] found id: ""
	I0829 20:27:53.017922   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.017929   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:53.017935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:53.017991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:53.052577   67607 cri.go:89] found id: ""
	I0829 20:27:53.052603   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.052611   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:53.052616   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:53.052667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:53.093414   67607 cri.go:89] found id: ""
	I0829 20:27:53.093444   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.093455   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:53.093462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:53.093523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:53.130794   67607 cri.go:89] found id: ""
	I0829 20:27:53.130825   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.130837   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:53.130845   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:53.130902   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:53.163793   67607 cri.go:89] found id: ""
	I0829 20:27:53.163819   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.163827   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:53.163832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:53.163882   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:53.204824   67607 cri.go:89] found id: ""
	I0829 20:27:53.204852   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.204862   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:53.204872   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:53.204885   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:53.243411   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:53.243440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:53.296611   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:53.296642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:53.310909   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:53.310943   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:53.385768   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:53.385790   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:53.385801   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
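[Editor's note] The "container status" gathering uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, so the report still captures containers when crictl is absent but Docker is present: if `which` fails, the echo keeps the command string valid, and if crictl then fails, docker runs instead. A sketch of invoking that one-liner from Go (local exec for illustration; minikube runs it on the guest over SSH):

// container_status.go: reproduces the fallback one-liner used for the
// "container status" sections above. Running it locally via bash is an
// assumption made for this sketch.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command string as in the log: try crictl first, then docker.
	script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
}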
	I0829 20:27:49.851712   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:52.350295   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:51.711115   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:54.207340   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:51.993943   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:53.996334   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.494226   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:55.966801   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:55.980852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:55.980933   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:56.017682   67607 cri.go:89] found id: ""
	I0829 20:27:56.017707   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.017716   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:56.017722   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:56.017767   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:56.051556   67607 cri.go:89] found id: ""
	I0829 20:27:56.051584   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.051594   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:56.051600   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:56.051665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:56.095301   67607 cri.go:89] found id: ""
	I0829 20:27:56.095330   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.095340   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:56.095348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:56.095408   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:56.131161   67607 cri.go:89] found id: ""
	I0829 20:27:56.131195   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.131205   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:56.131213   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:56.131269   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:56.166611   67607 cri.go:89] found id: ""
	I0829 20:27:56.166637   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.166645   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:56.166651   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:56.166713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:56.202818   67607 cri.go:89] found id: ""
	I0829 20:27:56.202846   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.202856   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:56.202864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:56.202923   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:56.237855   67607 cri.go:89] found id: ""
	I0829 20:27:56.237883   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.237891   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:56.237897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:56.237955   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:56.272402   67607 cri.go:89] found id: ""
	I0829 20:27:56.272426   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.272433   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:56.272441   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:56.272452   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:56.351628   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:56.351653   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:56.389525   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:56.389559   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:56.444952   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:56.444989   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:56.459731   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:56.459759   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:56.536888   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:54.350358   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.350727   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.352884   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.208050   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.706897   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.993153   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:00.993544   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:59.037744   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:59.051868   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:59.051938   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:59.087436   67607 cri.go:89] found id: ""
	I0829 20:27:59.087461   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.087467   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:59.087474   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:59.087531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:59.123729   67607 cri.go:89] found id: ""
	I0829 20:27:59.123757   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.123765   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:59.123771   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:59.123825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:59.168649   67607 cri.go:89] found id: ""
	I0829 20:27:59.168682   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.168690   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:59.168696   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:59.168753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:59.209770   67607 cri.go:89] found id: ""
	I0829 20:27:59.209791   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.209803   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:59.209808   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:59.209854   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:59.248358   67607 cri.go:89] found id: ""
	I0829 20:27:59.248384   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.248392   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:59.248398   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:59.248445   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:59.281770   67607 cri.go:89] found id: ""
	I0829 20:27:59.281797   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.281805   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:59.281811   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:59.281870   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:59.317255   67607 cri.go:89] found id: ""
	I0829 20:27:59.317285   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.317295   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:59.317302   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:59.317363   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:59.354301   67607 cri.go:89] found id: ""
	I0829 20:27:59.354324   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.354332   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:59.354339   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:59.354352   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:59.438346   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:59.438382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:59.482482   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:59.482513   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:59.540926   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:59.540961   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:59.555221   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:59.555258   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:59.622114   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
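[Editor's note] Each iteration opens with "sudo pgrep -xnf kube-apiserver.*minikube.*" to check whether an apiserver process exists at all before the per-component crictl listings. A local sketch of that check (hypothetical standalone helper, using os/exec):

// apiserver_pgrep.go: sketch of the per-iteration process check from the
// log above. pgrep exits non-zero when no process matches the pattern.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Output()
	if err != nil {
		// No matching process: consistent with the empty crictl results.
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Println("kube-apiserver PID:", strings.TrimSpace(string(out)))
}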
	I0829 20:28:02.123276   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:02.137435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:02.137502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:02.176310   67607 cri.go:89] found id: ""
	I0829 20:28:02.176340   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.176347   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:02.176355   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:02.176414   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:02.216511   67607 cri.go:89] found id: ""
	I0829 20:28:02.216555   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.216562   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:02.216574   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:02.216625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:02.260116   67607 cri.go:89] found id: ""
	I0829 20:28:02.260149   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.260158   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:02.260164   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:02.260225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:02.301550   67607 cri.go:89] found id: ""
	I0829 20:28:02.301584   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.301600   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:02.301608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:02.301692   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:02.335916   67607 cri.go:89] found id: ""
	I0829 20:28:02.335948   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.335959   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:02.335967   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:02.336033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:02.372479   67607 cri.go:89] found id: ""
	I0829 20:28:02.372507   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.372515   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:02.372522   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:02.372584   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:02.406683   67607 cri.go:89] found id: ""
	I0829 20:28:02.406713   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.406721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:02.406727   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:02.406774   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:02.443130   67607 cri.go:89] found id: ""
	I0829 20:28:02.443156   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.443164   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:02.443173   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:02.443185   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:02.485747   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:02.485777   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:02.540106   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:02.540143   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:02.556158   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:02.556188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:02.637870   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:02.637900   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:02.637915   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:00.851416   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:03.351248   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:00.707716   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:02.708204   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:02.994108   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:04.994988   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:05.220330   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:05.233932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:05.233994   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:05.269046   67607 cri.go:89] found id: ""
	I0829 20:28:05.269072   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.269081   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:05.269087   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:05.269134   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:05.303963   67607 cri.go:89] found id: ""
	I0829 20:28:05.303989   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.303999   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:05.304006   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:05.304065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:05.340943   67607 cri.go:89] found id: ""
	I0829 20:28:05.340975   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.340985   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:05.340992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:05.341061   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:05.379551   67607 cri.go:89] found id: ""
	I0829 20:28:05.379582   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.379593   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:05.379601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:05.379659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:05.414229   67607 cri.go:89] found id: ""
	I0829 20:28:05.414256   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.414267   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:05.414274   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:05.414339   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:05.450212   67607 cri.go:89] found id: ""
	I0829 20:28:05.450241   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.450251   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:05.450258   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:05.450318   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:05.487415   67607 cri.go:89] found id: ""
	I0829 20:28:05.487451   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.487463   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:05.487470   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:05.487529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:05.521347   67607 cri.go:89] found id: ""
	I0829 20:28:05.521370   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.521383   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:05.521390   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:05.521402   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:05.572317   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:05.572350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:05.585651   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:05.585680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:05.653929   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:05.653950   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:05.653969   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:05.732843   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:05.732873   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:08.281983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:08.295104   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:08.295166   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:08.328570   67607 cri.go:89] found id: ""
	I0829 20:28:08.328596   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.328605   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:08.328613   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:08.328684   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:08.363567   67607 cri.go:89] found id: ""
	I0829 20:28:08.363595   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.363605   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:08.363613   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:08.363672   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:08.399619   67607 cri.go:89] found id: ""
	I0829 20:28:08.399645   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.399653   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:08.399659   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:08.399707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:08.439252   67607 cri.go:89] found id: ""
	I0829 20:28:08.439283   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.439294   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:08.439301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:08.439357   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:08.477730   67607 cri.go:89] found id: ""
	I0829 20:28:08.477754   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.477762   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:08.477768   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:08.477834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:08.522045   67607 cri.go:89] found id: ""
	I0829 20:28:08.522066   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.522073   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:08.522079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:08.522137   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:08.560400   67607 cri.go:89] found id: ""
	I0829 20:28:08.560427   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.560434   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:08.560441   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:08.560504   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:08.599111   67607 cri.go:89] found id: ""
	I0829 20:28:08.599140   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.599150   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:08.599161   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:08.599175   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:08.681451   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:08.681487   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:08.722800   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:08.722835   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:08.779058   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:08.779089   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:08.796940   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:08.796963   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:28:05.852245   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:08.351402   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:04.708669   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:07.207124   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:07.493431   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:09.493794   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	W0829 20:28:08.868296   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:11.369316   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:11.384150   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:11.384225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:11.418452   67607 cri.go:89] found id: ""
	I0829 20:28:11.418480   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.418488   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:11.418494   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:11.418555   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:11.451359   67607 cri.go:89] found id: ""
	I0829 20:28:11.451389   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.451400   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:11.451408   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:11.451481   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:11.488408   67607 cri.go:89] found id: ""
	I0829 20:28:11.488436   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.488446   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:11.488453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:11.488510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:11.528311   67607 cri.go:89] found id: ""
	I0829 20:28:11.528340   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.528351   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:11.528359   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:11.528412   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:11.571345   67607 cri.go:89] found id: ""
	I0829 20:28:11.571372   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.571382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:11.571389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:11.571454   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:11.606812   67607 cri.go:89] found id: ""
	I0829 20:28:11.606839   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.606850   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:11.606857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:11.606918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:11.652687   67607 cri.go:89] found id: ""
	I0829 20:28:11.652710   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.652717   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:11.652722   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:11.652781   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:11.687583   67607 cri.go:89] found id: ""
	I0829 20:28:11.687628   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.687645   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:11.687655   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:11.687673   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:11.727052   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:11.727086   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:11.779116   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:11.779155   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:11.792911   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:11.792949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:11.868415   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:11.868443   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:11.868461   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:10.850225   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:13.351638   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:09.707347   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:11.709556   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.206996   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:11.994187   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.494457   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.447886   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:14.462144   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:14.462221   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:14.499160   67607 cri.go:89] found id: ""
	I0829 20:28:14.499185   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.499193   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:14.499200   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:14.499258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:14.545736   67607 cri.go:89] found id: ""
	I0829 20:28:14.545764   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.545774   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:14.545780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:14.545844   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:14.583626   67607 cri.go:89] found id: ""
	I0829 20:28:14.583664   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.583674   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:14.583682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:14.583744   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:14.619876   67607 cri.go:89] found id: ""
	I0829 20:28:14.619909   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.619917   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:14.619923   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:14.619975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:14.655750   67607 cri.go:89] found id: ""
	I0829 20:28:14.655778   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.655786   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:14.655791   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:14.655848   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:14.690759   67607 cri.go:89] found id: ""
	I0829 20:28:14.690785   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.690795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:14.690800   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:14.690850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:14.727238   67607 cri.go:89] found id: ""
	I0829 20:28:14.727269   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.727282   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:14.727289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:14.727344   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:14.765962   67607 cri.go:89] found id: ""
	I0829 20:28:14.765996   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.766006   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:14.766017   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:14.766033   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:14.835749   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:14.835779   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:14.835797   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:14.914075   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:14.914112   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:14.952684   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:14.952712   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:15.004598   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:15.004635   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:17.518949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:17.532175   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:17.532250   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:17.569943   67607 cri.go:89] found id: ""
	I0829 20:28:17.569971   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.569979   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:17.569985   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:17.570044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:17.605472   67607 cri.go:89] found id: ""
	I0829 20:28:17.605502   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.605510   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:17.605515   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:17.605566   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:17.641568   67607 cri.go:89] found id: ""
	I0829 20:28:17.641593   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.641603   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:17.641610   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:17.641669   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:17.680870   67607 cri.go:89] found id: ""
	I0829 20:28:17.680895   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.680905   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:17.680916   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:17.680981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:17.723546   67607 cri.go:89] found id: ""
	I0829 20:28:17.723576   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.723587   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:17.723594   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:17.723659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:17.757934   67607 cri.go:89] found id: ""
	I0829 20:28:17.757962   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.757973   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:17.757980   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:17.758028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:17.792641   67607 cri.go:89] found id: ""
	I0829 20:28:17.792670   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.792679   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:17.792685   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:17.792738   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:17.830776   67607 cri.go:89] found id: ""
	I0829 20:28:17.830800   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.830807   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:17.830815   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:17.830825   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:17.886331   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:17.886377   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:17.900111   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:17.900135   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:17.969538   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:17.969563   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:17.969577   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:18.050609   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:18.050649   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:15.850497   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:17.851663   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:16.707415   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:19.207313   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:16.994325   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:19.494247   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:20.590686   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:20.605066   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:20.605121   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:20.646028   67607 cri.go:89] found id: ""
	I0829 20:28:20.646058   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.646074   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:20.646082   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:20.646143   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:20.683433   67607 cri.go:89] found id: ""
	I0829 20:28:20.683469   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.683479   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:20.683487   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:20.683567   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:20.722737   67607 cri.go:89] found id: ""
	I0829 20:28:20.722765   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.722775   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:20.722782   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:20.722841   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:20.759777   67607 cri.go:89] found id: ""
	I0829 20:28:20.759800   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.759807   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:20.759812   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:20.759864   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:20.799142   67607 cri.go:89] found id: ""
	I0829 20:28:20.799164   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.799170   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:20.799176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:20.799223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:20.838331   67607 cri.go:89] found id: ""
	I0829 20:28:20.838357   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.838365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:20.838371   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:20.838427   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:20.878066   67607 cri.go:89] found id: ""
	I0829 20:28:20.878099   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.878110   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:20.878117   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:20.878175   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:20.928940   67607 cri.go:89] found id: ""
	I0829 20:28:20.928966   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.928975   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:20.928982   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:20.928993   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:20.984435   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:20.984471   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:21.005860   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:21.005900   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:21.084092   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:21.084123   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:21.084138   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:21.165971   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:21.166009   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:23.705033   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:23.718332   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:23.718390   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:23.753594   67607 cri.go:89] found id: ""
	I0829 20:28:23.753625   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.753635   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:23.753650   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:23.753715   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:23.791840   67607 cri.go:89] found id: ""
	I0829 20:28:23.791864   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.791872   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:23.791878   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:23.791930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:20.350028   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:22.350487   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:21.207839   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.707197   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:21.993965   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.994879   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.493735   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.837815   67607 cri.go:89] found id: ""
	I0829 20:28:23.837839   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.837846   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:23.837851   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:23.837908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:23.873155   67607 cri.go:89] found id: ""
	I0829 20:28:23.873184   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.873194   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:23.873201   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:23.873265   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:23.908728   67607 cri.go:89] found id: ""
	I0829 20:28:23.908757   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.908768   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:23.908774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:23.908834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:23.946286   67607 cri.go:89] found id: ""
	I0829 20:28:23.946310   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.946320   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:23.946328   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:23.946392   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:23.983078   67607 cri.go:89] found id: ""
	I0829 20:28:23.983105   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.983115   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:23.983129   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:23.983190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:24.020601   67607 cri.go:89] found id: ""
	I0829 20:28:24.020634   67607 logs.go:276] 0 containers: []
	W0829 20:28:24.020644   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:24.020654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:24.020669   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:24.034438   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:24.034463   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:24.103209   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:24.103230   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:24.103243   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:24.182977   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:24.183016   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:24.224743   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:24.224834   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:26.781507   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:26.794301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:26.794387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:26.827218   67607 cri.go:89] found id: ""
	I0829 20:28:26.827243   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.827250   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:26.827257   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:26.827303   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:26.862643   67607 cri.go:89] found id: ""
	I0829 20:28:26.862673   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.862685   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:26.862693   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:26.862743   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:26.898127   67607 cri.go:89] found id: ""
	I0829 20:28:26.898159   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.898169   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:26.898177   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:26.898237   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:26.932119   67607 cri.go:89] found id: ""
	I0829 20:28:26.932146   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.932167   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:26.932174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:26.932241   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:26.966380   67607 cri.go:89] found id: ""
	I0829 20:28:26.966413   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.966421   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:26.966427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:26.966478   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:27.004350   67607 cri.go:89] found id: ""
	I0829 20:28:27.004372   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.004379   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:27.004386   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:27.004436   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:27.041171   67607 cri.go:89] found id: ""
	I0829 20:28:27.041199   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.041206   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:27.041212   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:27.041257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:27.073993   67607 cri.go:89] found id: ""
	I0829 20:28:27.074031   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.074041   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:27.074053   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:27.074066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:27.148169   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:27.148199   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:27.148214   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:27.227174   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:27.227212   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:27.267180   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:27.267230   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:27.319034   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:27.319066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:24.350754   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.850582   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.207974   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:28.707820   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:28.494090   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:30.994157   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:29.833497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:29.846883   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:29.846951   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:29.884133   67607 cri.go:89] found id: ""
	I0829 20:28:29.884163   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.884175   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:29.884182   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:29.884247   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:29.917594   67607 cri.go:89] found id: ""
	I0829 20:28:29.917618   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.917628   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:29.917636   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:29.917696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:29.952537   67607 cri.go:89] found id: ""
	I0829 20:28:29.952568   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.952576   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:29.952582   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:29.952630   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:29.988410   67607 cri.go:89] found id: ""
	I0829 20:28:29.988441   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.988448   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:29.988454   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:29.988511   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:30.026761   67607 cri.go:89] found id: ""
	I0829 20:28:30.026788   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.026796   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:30.026802   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:30.026861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:30.063010   67607 cri.go:89] found id: ""
	I0829 20:28:30.063037   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.063046   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:30.063054   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:30.063109   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:30.098067   67607 cri.go:89] found id: ""
	I0829 20:28:30.098093   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.098101   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:30.098107   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:30.098161   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:30.132887   67607 cri.go:89] found id: ""
	I0829 20:28:30.132914   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.132921   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:30.132928   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:30.132940   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:30.184955   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:30.184990   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:30.198966   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:30.199004   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:30.268950   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:30.268977   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:30.268991   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:30.354222   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:30.354260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:32.896554   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:32.911188   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:32.911271   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:32.945726   67607 cri.go:89] found id: ""
	I0829 20:28:32.945750   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.945758   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:32.945773   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:32.945829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:32.980234   67607 cri.go:89] found id: ""
	I0829 20:28:32.980267   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.980275   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:32.980281   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:32.980329   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:33.019031   67607 cri.go:89] found id: ""
	I0829 20:28:33.019063   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.019071   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:33.019076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:33.019126   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:33.056290   67607 cri.go:89] found id: ""
	I0829 20:28:33.056314   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.056322   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:33.056327   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:33.056391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:33.090038   67607 cri.go:89] found id: ""
	I0829 20:28:33.090068   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.090078   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:33.090086   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:33.090152   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:33.125742   67607 cri.go:89] found id: ""
	I0829 20:28:33.125774   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.125782   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:33.125787   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:33.125849   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:33.159019   67607 cri.go:89] found id: ""
	I0829 20:28:33.159047   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.159058   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:33.159065   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:33.159125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:33.197900   67607 cri.go:89] found id: ""
	I0829 20:28:33.197925   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.197933   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:33.197941   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:33.197955   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:33.250010   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:33.250040   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:33.263348   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:33.263374   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:33.342037   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:33.342065   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:33.342082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:33.423324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:33.423361   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:29.350275   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:31.350994   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:33.850866   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:30.713472   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:33.207271   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:32.995169   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.493980   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.963734   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:35.978648   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:35.978713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:36.015326   67607 cri.go:89] found id: ""
	I0829 20:28:36.015350   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.015358   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:36.015364   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:36.015411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:36.050840   67607 cri.go:89] found id: ""
	I0829 20:28:36.050869   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.050879   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:36.050886   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:36.050947   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:36.084048   67607 cri.go:89] found id: ""
	I0829 20:28:36.084076   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.084084   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:36.084090   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:36.084138   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:36.118655   67607 cri.go:89] found id: ""
	I0829 20:28:36.118682   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.118693   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:36.118702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:36.118762   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:36.153879   67607 cri.go:89] found id: ""
	I0829 20:28:36.153908   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.153918   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:36.153926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:36.153988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:36.199834   67607 cri.go:89] found id: ""
	I0829 20:28:36.199858   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.199866   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:36.199872   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:36.199927   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:36.238098   67607 cri.go:89] found id: ""
	I0829 20:28:36.238129   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.238139   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:36.238146   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:36.238208   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:36.272091   67607 cri.go:89] found id: ""
	I0829 20:28:36.272124   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.272135   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:36.272146   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:36.272162   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:36.338478   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:36.338498   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:36.338510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:36.418637   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:36.418671   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:36.458167   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:36.458194   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:36.508592   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:36.508630   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:36.351066   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:38.849684   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.706813   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:37.708058   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:38.003178   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:40.493065   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:39.022668   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:39.035897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:39.035971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:39.071155   67607 cri.go:89] found id: ""
	I0829 20:28:39.071185   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.071196   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:39.071203   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:39.071258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:39.104135   67607 cri.go:89] found id: ""
	I0829 20:28:39.104177   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.104188   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:39.104206   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:39.104266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:39.138301   67607 cri.go:89] found id: ""
	I0829 20:28:39.138329   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.138339   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:39.138346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:39.138404   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:39.172674   67607 cri.go:89] found id: ""
	I0829 20:28:39.172700   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.172708   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:39.172719   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:39.172779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:39.209810   67607 cri.go:89] found id: ""
	I0829 20:28:39.209836   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.209845   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:39.209852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:39.209915   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:39.248692   67607 cri.go:89] found id: ""
	I0829 20:28:39.248715   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.248722   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:39.248728   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:39.248798   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:39.284303   67607 cri.go:89] found id: ""
	I0829 20:28:39.284333   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.284343   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:39.284351   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:39.284401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:39.321346   67607 cri.go:89] found id: ""
	I0829 20:28:39.321375   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.321386   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:39.321396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:39.321410   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:39.334678   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:39.334710   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:39.421992   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:39.422014   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:39.422027   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:39.503250   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:39.503280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:39.540623   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:39.540654   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.092131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:42.105440   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:42.105498   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:42.140994   67607 cri.go:89] found id: ""
	I0829 20:28:42.141024   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.141034   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:42.141042   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:42.141102   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:42.175182   67607 cri.go:89] found id: ""
	I0829 20:28:42.175217   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.175228   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:42.175248   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:42.175319   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:42.209251   67607 cri.go:89] found id: ""
	I0829 20:28:42.209281   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.209291   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:42.209299   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:42.209362   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:42.247944   67607 cri.go:89] found id: ""
	I0829 20:28:42.247970   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.247977   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:42.247983   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:42.248028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:42.285613   67607 cri.go:89] found id: ""
	I0829 20:28:42.285644   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.285651   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:42.285657   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:42.285722   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:42.319826   67607 cri.go:89] found id: ""
	I0829 20:28:42.319851   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.319858   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:42.319864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:42.319928   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:42.357150   67607 cri.go:89] found id: ""
	I0829 20:28:42.357173   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.357182   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:42.357189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:42.357243   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:42.392150   67607 cri.go:89] found id: ""
	I0829 20:28:42.392170   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.392178   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:42.392185   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:42.392197   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:42.469240   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:42.469271   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:42.469286   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:42.549165   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:42.549198   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:42.591900   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:42.591930   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.642593   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:42.642625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:40.851544   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:43.350420   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:39.708341   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:42.206888   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:44.207934   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:42.494791   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:44.992992   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
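The interleaved pod_ready lines come from three parallel test processes (PIDs 66841, 66989, 68084), each polling a metrics-server pod until its PodReady condition turns True, roughly every 2.5 seconds. A minimal client-go sketch of that kind of check follows; the package paths are the real client-go ones, but the kubeconfig path, namespace, pod name, and timeout are placeholders, and minikube's actual pod_ready.go helper differs in detail.

```go
// Poll a pod until its PodReady condition is True, or time out.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(10 * time.Minute) // the report uses long waits such as 6m0s/10m0s
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-668dg", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond) // matches the ~2.5s cadence of the pod_ready lines
	}
	fmt.Println("timed out waiting for PodReady=True")
}
```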
	I0829 20:28:45.157092   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:45.170832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:45.170916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:45.207210   67607 cri.go:89] found id: ""
	I0829 20:28:45.207235   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.207244   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:45.207251   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:45.207308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:45.245321   67607 cri.go:89] found id: ""
	I0829 20:28:45.245352   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.245362   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:45.245379   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:45.245448   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:45.280326   67607 cri.go:89] found id: ""
	I0829 20:28:45.280369   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.280381   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:45.280389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:45.280451   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:45.318294   67607 cri.go:89] found id: ""
	I0829 20:28:45.318322   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.318333   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:45.318340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:45.318411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:45.352903   67607 cri.go:89] found id: ""
	I0829 20:28:45.352925   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.352932   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:45.352938   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:45.352990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:45.389251   67607 cri.go:89] found id: ""
	I0829 20:28:45.389273   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.389280   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:45.389286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:45.389340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:45.424348   67607 cri.go:89] found id: ""
	I0829 20:28:45.424385   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.424397   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:45.424404   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:45.424453   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:45.459058   67607 cri.go:89] found id: ""
	I0829 20:28:45.459087   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.459098   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:45.459109   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:45.459124   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:45.510386   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:45.510423   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:45.524896   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:45.524923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:45.593987   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:45.594064   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:45.594082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:45.668738   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:45.668771   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
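The recurring "connection to the server localhost:8443 was refused" stderr is consistent with the probes above: with no kube-apiserver container running, `kubectl describe nodes` has nothing to connect to. An illustrative preflight that distinguishes a down apiserver from other kubectl failures is sketched below; this is an assumption-laden example, not what minikube's logs.go actually does.

```go
// TCP preflight against the apiserver port seen in the stderr above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Printf("apiserver unreachable; describe-nodes would fail: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open; kubectl describe nodes has a chance of succeeding")
}
```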
	I0829 20:28:48.206497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:48.219625   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:48.219696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:48.254936   67607 cri.go:89] found id: ""
	I0829 20:28:48.254959   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.254966   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:48.254971   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:48.255018   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:48.290826   67607 cri.go:89] found id: ""
	I0829 20:28:48.290851   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.290859   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:48.290864   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:48.290910   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:48.327508   67607 cri.go:89] found id: ""
	I0829 20:28:48.327533   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.327540   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:48.327546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:48.327593   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:48.364492   67607 cri.go:89] found id: ""
	I0829 20:28:48.364517   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.364525   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:48.364530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:48.364580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:48.400035   67607 cri.go:89] found id: ""
	I0829 20:28:48.400062   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.400072   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:48.400079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:48.400144   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:48.433999   67607 cri.go:89] found id: ""
	I0829 20:28:48.434026   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.434035   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:48.434043   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:48.434104   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:48.468841   67607 cri.go:89] found id: ""
	I0829 20:28:48.468873   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.468889   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:48.468903   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:48.468971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:48.506557   67607 cri.go:89] found id: ""
	I0829 20:28:48.506589   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.506598   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:48.506609   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:48.506624   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:48.577023   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:48.577044   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:48.577056   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:48.654372   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:48.654407   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:48.691125   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:48.691152   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:48.746383   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:48.746414   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:45.350581   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:47.351437   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:46.705575   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:48.707018   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:46.993532   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:48.994284   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.494177   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
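Each retry cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`, as on the next line: a cheap process-level check for an apiserver before the per-component container probes. A local stand-in for that call, relying only on pgrep's documented exit-status contract (0 on a match, 1 on none) and dropping the ssh_runner transport:

```go
// Mirror the pgrep check that starts each probe cycle in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -x exact match, -n newest process, -f match against the full command line.
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	if err != nil {
		// Exit status 1 means no matching process; other failures land here too.
		fmt.Printf("no kube-apiserver process found: %v\n", err)
		return
	}
	fmt.Println("kube-apiserver process is running")
}
```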
	I0829 20:28:51.260591   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:51.273911   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:51.273974   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:51.311517   67607 cri.go:89] found id: ""
	I0829 20:28:51.311545   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.311553   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:51.311567   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:51.311616   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:51.348220   67607 cri.go:89] found id: ""
	I0829 20:28:51.348247   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.348256   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:51.348264   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:51.348321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:51.383560   67607 cri.go:89] found id: ""
	I0829 20:28:51.383599   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.383611   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:51.383619   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:51.383680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:51.419241   67607 cri.go:89] found id: ""
	I0829 20:28:51.419268   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.419278   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:51.419286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:51.419343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:51.453954   67607 cri.go:89] found id: ""
	I0829 20:28:51.453979   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.453986   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:51.453992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:51.454047   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:51.489457   67607 cri.go:89] found id: ""
	I0829 20:28:51.489480   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.489488   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:51.489493   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:51.489544   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:51.524072   67607 cri.go:89] found id: ""
	I0829 20:28:51.524100   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.524107   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:51.524113   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:51.524160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:51.561238   67607 cri.go:89] found id: ""
	I0829 20:28:51.561263   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.561271   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:51.561279   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:51.561290   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:51.615422   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:51.615462   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:51.632180   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:51.632216   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:51.704335   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:51.704363   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:51.704378   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:51.794219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:51.794260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:49.852140   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:52.351142   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.205903   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:53.207651   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:53.495412   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:55.993489   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:54.342556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:54.356325   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:54.356400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:54.390928   67607 cri.go:89] found id: ""
	I0829 20:28:54.390952   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.390959   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:54.390965   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:54.391011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:54.426970   67607 cri.go:89] found id: ""
	I0829 20:28:54.427002   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.427013   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:54.427020   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:54.427074   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:54.464121   67607 cri.go:89] found id: ""
	I0829 20:28:54.464155   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.464166   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:54.464174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:54.464236   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:54.499790   67607 cri.go:89] found id: ""
	I0829 20:28:54.499816   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.499827   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:54.499840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:54.499889   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:54.537212   67607 cri.go:89] found id: ""
	I0829 20:28:54.537239   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.537249   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:54.537256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:54.537314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:54.575370   67607 cri.go:89] found id: ""
	I0829 20:28:54.575399   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.575410   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:54.575417   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:54.575469   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:54.608403   67607 cri.go:89] found id: ""
	I0829 20:28:54.608432   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.608443   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:54.608453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:54.608514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:54.645259   67607 cri.go:89] found id: ""
	I0829 20:28:54.645285   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.645292   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:54.645300   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:54.645311   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:54.697022   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:54.697063   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:54.712873   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:54.712914   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:54.814253   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:54.814278   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:54.814295   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:54.896473   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:54.896507   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
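The gather phase that closes each cycle fans out over a fixed set of log sources. The shell commands in the sketch below are copied verbatim from the Run: lines above (journalctl for kubelet and CRI-O, filtered dmesg, and a container-status listing that falls back from crictl to docker); replacing the SSH transport with local execution is the simplifying assumption.

```go
// Run each log-gathering command locally and report how much output it produced.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("  failed: %v\n", err)
		}
		fmt.Printf("  %d bytes captured\n", len(out))
	}
}
```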
	I0829 20:28:57.441648   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:57.455245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:57.455321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:57.495365   67607 cri.go:89] found id: ""
	I0829 20:28:57.495397   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.495405   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:57.495411   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:57.495472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:57.529555   67607 cri.go:89] found id: ""
	I0829 20:28:57.529582   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.529590   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:57.529597   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:57.529667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:57.564168   67607 cri.go:89] found id: ""
	I0829 20:28:57.564196   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.564208   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:57.564215   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:57.564277   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:57.602057   67607 cri.go:89] found id: ""
	I0829 20:28:57.602089   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.602100   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:57.602108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:57.602194   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:57.638195   67607 cri.go:89] found id: ""
	I0829 20:28:57.638226   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.638235   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:57.638244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:57.638307   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:57.674556   67607 cri.go:89] found id: ""
	I0829 20:28:57.674605   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.674615   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:57.674623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:57.674680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:57.709256   67607 cri.go:89] found id: ""
	I0829 20:28:57.709282   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.709291   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:57.709298   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:57.709358   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:57.743629   67607 cri.go:89] found id: ""
	I0829 20:28:57.743652   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.743659   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:57.743668   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:57.743679   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:57.789067   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:57.789098   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:57.843372   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:57.843403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:57.858630   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:57.858661   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:57.927776   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:57.927798   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:57.927814   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:54.850906   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:56.851300   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:55.208638   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:57.707756   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:57.994287   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.493343   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.508180   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:00.521451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:00.521529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:00.557912   67607 cri.go:89] found id: ""
	I0829 20:29:00.557938   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.557945   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:00.557951   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:00.557997   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:00.595186   67607 cri.go:89] found id: ""
	I0829 20:29:00.595215   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.595226   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:00.595237   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:00.595299   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:00.631553   67607 cri.go:89] found id: ""
	I0829 20:29:00.631581   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.631592   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:00.631600   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:00.631660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:00.666502   67607 cri.go:89] found id: ""
	I0829 20:29:00.666525   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.666551   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:00.666560   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:00.666621   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:00.700797   67607 cri.go:89] found id: ""
	I0829 20:29:00.700824   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.700835   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:00.700842   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:00.700908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:00.739957   67607 cri.go:89] found id: ""
	I0829 20:29:00.739976   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.739989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:00.739994   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:00.740035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:00.800704   67607 cri.go:89] found id: ""
	I0829 20:29:00.800740   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.800750   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:00.800757   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:00.800820   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:00.837678   67607 cri.go:89] found id: ""
	I0829 20:29:00.837704   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.837712   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:00.837720   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:00.837731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:00.888359   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:00.888391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:00.903074   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:00.903103   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:00.964865   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:00.964885   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:00.964898   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:01.049351   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:01.049387   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:03.589829   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:03.603120   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:03.603192   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:03.637647   67607 cri.go:89] found id: ""
	I0829 20:29:03.637672   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.637678   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:03.637684   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:03.637732   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:03.673807   67607 cri.go:89] found id: ""
	I0829 20:29:03.673842   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.673852   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:03.673860   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:03.673918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:03.709490   67607 cri.go:89] found id: ""
	I0829 20:29:03.709516   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.709527   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:03.709533   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:03.709595   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:03.751662   67607 cri.go:89] found id: ""
	I0829 20:29:03.751688   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.751696   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:03.751702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:03.751751   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:03.787861   67607 cri.go:89] found id: ""
	I0829 20:29:03.787896   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.787908   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:03.787917   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:03.787977   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:59.350888   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:01.850615   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:03.851438   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.207912   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:02.707309   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:02.493506   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:04.494305   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:03.824383   67607 cri.go:89] found id: ""
	I0829 20:29:03.824413   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.824431   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:03.824438   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:03.824499   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:03.863904   67607 cri.go:89] found id: ""
	I0829 20:29:03.863929   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.863937   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:03.863943   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:03.863990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:03.902336   67607 cri.go:89] found id: ""
	I0829 20:29:03.902360   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.902368   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:03.902375   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:03.902386   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:03.951468   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:03.951499   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:03.965789   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:03.965816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:04.035096   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:04.035119   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:04.035193   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:04.115842   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:04.115876   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:06.662652   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:06.676508   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:06.676583   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:06.713058   67607 cri.go:89] found id: ""
	I0829 20:29:06.713084   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.713093   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:06.713101   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:06.713171   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:06.747513   67607 cri.go:89] found id: ""
	I0829 20:29:06.747544   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.747552   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:06.747557   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:06.747617   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:06.782662   67607 cri.go:89] found id: ""
	I0829 20:29:06.782689   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.782695   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:06.782701   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:06.782758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:06.818472   67607 cri.go:89] found id: ""
	I0829 20:29:06.818500   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.818510   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:06.818516   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:06.818586   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:06.852928   67607 cri.go:89] found id: ""
	I0829 20:29:06.852954   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.852964   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:06.852974   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:06.853032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:06.893859   67607 cri.go:89] found id: ""
	I0829 20:29:06.893889   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.893899   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:06.893907   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:06.893969   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:06.931552   67607 cri.go:89] found id: ""
	I0829 20:29:06.931584   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.931594   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:06.931601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:06.931662   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:06.967210   67607 cri.go:89] found id: ""
	I0829 20:29:06.967243   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.967254   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:06.967266   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:06.967279   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:07.020595   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:07.020631   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:07.034738   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:07.034764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:07.103726   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:07.103747   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:07.103760   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:07.184727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:07.184764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:06.350610   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:08.351571   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:05.207055   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:07.207650   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:06.994653   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.493932   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.746639   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:09.761228   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:09.761308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:09.802071   67607 cri.go:89] found id: ""
	I0829 20:29:09.802102   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.802113   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:09.802122   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:09.802180   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:09.837352   67607 cri.go:89] found id: ""
	I0829 20:29:09.837385   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.837395   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:09.837402   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:09.837464   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:09.874951   67607 cri.go:89] found id: ""
	I0829 20:29:09.874980   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.874992   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:09.874999   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:09.875055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:09.909660   67607 cri.go:89] found id: ""
	I0829 20:29:09.909696   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.909706   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:09.909713   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:09.909777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:09.949727   67607 cri.go:89] found id: ""
	I0829 20:29:09.949751   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.949759   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:09.949765   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:09.949825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:09.984576   67607 cri.go:89] found id: ""
	I0829 20:29:09.984609   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.984617   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:09.984623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:09.984675   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:10.022499   67607 cri.go:89] found id: ""
	I0829 20:29:10.022523   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.022530   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:10.022553   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:10.022624   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:10.064308   67607 cri.go:89] found id: ""
	I0829 20:29:10.064346   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.064356   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:10.064367   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:10.064382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:10.113505   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:10.113537   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:10.127614   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:10.127640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:10.200558   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:10.200579   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:10.200592   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:10.292984   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:10.293020   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:12.833100   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:12.846645   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:12.846712   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:12.885396   67607 cri.go:89] found id: ""
	I0829 20:29:12.885423   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.885430   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:12.885436   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:12.885486   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:12.922556   67607 cri.go:89] found id: ""
	I0829 20:29:12.922584   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.922595   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:12.922602   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:12.922688   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:12.965294   67607 cri.go:89] found id: ""
	I0829 20:29:12.965324   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.965335   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:12.965342   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:12.965401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:13.022911   67607 cri.go:89] found id: ""
	I0829 20:29:13.022934   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.022942   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:13.022948   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:13.023009   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:13.077009   67607 cri.go:89] found id: ""
	I0829 20:29:13.077035   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.077043   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:13.077048   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:13.077095   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:13.114202   67607 cri.go:89] found id: ""
	I0829 20:29:13.114233   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.114243   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:13.114251   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:13.114315   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:13.147025   67607 cri.go:89] found id: ""
	I0829 20:29:13.147049   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.147057   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:13.147063   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:13.147110   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:13.183112   67607 cri.go:89] found id: ""
	I0829 20:29:13.183138   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.183148   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:13.183159   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:13.183173   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:13.240558   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:13.240595   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:13.255563   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:13.255589   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:13.322826   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:13.322846   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:13.322857   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:13.399330   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:13.399365   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:10.850650   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:12.852188   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.706791   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:11.707397   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:13.708663   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:11.993311   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:13.994310   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:16.494854   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
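	The interleaved pod_ready lines come from three other concurrent test processes (PIDs 66841, 66989, and 68084), each polling whether its metrics-server pod has reached the Ready condition. A sketch of that readiness check using client-go is below (the kubeconfig path and pod name are taken from this log for illustration; this is not minikube's actual pod_ready.go):

	// isPodReady reports whether the named pod has condition Ready=True.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := isPodReady(cs, "kube-system", "metrics-server-6867b74b74-668dg")
		fmt.Println(ready, err)
	}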
	I0829 20:29:15.938467   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:15.951742   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:15.951812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:15.987492   67607 cri.go:89] found id: ""
	I0829 20:29:15.987517   67607 logs.go:276] 0 containers: []
	W0829 20:29:15.987524   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:15.987530   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:15.987575   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:16.024187   67607 cri.go:89] found id: ""
	I0829 20:29:16.024214   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.024223   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:16.024231   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:16.024291   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:16.058141   67607 cri.go:89] found id: ""
	I0829 20:29:16.058164   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.058171   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:16.058176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:16.058225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:16.092390   67607 cri.go:89] found id: ""
	I0829 20:29:16.092414   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.092421   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:16.092427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:16.092472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:16.130178   67607 cri.go:89] found id: ""
	I0829 20:29:16.130209   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.130219   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:16.130227   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:16.130289   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:16.163867   67607 cri.go:89] found id: ""
	I0829 20:29:16.163900   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.163907   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:16.163913   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:16.163964   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:16.197764   67607 cri.go:89] found id: ""
	I0829 20:29:16.197792   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.197798   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:16.197804   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:16.197850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:16.233357   67607 cri.go:89] found id: ""
	I0829 20:29:16.233383   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.233393   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:16.233403   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:16.233418   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:16.285154   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:16.285188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:16.299057   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:16.299085   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:16.377021   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:16.377041   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:16.377062   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:16.457750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:16.457796   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:15.350415   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:17.850927   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:16.206841   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.207273   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.993478   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:21.493806   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.999133   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:19.016143   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:19.016223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:19.049225   67607 cri.go:89] found id: ""
	I0829 20:29:19.049252   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.049259   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:19.049265   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:19.049317   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:19.085237   67607 cri.go:89] found id: ""
	I0829 20:29:19.085297   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.085314   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:19.085325   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:19.085389   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:19.123476   67607 cri.go:89] found id: ""
	I0829 20:29:19.123501   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.123509   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:19.123514   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:19.123571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:19.159958   67607 cri.go:89] found id: ""
	I0829 20:29:19.159984   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.159993   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:19.160001   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:19.160055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:19.192385   67607 cri.go:89] found id: ""
	I0829 20:29:19.192410   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.192418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:19.192423   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:19.192483   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:19.230781   67607 cri.go:89] found id: ""
	I0829 20:29:19.230804   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.230811   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:19.230816   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:19.230868   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:19.264925   67607 cri.go:89] found id: ""
	I0829 20:29:19.264954   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.264964   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:19.264972   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:19.265032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:19.302461   67607 cri.go:89] found id: ""
	I0829 20:29:19.302484   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.302491   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:19.302499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:19.302510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:19.384799   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:19.384833   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:19.425281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:19.425313   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:19.477380   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:19.477412   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:19.492315   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:19.492350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:19.563428   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:22.064407   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:22.078609   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:22.078670   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:22.112630   67607 cri.go:89] found id: ""
	I0829 20:29:22.112662   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.112672   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:22.112680   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:22.112741   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:22.149078   67607 cri.go:89] found id: ""
	I0829 20:29:22.149108   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.149117   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:22.149124   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:22.149186   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:22.184568   67607 cri.go:89] found id: ""
	I0829 20:29:22.184596   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.184605   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:22.184613   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:22.184682   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:22.220881   67607 cri.go:89] found id: ""
	I0829 20:29:22.220908   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.220919   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:22.220926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:22.220987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:22.256280   67607 cri.go:89] found id: ""
	I0829 20:29:22.256305   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.256314   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:22.256321   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:22.256386   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:22.294546   67607 cri.go:89] found id: ""
	I0829 20:29:22.294580   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.294590   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:22.294597   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:22.294660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:22.332178   67607 cri.go:89] found id: ""
	I0829 20:29:22.332207   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.332215   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:22.332220   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:22.332266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:22.368283   67607 cri.go:89] found id: ""
	I0829 20:29:22.368309   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.368317   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:22.368325   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:22.368336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:22.421800   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:22.421836   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:22.435539   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:22.435565   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:22.504402   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:22.504427   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:22.504441   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:22.588293   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:22.588326   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
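	Every "describe nodes" attempt in this log fails the same way: kubectl targets the apiserver at localhost:8443, but no kube-apiserver container exists (consistent with the empty crictl listings above), so the TCP connection is refused and the command exits with status 1. A small probe that reproduces the "connection refused" symptom (assuming the apiserver's 8443 port seen in the error text):

	// probeAPIServer: "connection refused" here corresponds to the kubectl
	// error repeated throughout this log. Illustrative sketch only.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // e.g. connection refused
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}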
	I0829 20:29:19.851801   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:22.351929   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:20.207342   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:22.707546   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:23.493994   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:25.993337   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:25.130766   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:25.144479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:25.144554   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:25.181606   67607 cri.go:89] found id: ""
	I0829 20:29:25.181636   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.181643   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:25.181649   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:25.181697   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:25.220291   67607 cri.go:89] found id: ""
	I0829 20:29:25.220320   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.220328   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:25.220335   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:25.220447   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:25.260947   67607 cri.go:89] found id: ""
	I0829 20:29:25.260975   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.260983   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:25.260988   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:25.261035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:25.298200   67607 cri.go:89] found id: ""
	I0829 20:29:25.298232   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.298243   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:25.298256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:25.298314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:25.333128   67607 cri.go:89] found id: ""
	I0829 20:29:25.333162   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.333174   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:25.333181   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:25.333232   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:25.368951   67607 cri.go:89] found id: ""
	I0829 20:29:25.368979   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.368989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:25.368997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:25.369052   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:25.403687   67607 cri.go:89] found id: ""
	I0829 20:29:25.403715   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.403726   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:25.403734   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:25.403799   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:25.442338   67607 cri.go:89] found id: ""
	I0829 20:29:25.442365   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.442372   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:25.442381   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:25.442395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:25.456313   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:25.456335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:25.528709   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:25.528730   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:25.528744   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:25.609976   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:25.610011   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:25.650044   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:25.650071   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:28.202683   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:28.216971   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:28.217046   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:28.256297   67607 cri.go:89] found id: ""
	I0829 20:29:28.256321   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.256329   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:28.256335   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:28.256379   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:28.289396   67607 cri.go:89] found id: ""
	I0829 20:29:28.289420   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.289427   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:28.289433   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:28.289484   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:28.323589   67607 cri.go:89] found id: ""
	I0829 20:29:28.323616   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.323623   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:28.323630   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:28.323676   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:28.362423   67607 cri.go:89] found id: ""
	I0829 20:29:28.362453   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.362463   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:28.362471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:28.362531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:28.396967   67607 cri.go:89] found id: ""
	I0829 20:29:28.396990   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.396998   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:28.397003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:28.397053   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:28.430714   67607 cri.go:89] found id: ""
	I0829 20:29:28.430744   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.430755   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:28.430762   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:28.430831   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:28.468668   67607 cri.go:89] found id: ""
	I0829 20:29:28.468696   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.468707   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:28.468714   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:28.468777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:28.506678   67607 cri.go:89] found id: ""
	I0829 20:29:28.506705   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.506716   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:28.506727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:28.506741   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:28.545259   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:28.545287   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:28.598249   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:28.598285   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:28.612385   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:28.612429   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:28.685765   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:28.685792   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:28.685806   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:24.851688   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.350456   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:24.708523   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.206094   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:29.207859   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.995492   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:30.494340   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
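	Each iteration of the loop opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: -f matches the pattern against the full command line, -x requires that match to be exact, and -n keeps only the newest matching process; a non-zero exit means no apiserver process is running yet. Judging by the timestamps (20:29:13, :16, :19, ...), the harness retries roughly every three seconds. A hedged sketch of such a wait loop follows (the two-minute deadline is an assumption for illustration, not minikube's configured timeout):

	// A minimal sketch of the wait loop visible in this log: every ~3s check
	// for a running kube-apiserver process; on failure, logs are re-collected.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func apiServerProcessExists() bool {
		// pgrep exits 0 only when at least one process matched the pattern.
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if apiServerProcessExists() {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}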
	I0829 20:29:31.270074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:31.284357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:31.284417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:31.319530   67607 cri.go:89] found id: ""
	I0829 20:29:31.319558   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.319566   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:31.319571   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:31.319640   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:31.356826   67607 cri.go:89] found id: ""
	I0829 20:29:31.356856   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.356867   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:31.356880   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:31.356934   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:31.390137   67607 cri.go:89] found id: ""
	I0829 20:29:31.390160   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.390167   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:31.390173   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:31.390219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:31.424939   67607 cri.go:89] found id: ""
	I0829 20:29:31.424972   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.424989   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:31.424997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:31.425054   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:31.460896   67607 cri.go:89] found id: ""
	I0829 20:29:31.460921   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.460928   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:31.460935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:31.460985   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:31.498933   67607 cri.go:89] found id: ""
	I0829 20:29:31.498957   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.498967   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:31.498975   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:31.499044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:31.534953   67607 cri.go:89] found id: ""
	I0829 20:29:31.534985   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.534996   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:31.535003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:31.535065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:31.576248   67607 cri.go:89] found id: ""
	I0829 20:29:31.576273   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.576281   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:31.576291   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:31.576307   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:31.628157   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:31.628196   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:31.641564   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:31.641591   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:31.719949   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:31.719973   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:31.719996   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:31.795682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:31.795716   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:29.351248   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.351424   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:33.851397   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.707552   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.207468   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:32.993432   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.993634   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.333468   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:34.347294   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:34.347370   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:34.384885   67607 cri.go:89] found id: ""
	I0829 20:29:34.384910   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.384921   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:34.384928   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:34.384991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:34.422309   67607 cri.go:89] found id: ""
	I0829 20:29:34.422341   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.422351   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:34.422358   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:34.422417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:34.459800   67607 cri.go:89] found id: ""
	I0829 20:29:34.459826   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.459834   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:34.459840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:34.459905   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:34.495600   67607 cri.go:89] found id: ""
	I0829 20:29:34.495624   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.495633   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:34.495647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:34.495708   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:34.531749   67607 cri.go:89] found id: ""
	I0829 20:29:34.531777   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.531788   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:34.531795   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:34.531856   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:34.571057   67607 cri.go:89] found id: ""
	I0829 20:29:34.571088   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.571098   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:34.571105   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:34.571168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:34.609645   67607 cri.go:89] found id: ""
	I0829 20:29:34.609676   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.609687   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:34.609695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:34.609753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:34.647199   67607 cri.go:89] found id: ""
	I0829 20:29:34.647233   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.647244   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:34.647255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:34.647269   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:34.661390   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:34.661420   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:34.737590   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:34.737613   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:34.737625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:34.820682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:34.820721   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:34.861697   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:34.861723   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:37.412384   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:37.426081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:37.426162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:37.461302   67607 cri.go:89] found id: ""
	I0829 20:29:37.461332   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.461342   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:37.461349   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:37.461416   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:37.500869   67607 cri.go:89] found id: ""
	I0829 20:29:37.500898   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.500908   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:37.500915   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:37.500970   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:37.536908   67607 cri.go:89] found id: ""
	I0829 20:29:37.536932   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.536942   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:37.536949   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:37.537010   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:37.571939   67607 cri.go:89] found id: ""
	I0829 20:29:37.571969   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.571979   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:37.571987   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:37.572048   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:37.607834   67607 cri.go:89] found id: ""
	I0829 20:29:37.607864   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.607883   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:37.607891   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:37.607952   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:37.643932   67607 cri.go:89] found id: ""
	I0829 20:29:37.643963   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.643971   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:37.643978   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:37.644037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:37.678148   67607 cri.go:89] found id: ""
	I0829 20:29:37.678177   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.678188   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:37.678195   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:37.678257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:37.713170   67607 cri.go:89] found id: ""
	I0829 20:29:37.713195   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.713209   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:37.713219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:37.713233   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:37.752538   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:37.752567   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:37.802888   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:37.802923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:37.816546   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:37.816585   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:37.891647   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:37.891667   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:37.891680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:35.851668   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:38.351371   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:36.208220   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:38.708523   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:36.994441   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:39.493291   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:40.472354   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:40.486186   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:40.486252   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:40.520935   67607 cri.go:89] found id: ""
	I0829 20:29:40.520963   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.520971   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:40.520977   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:40.521037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:40.561399   67607 cri.go:89] found id: ""
	I0829 20:29:40.561428   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.561440   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:40.561447   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:40.561514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:40.601821   67607 cri.go:89] found id: ""
	I0829 20:29:40.601846   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.601855   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:40.601862   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:40.601918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:40.636429   67607 cri.go:89] found id: ""
	I0829 20:29:40.636454   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.636462   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:40.636468   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:40.636525   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:40.670781   67607 cri.go:89] found id: ""
	I0829 20:29:40.670816   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.670828   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:40.670836   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:40.670912   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:40.706635   67607 cri.go:89] found id: ""
	I0829 20:29:40.706663   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.706674   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:40.706682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:40.706739   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:40.741657   67607 cri.go:89] found id: ""
	I0829 20:29:40.741687   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.741695   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:40.741707   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:40.741770   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:40.777028   67607 cri.go:89] found id: ""
	I0829 20:29:40.777057   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.777066   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:40.777077   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:40.777093   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:40.829387   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:40.829424   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:40.843928   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:40.843956   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:40.917965   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:40.917992   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:40.918008   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:41.001880   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:41.001925   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:43.549007   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:43.563446   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:43.563502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:43.598503   67607 cri.go:89] found id: ""
	I0829 20:29:43.598548   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.598557   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:43.598564   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:43.598614   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:43.634169   67607 cri.go:89] found id: ""
	I0829 20:29:43.634200   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.634210   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:43.634218   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:43.634280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:43.670467   67607 cri.go:89] found id: ""
	I0829 20:29:43.670492   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.670500   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:43.670506   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:43.670580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:43.706812   67607 cri.go:89] found id: ""
	I0829 20:29:43.706839   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.706849   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:43.706857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:43.706922   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:43.741577   67607 cri.go:89] found id: ""
	I0829 20:29:43.741606   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.741612   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:43.741620   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:43.741700   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:43.776552   67607 cri.go:89] found id: ""
	I0829 20:29:43.776595   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.776625   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:43.776635   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:43.776701   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:43.816229   67607 cri.go:89] found id: ""
	I0829 20:29:43.816264   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.816274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:43.816281   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:43.816346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:40.850705   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:42.850904   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:40.709080   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:43.207700   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:41.994216   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:44.492986   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:46.494171   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:43.860726   67607 cri.go:89] found id: ""
	I0829 20:29:43.860753   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.860761   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:43.860768   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:43.860783   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:43.874311   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:43.874340   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:43.952243   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:43.952272   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:43.952288   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:44.032276   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:44.032312   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:44.075537   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:44.075571   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
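	(Editor's note: the repeated "connection to the server localhost:8443 was refused" above is consistent with the empty crictl listings: no kube-apiserver container exists, so nothing serves port 8443 and every describe-nodes attempt fails the same way. A minimal manual check on the node, as a sketch: the crictl command appears verbatim in the log, while the /healthz probe is an assumed equivalent check.)

	    # Expect an empty ID list, matching the log output above
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # Expect "Connection refused" while the apiserver is down (assumed check)
	    curl -k https://localhost:8443/healthz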
	I0829 20:29:46.632798   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:46.645878   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:46.645948   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:46.683682   67607 cri.go:89] found id: ""
	I0829 20:29:46.683711   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.683720   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:46.683726   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:46.683775   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:46.727985   67607 cri.go:89] found id: ""
	I0829 20:29:46.728012   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.728024   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:46.728031   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:46.728090   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:46.762142   67607 cri.go:89] found id: ""
	I0829 20:29:46.762166   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.762174   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:46.762180   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:46.762226   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:46.802423   67607 cri.go:89] found id: ""
	I0829 20:29:46.802453   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.802464   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:46.802471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:46.802515   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:46.840382   67607 cri.go:89] found id: ""
	I0829 20:29:46.840411   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.840418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:46.840425   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:46.840473   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:46.878438   67607 cri.go:89] found id: ""
	I0829 20:29:46.878466   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.878476   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:46.878483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:46.878562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:46.913589   67607 cri.go:89] found id: ""
	I0829 20:29:46.913618   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.913625   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:46.913631   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:46.913678   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:46.948894   67607 cri.go:89] found id: ""
	I0829 20:29:46.948922   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.948929   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:46.948938   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:46.948949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:47.005709   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:47.005745   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:47.030316   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:47.030343   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:47.105899   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:47.105920   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:47.105932   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:47.189405   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:47.189442   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
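	(Editor's note: the cycle above repeats every few seconds. minikube probes for a running apiserver with pgrep, lists each expected control-plane container through crictl, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A shell sketch of that probe loop follows; the individual commands are copied from the log, while the while/sleep wrapper and the ordering of the container list are assumptions.)

	    # Retry until an apiserver process appears (loop wrapper is assumed)
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	      # Probe each expected control-plane container; all come back empty here
	      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                  kube-controller-manager kindnet kubernetes-dashboard; do
	        sudo crictl ps -a --quiet --name="$name"
	      done
	      # Fall back to node-level diagnostics, as the log does
	      sudo journalctl -u kubelet -n 400
	      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	      sleep 3
	    done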
	I0829 20:29:45.352639   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:47.850647   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:45.709140   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:48.207411   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:48.994239   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:51.493287   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:49.727745   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:49.742061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:49.742131   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:49.777428   67607 cri.go:89] found id: ""
	I0829 20:29:49.777456   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.777464   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:49.777471   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:49.777531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:49.811611   67607 cri.go:89] found id: ""
	I0829 20:29:49.811639   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.811646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:49.811653   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:49.811709   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:49.844962   67607 cri.go:89] found id: ""
	I0829 20:29:49.844987   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.844995   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:49.845006   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:49.845062   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:49.880259   67607 cri.go:89] found id: ""
	I0829 20:29:49.880286   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.880297   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:49.880305   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:49.880366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:49.915889   67607 cri.go:89] found id: ""
	I0829 20:29:49.915918   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.915926   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:49.915932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:49.915988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:49.953146   67607 cri.go:89] found id: ""
	I0829 20:29:49.953174   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.953182   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:49.953189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:49.953240   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:49.990689   67607 cri.go:89] found id: ""
	I0829 20:29:49.990721   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.990730   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:49.990738   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:49.990792   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:50.024775   67607 cri.go:89] found id: ""
	I0829 20:29:50.024806   67607 logs.go:276] 0 containers: []
	W0829 20:29:50.024817   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:50.024827   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:50.024842   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:50.079030   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:50.079064   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:50.093178   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:50.093205   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:50.171476   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:50.171499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:50.171512   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:50.252913   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:50.252946   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:52.799818   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:52.812857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:52.812930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:52.850736   67607 cri.go:89] found id: ""
	I0829 20:29:52.850761   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.850770   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:52.850777   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:52.850834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:52.888892   67607 cri.go:89] found id: ""
	I0829 20:29:52.888916   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.888923   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:52.888929   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:52.888975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:52.925390   67607 cri.go:89] found id: ""
	I0829 20:29:52.925418   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.925428   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:52.925435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:52.925501   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:52.960329   67607 cri.go:89] found id: ""
	I0829 20:29:52.960352   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.960360   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:52.960366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:52.960413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:52.994899   67607 cri.go:89] found id: ""
	I0829 20:29:52.994927   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.994935   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:52.994941   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:52.994995   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:53.033028   67607 cri.go:89] found id: ""
	I0829 20:29:53.033057   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.033068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:53.033076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:53.033136   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:53.068353   67607 cri.go:89] found id: ""
	I0829 20:29:53.068381   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.068389   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:53.068394   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:53.068441   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:53.104496   67607 cri.go:89] found id: ""
	I0829 20:29:53.104524   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.104534   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:53.104545   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:53.104560   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:53.175777   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:53.175810   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:53.175827   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:53.257362   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:53.257396   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:53.295822   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:53.295850   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:53.351237   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:53.351263   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:49.851324   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:52.350768   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:50.707986   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:53.206918   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:53.494828   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.994443   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.864680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:55.879324   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:55.879391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:55.914454   67607 cri.go:89] found id: ""
	I0829 20:29:55.914479   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.914490   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:55.914498   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:55.914592   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:55.953778   67607 cri.go:89] found id: ""
	I0829 20:29:55.953804   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.953814   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:55.953821   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:55.953883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:55.994659   67607 cri.go:89] found id: ""
	I0829 20:29:55.994681   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.994689   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:55.994697   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:55.994768   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:56.031262   67607 cri.go:89] found id: ""
	I0829 20:29:56.031288   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.031299   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:56.031306   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:56.031366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:56.063748   67607 cri.go:89] found id: ""
	I0829 20:29:56.063776   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.063785   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:56.063793   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:56.063883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:56.098024   67607 cri.go:89] found id: ""
	I0829 20:29:56.098060   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.098068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:56.098074   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:56.098127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:56.141340   67607 cri.go:89] found id: ""
	I0829 20:29:56.141364   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.141374   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:56.141381   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:56.141440   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:56.176668   67607 cri.go:89] found id: ""
	I0829 20:29:56.176696   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.176707   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:56.176717   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:56.176731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:56.216294   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:56.216322   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:56.269404   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:56.269440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:56.283134   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:56.283160   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:56.355005   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:56.355023   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:56.355035   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:54.851658   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:57.350247   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.207477   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:57.708007   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:58.493689   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:00.998990   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:58.937406   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:58.950924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:58.950981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:58.986748   67607 cri.go:89] found id: ""
	I0829 20:29:58.986778   67607 logs.go:276] 0 containers: []
	W0829 20:29:58.986788   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:58.986795   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:58.986861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:59.023737   67607 cri.go:89] found id: ""
	I0829 20:29:59.023763   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.023773   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:59.023780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:59.023840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:59.060245   67607 cri.go:89] found id: ""
	I0829 20:29:59.060274   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.060284   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:59.060291   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:59.060352   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:59.102467   67607 cri.go:89] found id: ""
	I0829 20:29:59.102493   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.102501   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:59.102507   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:59.102581   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:59.142601   67607 cri.go:89] found id: ""
	I0829 20:29:59.142625   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.142634   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:59.142647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:59.142717   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:59.186683   67607 cri.go:89] found id: ""
	I0829 20:29:59.186707   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.186715   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:59.186723   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:59.186783   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:59.232104   67607 cri.go:89] found id: ""
	I0829 20:29:59.232136   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.232154   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:59.232162   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:59.232227   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:59.276416   67607 cri.go:89] found id: ""
	I0829 20:29:59.276442   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.276452   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:59.276462   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:59.276479   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:59.341741   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:59.341779   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:59.357312   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:59.357336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:59.425653   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:59.425674   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:59.425689   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:59.505365   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:59.505403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:02.049195   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:02.064558   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:02.064641   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:02.102141   67607 cri.go:89] found id: ""
	I0829 20:30:02.102188   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.102209   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:02.102217   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:02.102282   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:02.138610   67607 cri.go:89] found id: ""
	I0829 20:30:02.138640   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.138650   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:02.138658   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:02.138724   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:02.175391   67607 cri.go:89] found id: ""
	I0829 20:30:02.175423   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.175435   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:02.175442   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:02.175505   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:02.212956   67607 cri.go:89] found id: ""
	I0829 20:30:02.212981   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.212991   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:02.212998   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:02.213059   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:02.254444   67607 cri.go:89] found id: ""
	I0829 20:30:02.254467   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.254475   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:02.254481   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:02.254568   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:02.293232   67607 cri.go:89] found id: ""
	I0829 20:30:02.293260   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.293270   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:02.293277   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:02.293348   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:02.328300   67607 cri.go:89] found id: ""
	I0829 20:30:02.328329   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.328339   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:02.328346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:02.328407   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:02.363467   67607 cri.go:89] found id: ""
	I0829 20:30:02.363495   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.363505   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:02.363514   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:02.363528   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:02.414357   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:02.414394   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:02.428229   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:02.428259   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:02.503640   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:02.503661   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:02.503674   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:02.584052   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:02.584087   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:59.352485   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:01.850334   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:59.717029   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:02.208354   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:03.494326   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:05.494833   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:05.124345   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:05.143530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:05.143594   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:05.195985   67607 cri.go:89] found id: ""
	I0829 20:30:05.196014   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.196024   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:05.196032   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:05.196092   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:05.254315   67607 cri.go:89] found id: ""
	I0829 20:30:05.254343   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.254354   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:05.254362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:05.254432   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:05.306756   67607 cri.go:89] found id: ""
	I0829 20:30:05.306781   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.306788   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:05.306794   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:05.306852   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:05.345200   67607 cri.go:89] found id: ""
	I0829 20:30:05.345225   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.345235   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:05.345242   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:05.345297   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:05.384038   67607 cri.go:89] found id: ""
	I0829 20:30:05.384064   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.384074   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:05.384081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:05.384140   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:05.420177   67607 cri.go:89] found id: ""
	I0829 20:30:05.420201   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.420208   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:05.420214   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:05.420260   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:05.453492   67607 cri.go:89] found id: ""
	I0829 20:30:05.453513   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.453521   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:05.453526   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:05.453573   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:05.491591   67607 cri.go:89] found id: ""
	I0829 20:30:05.491618   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.491628   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:05.491638   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:05.491701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:05.580458   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:05.580503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:05.620137   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:05.620169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:05.672137   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:05.672177   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:05.685946   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:05.685973   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:05.755176   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:08.256255   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:08.269099   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:08.269160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:08.302552   67607 cri.go:89] found id: ""
	I0829 20:30:08.302578   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.302585   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:08.302591   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:08.302639   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:08.340683   67607 cri.go:89] found id: ""
	I0829 20:30:08.340711   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.340718   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:08.340726   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:08.340778   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:08.387389   67607 cri.go:89] found id: ""
	I0829 20:30:08.387416   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.387424   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:08.387430   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:08.387477   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:08.421303   67607 cri.go:89] found id: ""
	I0829 20:30:08.421330   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.421340   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:08.421348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:08.421409   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:08.458648   67607 cri.go:89] found id: ""
	I0829 20:30:08.458677   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.458688   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:08.458695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:08.458758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:08.498748   67607 cri.go:89] found id: ""
	I0829 20:30:08.498776   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.498784   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:08.498790   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:08.498845   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:08.536859   67607 cri.go:89] found id: ""
	I0829 20:30:08.536889   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.536896   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:08.536902   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:08.536963   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:08.570685   67607 cri.go:89] found id: ""
	I0829 20:30:08.570713   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.570723   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:08.570734   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:08.570748   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:08.621904   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:08.621938   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:08.636367   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:08.636391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:08.703796   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:08.703824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:08.703838   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:08.785084   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:08.785120   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:04.350230   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:06.849598   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:08.850961   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:04.708012   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:07.206604   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:09.207368   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:07.993015   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:09.994043   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
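	(Editor's note: the interleaved pod_ready lines come from three other test processes, pids 66841, 66989 and 68084, each polling its own cluster for a metrics-server pod that never reports Ready. An equivalent manual poll for one of them, as a sketch: the pod name is taken from the log, the timeout value is an assumption.)

	    # Blocks until the pod is Ready or the assumed timeout expires
	    kubectl -n kube-system wait --for=condition=Ready \
	      pod/metrics-server-6867b74b74-5kk6q --timeout=120s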
	I0829 20:30:11.326633   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:11.339570   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:11.339637   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:11.374132   67607 cri.go:89] found id: ""
	I0829 20:30:11.374155   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.374163   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:11.374169   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:11.374234   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:11.409004   67607 cri.go:89] found id: ""
	I0829 20:30:11.409036   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.409047   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:11.409054   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:11.409119   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:11.444598   67607 cri.go:89] found id: ""
	I0829 20:30:11.444625   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.444635   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:11.444643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:11.444704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:11.481912   67607 cri.go:89] found id: ""
	I0829 20:30:11.481942   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.481953   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:11.481961   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:11.482025   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:11.516436   67607 cri.go:89] found id: ""
	I0829 20:30:11.516466   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.516477   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:11.516483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:11.516536   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:11.554762   67607 cri.go:89] found id: ""
	I0829 20:30:11.554787   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.554795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:11.554801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:11.554857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:11.588902   67607 cri.go:89] found id: ""
	I0829 20:30:11.588931   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.588942   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:11.588950   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:11.589011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:11.621346   67607 cri.go:89] found id: ""
	I0829 20:30:11.621368   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.621376   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:11.621383   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:11.621395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:11.659671   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:11.659703   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:11.711288   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:11.711315   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:11.725285   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:11.725310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:11.801713   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:11.801735   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:11.801750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:10.851075   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:13.349510   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:11.208203   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:13.706599   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:12.494548   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:14.993188   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:14.382313   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:14.395852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:14.395926   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:14.438735   67607 cri.go:89] found id: ""
	I0829 20:30:14.438762   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.438772   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:14.438778   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:14.438840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:14.477886   67607 cri.go:89] found id: ""
	I0829 20:30:14.477928   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.477937   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:14.477943   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:14.478000   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:14.517627   67607 cri.go:89] found id: ""
	I0829 20:30:14.517654   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.517664   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:14.517670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:14.517734   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:14.557247   67607 cri.go:89] found id: ""
	I0829 20:30:14.557272   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.557280   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:14.557286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:14.557345   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:14.591364   67607 cri.go:89] found id: ""
	I0829 20:30:14.591388   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.591398   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:14.591406   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:14.591468   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:14.627517   67607 cri.go:89] found id: ""
	I0829 20:30:14.627539   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.627546   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:14.627551   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:14.627604   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:14.662388   67607 cri.go:89] found id: ""
	I0829 20:30:14.662409   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.662419   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:14.662432   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:14.662488   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:14.695277   67607 cri.go:89] found id: ""
	I0829 20:30:14.695307   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.695316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:14.695324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:14.695335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:14.735824   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:14.735852   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:14.792607   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:14.792642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:14.808881   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:14.808910   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:14.879804   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:14.879824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:14.879837   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
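	(The block ending here is the recurring probe pattern in this log: for each control-plane component, minikube runs "crictl ps -a --quiet --name=<component>" over SSH, finds no container IDs, and falls back to gathering kubelet/dmesg/describe-nodes/CRI-O output. A minimal local sketch of that probe loop follows — illustrative only, not minikube's actual code, and it runs crictl locally rather than through minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// foundContainers runs crictl the same way the log above does and returns
	// whatever container IDs come back (one per line with --quiet).
	func foundContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := foundContainers(c)
			if err != nil {
				fmt.Printf("probe failed for %q: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				// This is the state the log above is stuck in for every component.
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
		}
	}

	On a node where the control plane never came up, every component prints the same "No container was found" result seen repeatedly below.)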
	I0829 20:30:17.459817   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:17.474813   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:17.474887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:17.509885   67607 cri.go:89] found id: ""
	I0829 20:30:17.509913   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.509923   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:17.509930   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:17.509987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:17.543931   67607 cri.go:89] found id: ""
	I0829 20:30:17.543959   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.543968   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:17.543973   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:17.544021   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:17.580944   67607 cri.go:89] found id: ""
	I0829 20:30:17.580972   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.580980   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:17.580986   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:17.581033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:17.620061   67607 cri.go:89] found id: ""
	I0829 20:30:17.620088   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.620097   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:17.620103   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:17.620148   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:17.658675   67607 cri.go:89] found id: ""
	I0829 20:30:17.658706   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.658717   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:17.658724   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:17.658788   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:17.694424   67607 cri.go:89] found id: ""
	I0829 20:30:17.694453   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.694462   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:17.694467   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:17.694571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:17.727425   67607 cri.go:89] found id: ""
	I0829 20:30:17.727450   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.727456   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:17.727462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:17.727510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:17.767915   67607 cri.go:89] found id: ""
	I0829 20:30:17.767946   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.767956   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:17.767965   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:17.767977   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:17.837556   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:17.837580   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:17.837593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:17.921601   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:17.921638   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:17.960999   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:17.961026   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:18.013654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:18.013691   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:15.351372   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:17.850896   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:16.206810   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:18.207702   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:16.993566   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:18.997786   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:21.493705   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:20.528244   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:20.542116   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:20.542190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:20.578905   67607 cri.go:89] found id: ""
	I0829 20:30:20.578936   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.578947   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:20.578954   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:20.579003   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:20.613543   67607 cri.go:89] found id: ""
	I0829 20:30:20.613567   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.613574   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:20.613579   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:20.613627   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:20.649322   67607 cri.go:89] found id: ""
	I0829 20:30:20.649344   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.649352   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:20.649366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:20.649429   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:20.684851   67607 cri.go:89] found id: ""
	I0829 20:30:20.684878   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.684886   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:20.684892   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:20.684950   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:20.722016   67607 cri.go:89] found id: ""
	I0829 20:30:20.722045   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.722054   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:20.722062   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:20.722125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:20.757594   67607 cri.go:89] found id: ""
	I0829 20:30:20.757626   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.757637   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:20.757644   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:20.757707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:20.793694   67607 cri.go:89] found id: ""
	I0829 20:30:20.793728   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.793738   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:20.793746   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:20.793812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:20.829709   67607 cri.go:89] found id: ""
	I0829 20:30:20.829736   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.829747   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:20.829758   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:20.829782   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:20.888838   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:20.888888   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:20.903530   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:20.903556   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:20.972460   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:20.972488   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:20.972503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:21.055556   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:21.055593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:23.597355   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:23.611091   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:23.611162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:23.649469   67607 cri.go:89] found id: ""
	I0829 20:30:23.649493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.649501   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:23.649510   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:23.649562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:23.684530   67607 cri.go:89] found id: ""
	I0829 20:30:23.684554   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.684561   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:23.684571   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:23.684625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:23.720466   67607 cri.go:89] found id: ""
	I0829 20:30:23.720493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.720503   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:23.720510   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:23.720563   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:23.755013   67607 cri.go:89] found id: ""
	I0829 20:30:23.755042   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.755053   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:23.755061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:23.755127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:23.795212   67607 cri.go:89] found id: ""
	I0829 20:30:23.795243   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.795254   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:23.795263   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:23.795320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:20.349781   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:22.350157   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:20.707723   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.206214   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.994457   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:26.493771   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.832912   67607 cri.go:89] found id: ""
	I0829 20:30:23.832941   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.832951   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:23.832959   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:23.833015   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:23.869896   67607 cri.go:89] found id: ""
	I0829 20:30:23.869930   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.869939   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:23.869947   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:23.870011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:23.908111   67607 cri.go:89] found id: ""
	I0829 20:30:23.908136   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.908145   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:23.908155   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:23.908170   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:23.988489   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:23.988510   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:23.988525   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:24.063246   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:24.063280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:24.102943   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:24.102974   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:24.157255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:24.157294   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:26.671966   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:26.684755   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:26.684830   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:26.721125   67607 cri.go:89] found id: ""
	I0829 20:30:26.721150   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.721158   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:26.721164   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:26.721219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:26.756328   67607 cri.go:89] found id: ""
	I0829 20:30:26.756349   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.756356   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:26.756362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:26.756420   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:26.791711   67607 cri.go:89] found id: ""
	I0829 20:30:26.791751   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.791763   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:26.791774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:26.791857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:26.827215   67607 cri.go:89] found id: ""
	I0829 20:30:26.827244   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.827254   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:26.827261   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:26.827321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:26.863461   67607 cri.go:89] found id: ""
	I0829 20:30:26.863486   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.863497   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:26.863505   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:26.863569   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:26.900037   67607 cri.go:89] found id: ""
	I0829 20:30:26.900065   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.900075   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:26.900083   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:26.900139   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:26.937236   67607 cri.go:89] found id: ""
	I0829 20:30:26.937263   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.937274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:26.937282   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:26.937340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:26.970281   67607 cri.go:89] found id: ""
	I0829 20:30:26.970312   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.970322   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:26.970332   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:26.970345   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:27.041485   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:27.041511   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:27.041526   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:27.120774   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:27.120807   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:27.159656   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:27.159685   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:27.213322   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:27.213356   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:24.350464   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:26.351419   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:28.850079   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:25.207838   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:27.708107   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:28.993552   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:31.494259   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:29.729066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:29.742044   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:29.742099   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:29.777426   67607 cri.go:89] found id: ""
	I0829 20:30:29.777454   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.777462   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:29.777468   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:29.777529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:29.814353   67607 cri.go:89] found id: ""
	I0829 20:30:29.814381   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.814392   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:29.814401   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:29.814462   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:29.853754   67607 cri.go:89] found id: ""
	I0829 20:30:29.853783   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.853793   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:29.853801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:29.853869   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:29.893966   67607 cri.go:89] found id: ""
	I0829 20:30:29.893991   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.893998   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:29.894003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:29.894057   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:29.929452   67607 cri.go:89] found id: ""
	I0829 20:30:29.929483   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.929492   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:29.929502   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:29.929561   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:29.965880   67607 cri.go:89] found id: ""
	I0829 20:30:29.965906   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.965916   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:29.965924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:29.965986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:30.002192   67607 cri.go:89] found id: ""
	I0829 20:30:30.002226   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.002237   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:30.002245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:30.002320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:30.037603   67607 cri.go:89] found id: ""
	I0829 20:30:30.037640   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.037651   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:30.037662   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:30.037677   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:30.094128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:30.094168   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:30.110667   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:30.110701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:30.188355   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:30.188375   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:30.188388   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:30.270750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:30.270785   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:32.809472   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:32.823099   67607 kubeadm.go:597] duration metric: took 4m3.15684598s to restartPrimaryControlPlane
	W0829 20:30:32.823188   67607 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:30:32.823224   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:30:33.322987   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:30:33.338134   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:30:33.348586   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:30:33.358672   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:30:33.358692   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:30:33.358748   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:30:33.367955   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:30:33.368000   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:30:33.377565   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:30:33.386317   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:30:33.386377   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:30:33.396356   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.406228   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:30:33.406281   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.418323   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:30:33.427595   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:30:33.427657   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
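	(The grep-then-rm sequence ending here is a stale-config sweep: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when it is missing or does not mention it, so the kubeadm init that follows can regenerate a consistent set. A hypothetical Go sketch of that check — file names and messages mirror the log, but this is not minikube's code, and actually removing the files would require root:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, name := range confs {
			path := filepath.Join("/etc/kubernetes", name)
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or pointing at the wrong endpoint: remove it so
				// kubeadm init can write a fresh copy.
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
				os.Remove(path)
				continue
			}
			fmt.Println("keeping", path)
		}
	}

	In the run above all four files are already absent, so every check exits with status 2 and the sweep is a no-op before init.)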
	I0829 20:30:33.437520   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:30:33.511159   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:30:33.511279   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:30:33.669988   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:30:33.670133   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:30:33.670267   67607 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:30:33.859908   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:30:30.850893   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:32.851574   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:30.207012   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:32.206405   66989 pod_ready.go:82] duration metric: took 4m0.005864609s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	E0829 20:30:32.206426   66989 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0829 20:30:32.206433   66989 pod_ready.go:39] duration metric: took 4m5.570928284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
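	(The pod_ready lines concluding here show a deadline-bounded poll: the pod's Ready condition is re-checked on an interval until a roughly 4m context deadline expires, which surfaces as the "context deadline exceeded" error above. A minimal sketch of that wait pattern, assuming nothing about minikube internals — the function name and intervals below are illustrative:

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// waitReady re-runs check on a fixed interval until it returns true or the
	// context's deadline expires, mirroring the pod_ready wait in the log.
	func waitReady(ctx context.Context, interval time.Duration, check func() bool) error {
		t := time.NewTicker(interval)
		defer t.Stop()
		for {
			if check() {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("waitPodCondition: %w", ctx.Err())
			case <-t.C:
			}
		}
	}

	func main() {
		// 5s here instead of the 4m deadline in the log, purely for demonstration.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		err := waitReady(ctx, time.Second, func() bool { return false }) // pod never Ready
		if errors.Is(err, context.DeadlineExceeded) {
			fmt.Println("gave up waiting:", err)
		}
	}

	Because the metrics-server pod never reports Ready, the wait exhausts its deadline exactly as the E-level line above records.)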
	I0829 20:30:32.206448   66989 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:30:32.206482   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:32.206528   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:32.260213   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:32.260242   66989 cri.go:89] found id: ""
	I0829 20:30:32.260252   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:32.260314   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.265201   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:32.265276   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:32.307620   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:32.307648   66989 cri.go:89] found id: ""
	I0829 20:30:32.307656   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:32.307701   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.312372   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:32.312430   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:32.350059   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:32.350092   66989 cri.go:89] found id: ""
	I0829 20:30:32.350102   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:32.350158   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.354624   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:32.354681   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:32.393968   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:32.393988   66989 cri.go:89] found id: ""
	I0829 20:30:32.393995   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:32.394039   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.398674   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:32.398745   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:32.433038   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:32.433064   66989 cri.go:89] found id: ""
	I0829 20:30:32.433074   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:32.433118   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.436969   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:32.437028   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:32.472768   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:32.472786   66989 cri.go:89] found id: ""
	I0829 20:30:32.472793   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:32.472842   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.477466   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:32.477536   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:32.514464   66989 cri.go:89] found id: ""
	I0829 20:30:32.514492   66989 logs.go:276] 0 containers: []
	W0829 20:30:32.514502   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:32.514509   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:32.514591   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:32.551429   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:32.551452   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:32.551456   66989 cri.go:89] found id: ""
	I0829 20:30:32.551463   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:32.551508   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.555697   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.559864   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:32.559883   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:32.609776   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:32.609803   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:32.648419   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:32.648446   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:32.685938   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:32.685969   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:32.728665   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:32.728693   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:32.770030   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:32.770068   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:32.907821   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:32.907850   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:32.923119   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:32.923149   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:32.979819   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:32.979853   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:33.020472   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:33.020496   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:33.074802   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:33.074838   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:33.112043   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:33.112072   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:33.624274   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:33.624316   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:33.861742   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:30:33.861849   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:30:33.861946   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:30:33.862075   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:30:33.862174   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:30:33.862276   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:30:33.862366   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:30:33.862467   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:30:33.862573   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:30:33.862794   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:30:33.863226   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:30:33.863323   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:30:33.863417   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:30:34.065914   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:30:34.235581   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:30:34.660452   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:30:34.724718   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:30:34.743897   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:30:34.746263   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:30:34.746369   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:30:34.893824   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:30:33.494825   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:35.994300   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:34.895805   67607 out.go:235]   - Booting up control plane ...
	I0829 20:30:34.895941   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:30:34.904294   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:30:34.915103   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:30:34.915744   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:30:34.917923   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:30:35.351975   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:37.352013   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:36.202184   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:36.218838   66989 api_server.go:72] duration metric: took 4m17.334186395s to wait for apiserver process to appear ...
	I0829 20:30:36.218870   66989 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:30:36.218910   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:36.218963   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:36.263205   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:36.263233   66989 cri.go:89] found id: ""
	I0829 20:30:36.263243   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:36.263292   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.267466   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:36.267522   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:36.303894   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:36.303930   66989 cri.go:89] found id: ""
	I0829 20:30:36.303938   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:36.303996   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.308089   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:36.308170   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:36.347320   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:36.347392   66989 cri.go:89] found id: ""
	I0829 20:30:36.347414   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:36.347485   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.352121   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:36.352174   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:36.389760   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:36.389784   66989 cri.go:89] found id: ""
	I0829 20:30:36.389793   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:36.389853   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.394860   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:36.394919   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:36.430562   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:36.430587   66989 cri.go:89] found id: ""
	I0829 20:30:36.430597   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:36.430655   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.435151   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:36.435226   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:36.470714   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:36.470742   66989 cri.go:89] found id: ""
	I0829 20:30:36.470750   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:36.470816   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.475382   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:36.475446   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:36.514853   66989 cri.go:89] found id: ""
	I0829 20:30:36.514888   66989 logs.go:276] 0 containers: []
	W0829 20:30:36.514898   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:36.514910   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:36.514971   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:36.548229   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:36.548252   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:36.548256   66989 cri.go:89] found id: ""
	I0829 20:30:36.548263   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:36.548314   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.552484   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.556661   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:36.556681   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:36.622985   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:36.623019   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:36.678770   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:36.678799   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:36.731822   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:36.731849   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:36.768451   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:36.768482   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:36.803818   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:36.803846   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:37.225805   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:37.225849   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:37.245421   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:37.245458   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:37.358238   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:37.358266   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:37.401876   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:37.401913   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:37.438189   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:37.438223   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:37.475404   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:37.475433   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:37.511876   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:37.511903   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:38.493604   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:40.494396   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:40.054097   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:30:40.058474   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0829 20:30:40.059830   66989 api_server.go:141] control plane version: v1.31.0
	I0829 20:30:40.059850   66989 api_server.go:131] duration metric: took 3.840972907s to wait for apiserver health ...
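
The healthz probe above can be reproduced directly: in a default minikube apiserver configuration /healthz is readable by unauthenticated clients, so a plain curl works (endpoint taken from the log; -k skips verification against the cluster CA):

    curl -k https://192.168.61.202:8443/healthz
    # expected output: ok
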
	I0829 20:30:40.059857   66989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:30:40.059877   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:40.059924   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:40.101978   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:40.102003   66989 cri.go:89] found id: ""
	I0829 20:30:40.102013   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:40.102073   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.107429   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:40.107496   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:40.145052   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:40.145078   66989 cri.go:89] found id: ""
	I0829 20:30:40.145086   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:40.145133   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.149329   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:40.149394   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:40.187740   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:40.187769   66989 cri.go:89] found id: ""
	I0829 20:30:40.187778   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:40.187838   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.192085   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:40.192156   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:40.231992   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:40.232010   66989 cri.go:89] found id: ""
	I0829 20:30:40.232017   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:40.232060   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.236275   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:40.236333   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:40.279637   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:40.279660   66989 cri.go:89] found id: ""
	I0829 20:30:40.279669   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:40.279727   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.288800   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:40.288876   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:40.341222   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:40.341248   66989 cri.go:89] found id: ""
	I0829 20:30:40.341258   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:40.341322   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.346013   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:40.346088   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:40.383801   66989 cri.go:89] found id: ""
	I0829 20:30:40.383828   66989 logs.go:276] 0 containers: []
	W0829 20:30:40.383836   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:40.383842   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:40.383896   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:40.421847   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:40.421874   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:40.421879   66989 cri.go:89] found id: ""
	I0829 20:30:40.421889   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:40.421950   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.426229   66989 ssh_runner.go:195] Run: which crictl
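
The container IDs fed into the log gathering below are discovered with a name-filtered crictl listing, one component at a time; a sketch of the pattern as it appears in this log:

    sudo crictl ps -a --quiet --name=kube-apiserver   # one ID printed above
    sudo crictl ps -a --quiet --name=etcd             # likewise for each component
    sudo crictl ps -a --quiet --name=kindnet          # empty: this profile uses the
                                                      # bridge CNI, so the "No container
                                                      # was found" warning is expected
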
	I0829 20:30:40.429902   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:40.429931   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:40.471015   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:40.471039   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:40.831575   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:40.831612   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:40.846195   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:40.846230   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:40.905469   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:40.905507   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:40.952303   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:40.952337   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:41.001278   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:41.001309   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:41.071045   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:41.071089   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:41.120024   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:41.120050   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:41.191412   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:41.191445   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:41.321848   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:41.321874   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:41.370807   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:41.370833   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:41.405913   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:41.405939   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:43.948957   66989 system_pods.go:59] 8 kube-system pods found
	I0829 20:30:43.948987   66989 system_pods.go:61] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running
	I0829 20:30:43.948992   66989 system_pods.go:61] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running
	I0829 20:30:43.948996   66989 system_pods.go:61] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running
	I0829 20:30:43.948999   66989 system_pods.go:61] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running
	I0829 20:30:43.949003   66989 system_pods.go:61] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running
	I0829 20:30:43.949006   66989 system_pods.go:61] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running
	I0829 20:30:43.949011   66989 system_pods.go:61] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:30:43.949015   66989 system_pods.go:61] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running
	I0829 20:30:43.949022   66989 system_pods.go:74] duration metric: took 3.889159839s to wait for pod list to return data ...
	I0829 20:30:43.949028   66989 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:30:43.951906   66989 default_sa.go:45] found service account: "default"
	I0829 20:30:43.951932   66989 default_sa.go:55] duration metric: took 2.897769ms for default service account to be created ...
	I0829 20:30:43.951943   66989 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:30:43.959246   66989 system_pods.go:86] 8 kube-system pods found
	I0829 20:30:43.959269   66989 system_pods.go:89] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running
	I0829 20:30:43.959275   66989 system_pods.go:89] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running
	I0829 20:30:43.959279   66989 system_pods.go:89] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running
	I0829 20:30:43.959283   66989 system_pods.go:89] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running
	I0829 20:30:43.959286   66989 system_pods.go:89] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running
	I0829 20:30:43.959290   66989 system_pods.go:89] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running
	I0829 20:30:43.959296   66989 system_pods.go:89] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:30:43.959302   66989 system_pods.go:89] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running
	I0829 20:30:43.959309   66989 system_pods.go:126] duration metric: took 7.361244ms to wait for k8s-apps to be running ...
	I0829 20:30:43.959318   66989 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:30:43.959356   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:30:43.976136   66989 system_svc.go:56] duration metric: took 16.811475ms WaitForService to wait for kubelet
	I0829 20:30:43.976167   66989 kubeadm.go:582] duration metric: took 4m25.091518378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:30:43.976193   66989 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:30:43.979345   66989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:30:43.979376   66989 node_conditions.go:123] node cpu capacity is 2
	I0829 20:30:43.979386   66989 node_conditions.go:105] duration metric: took 3.187489ms to run NodePressure ...
	I0829 20:30:43.979396   66989 start.go:241] waiting for startup goroutines ...
	I0829 20:30:43.979402   66989 start.go:246] waiting for cluster config update ...
	I0829 20:30:43.979414   66989 start.go:255] writing updated cluster config ...
	I0829 20:30:43.979729   66989 ssh_runner.go:195] Run: rm -f paused
	I0829 20:30:44.028715   66989 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:30:44.030675   66989 out.go:177] * Done! kubectl is now configured to use "embed-certs-388383" cluster and "default" namespace by default
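
"Done!" above means the embed-certs profile finished starting and the host kubeconfig context was switched to it; a quick manual verification (assumes kubectl 1.31.0 on the host, as the version check reports):

    kubectl config current-context   # expect: embed-certs-388383
    kubectl get nodes
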
	I0829 20:30:39.850811   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:41.850941   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:42.993711   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:45.492729   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:44.351171   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:46.849842   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:48.851125   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:47.494031   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:49.993291   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:51.350926   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:53.850966   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:52.494604   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:54.994054   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:56.350237   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:58.856068   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:56.994483   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:59.494879   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:01.351293   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:03.850415   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:01.994470   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:04.493393   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:05.851663   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:08.350513   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:06.988349   68084 pod_ready.go:82] duration metric: took 4m0.000994859s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" ...
	E0829 20:31:06.988378   68084 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 20:31:06.988396   68084 pod_ready.go:39] duration metric: took 4m13.5587561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:06.988421   68084 kubeadm.go:597] duration metric: took 4m20.63419422s to restartPrimaryControlPlane
	W0829 20:31:06.988470   68084 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:31:06.988492   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:31:10.350782   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:12.851120   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:14.919490   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:31:14.920124   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:14.920395   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
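
The failing kubelet probe above is runnable by hand on that node; kubeadm is issuing exactly the HTTP call it names:

    curl -sSL http://localhost:10248/healthz
    # "connection refused" means the kubelet process is not up (yet);
    # sudo journalctl -u kubelet -n 100 usually shows why
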
	I0829 20:31:15.350794   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:17.351675   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:19.920740   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:19.920993   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:19.858714   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:22.351208   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:24.851679   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:27.351087   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.177614   68084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.189095849s)
	I0829 20:31:33.177712   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:31:33.202840   68084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:31:33.220648   68084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:31:33.239458   68084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:31:33.239479   68084 kubeadm.go:157] found existing configuration files:
	
	I0829 20:31:33.239519   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 20:31:33.257831   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:31:33.257900   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:31:33.272621   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 20:31:33.287906   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:31:33.287975   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:31:33.302931   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 20:31:33.312359   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:31:33.312411   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:31:33.322850   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 20:31:33.332224   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:31:33.332280   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
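
The four grep-then-rm steps above amount to one cleanup rule: any kubeconfig under /etc/kubernetes that does not reference this profile's apiserver endpoint is removed before kubeadm init rewrites it. Condensed, with the endpoint from the log:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8444' \
        /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
    done
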
	I0829 20:31:33.342072   68084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:31:33.388790   68084 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 20:31:33.388844   68084 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:31:33.506108   68084 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:31:33.506263   68084 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:31:33.506403   68084 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:31:33.515467   68084 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:31:29.921355   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:29.921591   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:29.351212   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:31.351683   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.850337   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.517487   68084 out.go:235]   - Generating certificates and keys ...
	I0829 20:31:33.517590   68084 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:31:33.517697   68084 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:31:33.517809   68084 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:31:33.517907   68084 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:31:33.518009   68084 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:31:33.518086   68084 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:31:33.518174   68084 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:31:33.518266   68084 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:31:33.518379   68084 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:31:33.518495   68084 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:31:33.518567   68084 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:31:33.518656   68084 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:31:33.888310   68084 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:31:34.000803   68084 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 20:31:34.103016   68084 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:31:34.461677   68084 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:31:34.617814   68084 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:31:34.618316   68084 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:31:34.622440   68084 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:31:34.624324   68084 out.go:235]   - Booting up control plane ...
	I0829 20:31:34.624428   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:31:34.624527   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:31:34.624882   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:31:34.647388   68084 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:31:34.653776   68084 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:31:34.653864   68084 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:31:34.795338   68084 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 20:31:34.795463   68084 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 20:31:35.797126   68084 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001854627s
	I0829 20:31:35.797253   68084 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 20:31:35.852495   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:37.344608   66841 pod_ready.go:82] duration metric: took 4m0.000461851s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" ...
	E0829 20:31:37.344637   66841 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 20:31:37.344661   66841 pod_ready.go:39] duration metric: took 4m13.033970527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:37.344693   66841 kubeadm.go:597] duration metric: took 4m20.095743839s to restartPrimaryControlPlane
	W0829 20:31:37.344752   66841 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:31:37.344780   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:31:40.799092   68084 kubeadm.go:310] [api-check] The API server is healthy after 5.002121632s
	I0829 20:31:40.813865   68084 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 20:31:40.829677   68084 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 20:31:40.870324   68084 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 20:31:40.870598   68084 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-145096 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 20:31:40.889024   68084 kubeadm.go:310] [bootstrap-token] Using token: gy9sl5.6oyya9sd2gbep67e
	I0829 20:31:40.890947   68084 out.go:235]   - Configuring RBAC rules ...
	I0829 20:31:40.891083   68084 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 20:31:40.898748   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 20:31:40.912914   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 20:31:40.916739   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 20:31:40.923995   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 20:31:40.930447   68084 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 20:31:41.206632   68084 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 20:31:41.679673   68084 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 20:31:42.206707   68084 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 20:31:42.206733   68084 kubeadm.go:310] 
	I0829 20:31:42.206819   68084 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 20:31:42.206830   68084 kubeadm.go:310] 
	I0829 20:31:42.206974   68084 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 20:31:42.206996   68084 kubeadm.go:310] 
	I0829 20:31:42.207018   68084 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 20:31:42.207073   68084 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 20:31:42.207120   68084 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 20:31:42.207127   68084 kubeadm.go:310] 
	I0829 20:31:42.207189   68084 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 20:31:42.207196   68084 kubeadm.go:310] 
	I0829 20:31:42.207234   68084 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 20:31:42.207238   68084 kubeadm.go:310] 
	I0829 20:31:42.207285   68084 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 20:31:42.207382   68084 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 20:31:42.207473   68084 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 20:31:42.207484   68084 kubeadm.go:310] 
	I0829 20:31:42.207611   68084 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 20:31:42.207727   68084 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 20:31:42.207736   68084 kubeadm.go:310] 
	I0829 20:31:42.207854   68084 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token gy9sl5.6oyya9sd2gbep67e \
	I0829 20:31:42.207962   68084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 20:31:42.207983   68084 kubeadm.go:310] 	--control-plane 
	I0829 20:31:42.207986   68084 kubeadm.go:310] 
	I0829 20:31:42.208087   68084 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 20:31:42.208106   68084 kubeadm.go:310] 
	I0829 20:31:42.208214   68084 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token gy9sl5.6oyya9sd2gbep67e \
	I0829 20:31:42.208342   68084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
	I0829 20:31:42.209248   68084 kubeadm.go:310] W0829 20:31:33.349141    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:31:42.209595   68084 kubeadm.go:310] W0829 20:31:33.349919    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:31:42.209769   68084 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:31:42.209803   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:31:42.209817   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:31:42.211545   68084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:31:42.212889   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:31:42.223984   68084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
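
The 496-byte payload itself is not shown in the log; what lands in /etc/cni/net.d/1-k8s.conflist is a plugin list in the standard CNI conflist schema. A representative bridge configuration written the same way (field values are illustrative, not necessarily minikube's exact file):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
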
	I0829 20:31:42.242703   68084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:31:42.242779   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-145096 minikube.k8s.io/updated_at=2024_08_29T20_31_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=default-k8s-diff-port-145096 minikube.k8s.io/primary=true
	I0829 20:31:42.242779   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:42.448824   68084 ops.go:34] apiserver oom_adj: -16
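
The -16 read back above is the apiserver's OOM score adjustment, meaning the kernel is biased against killing it under memory pressure; the check is the same one-liner the log runs:

    cat /proc/$(pgrep kube-apiserver)/oom_adj   # -16 here
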
	I0829 20:31:42.453004   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:42.953891   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:43.453922   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:43.953465   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:44.453647   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:44.954035   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:45.453660   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:45.953536   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:46.046900   68084 kubeadm.go:1113] duration metric: took 3.804195127s to wait for elevateKubeSystemPrivileges
	I0829 20:31:46.046927   68084 kubeadm.go:394] duration metric: took 4m59.74590678s to StartCluster
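
The half-second cadence of the "get sa default" calls above is a readiness gate: elevateKubeSystemPrivileges polls until the cluster's default ServiceAccount exists, since the minikube-rbac cluster-admin binding created at 20:31:42 targets kube-system:default. The equivalent loop, with the paths from the log:

    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
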
	I0829 20:31:46.046947   68084 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:31:46.047046   68084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:31:46.048617   68084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:31:46.048876   68084 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:31:46.048979   68084 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:31:46.049063   68084 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049090   68084 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049090   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:31:46.049099   68084 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-145096"
	I0829 20:31:46.049136   68084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-145096"
	W0829 20:31:46.049143   68084 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:31:46.049174   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.049104   68084 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049264   68084 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-145096"
	W0829 20:31:46.049280   68084 addons.go:243] addon metrics-server should already be in state true
	I0829 20:31:46.049335   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.049569   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049574   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049595   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.049599   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.049698   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049722   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.050441   68084 out.go:177] * Verifying Kubernetes components...
	I0829 20:31:46.052039   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:31:46.065735   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39367
	I0829 20:31:46.065909   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32931
	I0829 20:31:46.066241   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.066344   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.066900   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.066918   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.067024   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.067045   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.067438   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.067481   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.067665   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.067902   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.067931   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.069157   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0829 20:31:46.070637   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.070757   68084 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-145096"
	W0829 20:31:46.070771   68084 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:31:46.070803   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.071118   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.071124   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.071132   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.071155   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.071510   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.072052   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.072095   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.085524   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39387
	I0829 20:31:46.085987   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.086553   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.086576   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.086966   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.087138   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.087202   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0829 20:31:46.087621   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.088358   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.088381   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.088708   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.088806   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.089193   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.089363   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.090878   68084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:31:46.091571   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I0829 20:31:46.092208   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.092291   68084 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:31:46.092316   68084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:31:46.092337   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.092660   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.092687   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.093044   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.093230   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.095184   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.096265   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.096792   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.096821   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.097088   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.097274   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.097433   68084 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:31:46.097448   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.097645   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.098681   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:31:46.098697   68084 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:31:46.098715   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.101604   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.101993   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.102014   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.102328   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.102529   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.102687   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.102847   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.108154   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I0829 20:31:46.108627   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.109111   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.109129   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.109446   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.109675   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.111174   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.111440   68084 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:31:46.111452   68084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:31:46.111469   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.114302   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.114805   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.114832   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.114921   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.115102   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.115256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.115400   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.277748   68084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:31:46.297001   68084 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-145096" to be "Ready" ...
	I0829 20:31:46.317473   68084 node_ready.go:49] node "default-k8s-diff-port-145096" has status "Ready":"True"
	I0829 20:31:46.317498   68084 node_ready.go:38] duration metric: took 20.469679ms for node "default-k8s-diff-port-145096" to be "Ready" ...
	I0829 20:31:46.317509   68084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:46.332180   68084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
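
Equivalent manual checks for the two readiness gates above, as a sketch (assumes the freshly written kubeconfig is active; kubectl wait is a standard client feature, not what minikube calls internally):

    kubectl get node default-k8s-diff-port-145096 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # True
    kubectl -n kube-system wait --for=condition=Ready \
      pod/etcd-default-k8s-diff-port-145096 --timeout=6m
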
	I0829 20:31:46.393588   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:31:46.399404   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:31:46.399428   68084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:31:46.453014   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:31:46.460100   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:31:46.460126   68084 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:31:46.541980   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:31:46.542002   68084 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:31:46.607148   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
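
Once the four manifests above are applied, the addon can be inspected through the stock metrics-server objects they create (an assumption that the addon follows the usual layout: a kube-system Deployment plus the APIService registered by metrics-apiservice.yaml):

    kubectl -n kube-system get deploy metrics-server
    kubectl get apiservice v1beta1.metrics.k8s.io

Note the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line earlier: the test points metrics-server at an unpullable image, which explains why the metrics-server pods in this log never reach Ready and the later pod_ready waits time out.
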
	I0829 20:31:47.296344   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296370   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.296445   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296471   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.296678   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.296722   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.296744   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296764   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.298376   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298379   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298404   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.298412   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298420   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298436   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.298453   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.298464   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.298700   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298726   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298729   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.318720   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.318745   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.319031   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.319053   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.319069   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.870171   68084 pod_ready.go:93] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:47.870198   68084 pod_ready.go:82] duration metric: took 1.537994965s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:47.870208   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.057308   68084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.450120563s)
	I0829 20:31:48.057358   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:48.057371   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:48.057667   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:48.057722   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:48.057734   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:48.057747   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:48.057759   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:48.057989   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:48.058005   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:48.058021   68084 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-145096"
	I0829 20:31:48.059886   68084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0829 20:31:48.061124   68084 addons.go:510] duration metric: took 2.012141801s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
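
For reference, one way to confirm which addons actually ended up enabled on this profile is minikube's addons list subcommand (profile name taken from the log above; a quick sanity check, not part of the test run):

    # show enabled/disabled status for every addon on this profile
    minikube addons list -p default-k8s-diff-port-145096
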
	I0829 20:31:48.875874   68084 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:48.875897   68084 pod_ready.go:82] duration metric: took 1.005682325s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.875912   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.879828   68084 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:48.879846   68084 pod_ready.go:82] duration metric: took 3.928263ms for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.879863   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:49.922318   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:49.922554   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:50.886764   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:52.887708   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:55.387571   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:55.886194   68084 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:55.886217   68084 pod_ready.go:82] duration metric: took 7.006347256s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:55.886225   68084 pod_ready.go:39] duration metric: took 9.568704494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:55.886238   68084 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:31:55.886286   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:31:55.901604   68084 api_server.go:72] duration metric: took 9.852691692s to wait for apiserver process to appear ...
	I0829 20:31:55.901628   68084 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:31:55.901643   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:31:55.905564   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 200:
	ok
	I0829 20:31:55.906387   68084 api_server.go:141] control plane version: v1.31.0
	I0829 20:31:55.906406   68084 api_server.go:131] duration metric: took 4.772472ms to wait for apiserver health ...
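
The healthz probe logged above can be reproduced by hand against the same endpoint; -k skips TLS verification, since the apiserver's serving certificate is generally not in the host trust store (address and port taken from the log):

    # a healthy apiserver answers with the literal body "ok"
    curl -k https://192.168.72.140:8444/healthz
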
	I0829 20:31:55.906413   68084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:31:55.911423   68084 system_pods.go:59] 9 kube-system pods found
	I0829 20:31:55.911444   68084 system_pods.go:61] "coredns-6f6b679f8f-l25kd" [86947930-0d47-407a-b876-b482596fbe8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.911451   68084 system_pods.go:61] "coredns-6f6b679f8f-lnm92" [a6caefe0-e883-4460-87de-25ee97191e1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.911458   68084 system_pods.go:61] "etcd-default-k8s-diff-port-145096" [caba3f17-6544-4fe0-8dd3-0dd95e8df8ce] Running
	I0829 20:31:55.911465   68084 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-145096" [9b1ca00a-613b-414f-81e9-601d53d43207] Running
	I0829 20:31:55.911470   68084 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-145096" [e7145779-85cf-458d-9870-6fda4853d29d] Running
	I0829 20:31:55.911479   68084 system_pods.go:61] "kube-proxy-ptswc" [96c01414-e8e8-4731-824b-11d636285fb3] Running
	I0829 20:31:55.911488   68084 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-145096" [0d2cc607-72ac-4417-8a7c-196bf3ec90d7] Running
	I0829 20:31:55.911495   68084 system_pods.go:61] "metrics-server-6867b74b74-6sdqg" [2c9efadb-89bb-4aa6-b0f0-ddcb3e931674] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:31:55.911503   68084 system_pods.go:61] "storage-provisioner" [81531989-d045-44fb-b1a1-0817af27c804] Running
	I0829 20:31:55.911512   68084 system_pods.go:74] duration metric: took 5.092824ms to wait for pod list to return data ...
	I0829 20:31:55.911523   68084 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:31:55.913794   68084 default_sa.go:45] found service account: "default"
	I0829 20:31:55.913820   68084 default_sa.go:55] duration metric: took 2.286925ms for default service account to be created ...
	I0829 20:31:55.913830   68084 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:31:55.919628   68084 system_pods.go:86] 9 kube-system pods found
	I0829 20:31:55.919666   68084 system_pods.go:89] "coredns-6f6b679f8f-l25kd" [86947930-0d47-407a-b876-b482596fbe8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.919677   68084 system_pods.go:89] "coredns-6f6b679f8f-lnm92" [a6caefe0-e883-4460-87de-25ee97191e1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.919686   68084 system_pods.go:89] "etcd-default-k8s-diff-port-145096" [caba3f17-6544-4fe0-8dd3-0dd95e8df8ce] Running
	I0829 20:31:55.919693   68084 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-145096" [9b1ca00a-613b-414f-81e9-601d53d43207] Running
	I0829 20:31:55.919699   68084 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-145096" [e7145779-85cf-458d-9870-6fda4853d29d] Running
	I0829 20:31:55.919704   68084 system_pods.go:89] "kube-proxy-ptswc" [96c01414-e8e8-4731-824b-11d636285fb3] Running
	I0829 20:31:55.919710   68084 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-145096" [0d2cc607-72ac-4417-8a7c-196bf3ec90d7] Running
	I0829 20:31:55.919718   68084 system_pods.go:89] "metrics-server-6867b74b74-6sdqg" [2c9efadb-89bb-4aa6-b0f0-ddcb3e931674] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:31:55.919725   68084 system_pods.go:89] "storage-provisioner" [81531989-d045-44fb-b1a1-0817af27c804] Running
	I0829 20:31:55.919734   68084 system_pods.go:126] duration metric: took 5.897752ms to wait for k8s-apps to be running ...
	I0829 20:31:55.919745   68084 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:31:55.919800   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:31:55.935429   68084 system_svc.go:56] duration metric: took 15.676316ms WaitForService to wait for kubelet
	I0829 20:31:55.935460   68084 kubeadm.go:582] duration metric: took 9.886551311s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:31:55.935483   68084 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:31:55.938444   68084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:31:55.938466   68084 node_conditions.go:123] node cpu capacity is 2
	I0829 20:31:55.938476   68084 node_conditions.go:105] duration metric: took 2.988434ms to run NodePressure ...
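
The cpu and ephemeral-storage figures read by the NodePressure check come straight off the node object; a sketch of the same lookup with kubectl (column names are illustrative):

    kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage
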
	I0829 20:31:55.938486   68084 start.go:241] waiting for startup goroutines ...
	I0829 20:31:55.938493   68084 start.go:246] waiting for cluster config update ...
	I0829 20:31:55.938503   68084 start.go:255] writing updated cluster config ...
	I0829 20:31:55.938834   68084 ssh_runner.go:195] Run: rm -f paused
	I0829 20:31:55.987879   68084 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:31:55.989766   68084 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-145096" cluster and "default" namespace by default
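
At this point the kubeconfig context has been switched for the caller; a minimal sanity check (expected output inferred from the log, not captured in it):

    kubectl config current-context   # should print default-k8s-diff-port-145096
    kubectl get nodes                # the single node should report Ready
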
	I0829 20:32:03.506190   66841 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.161387814s)
	I0829 20:32:03.506268   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:03.530660   66841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:32:03.550784   66841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:32:03.565054   66841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:32:03.565085   66841 kubeadm.go:157] found existing configuration files:
	
	I0829 20:32:03.565131   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:32:03.586492   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:32:03.586577   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:32:03.605061   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:32:03.617990   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:32:03.618054   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:32:03.635587   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:32:03.645495   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:32:03.645559   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:32:03.655081   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:32:03.664640   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:32:03.664703   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
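
The four grep/rm pairs above implement a single pattern: keep a kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it so that kubeadm regenerates it. As a shell sketch (paths and endpoint taken from the log; the actual minikube code drives each command individually over SSH):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only when it targets the expected endpoint
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
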
	I0829 20:32:03.674097   66841 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:32:03.721087   66841 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 20:32:03.721155   66841 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:32:03.839829   66841 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:32:03.839985   66841 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:32:03.840079   66841 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:32:03.849047   66841 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:32:03.850883   66841 out.go:235]   - Generating certificates and keys ...
	I0829 20:32:03.850970   66841 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:32:03.851045   66841 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:32:03.851129   66841 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:32:03.851222   66841 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:32:03.851292   66841 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:32:03.851340   66841 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:32:03.851399   66841 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:32:03.851450   66841 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:32:03.851515   66841 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:32:03.851620   66841 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:32:03.851687   66841 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:32:03.851755   66841 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:32:03.968189   66841 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:32:04.253016   66841 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 20:32:04.341190   66841 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:32:04.491607   66841 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:32:04.616753   66841 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:32:04.617354   66841 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:32:04.619961   66841 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:32:04.621690   66841 out.go:235]   - Booting up control plane ...
	I0829 20:32:04.621799   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:32:04.621910   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:32:04.622021   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:32:04.643758   66841 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:32:04.650541   66841 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:32:04.650612   66841 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:32:04.786596   66841 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 20:32:04.786755   66841 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 20:32:05.788381   66841 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001614523s
	I0829 20:32:05.788512   66841 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 20:32:10.789752   66841 kubeadm.go:310] [api-check] The API server is healthy after 5.001571241s
	I0829 20:32:10.803237   66841 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 20:32:10.822640   66841 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 20:32:10.845744   66841 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 20:32:10.846050   66841 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-397724 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 20:32:10.856315   66841 kubeadm.go:310] [bootstrap-token] Using token: 3k2s43.7gy6mzkt91kkied7
	I0829 20:32:10.857834   66841 out.go:235]   - Configuring RBAC rules ...
	I0829 20:32:10.857947   66841 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 20:32:10.867339   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 20:32:10.876522   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 20:32:10.879786   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 20:32:10.885043   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 20:32:10.892077   66841 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 20:32:11.196796   66841 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 20:32:11.630072   66841 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 20:32:12.200197   66841 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 20:32:12.200232   66841 kubeadm.go:310] 
	I0829 20:32:12.200314   66841 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 20:32:12.200326   66841 kubeadm.go:310] 
	I0829 20:32:12.200406   66841 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 20:32:12.200416   66841 kubeadm.go:310] 
	I0829 20:32:12.200450   66841 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 20:32:12.200536   66841 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 20:32:12.200606   66841 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 20:32:12.200616   66841 kubeadm.go:310] 
	I0829 20:32:12.200687   66841 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 20:32:12.200700   66841 kubeadm.go:310] 
	I0829 20:32:12.200744   66841 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 20:32:12.200750   66841 kubeadm.go:310] 
	I0829 20:32:12.200793   66841 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 20:32:12.200861   66841 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 20:32:12.200918   66841 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 20:32:12.200924   66841 kubeadm.go:310] 
	I0829 20:32:12.201048   66841 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 20:32:12.201144   66841 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 20:32:12.201152   66841 kubeadm.go:310] 
	I0829 20:32:12.201255   66841 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3k2s43.7gy6mzkt91kkied7 \
	I0829 20:32:12.201373   66841 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 20:32:12.201400   66841 kubeadm.go:310] 	--control-plane 
	I0829 20:32:12.201411   66841 kubeadm.go:310] 
	I0829 20:32:12.201487   66841 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 20:32:12.201495   66841 kubeadm.go:310] 
	I0829 20:32:12.201574   66841 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3k2s43.7gy6mzkt91kkied7 \
	I0829 20:32:12.201710   66841 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
	I0829 20:32:12.202900   66841 kubeadm.go:310] W0829 20:32:03.691334    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:32:12.203223   66841 kubeadm.go:310] W0829 20:32:03.692151    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:32:12.203339   66841 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:32:12.203366   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:32:12.203381   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:32:12.205733   66841 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:32:12.206905   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:32:12.218121   66841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
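
The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log; a typical bridge conflist of this kind looks roughly like the sketch below (the plugin flags and pod subnet are assumptions, not the actual file contents):

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
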
	I0829 20:32:12.237885   66841 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:32:12.237989   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:12.238006   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-397724 minikube.k8s.io/updated_at=2024_08_29T20_32_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=no-preload-397724 minikube.k8s.io/primary=true
	I0829 20:32:12.282191   66841 ops.go:34] apiserver oom_adj: -16
	I0829 20:32:12.430006   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:12.930327   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:13.430210   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:13.930065   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:14.430163   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:14.930189   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:15.430677   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:15.930670   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:16.430943   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:16.549095   66841 kubeadm.go:1113] duration metric: took 4.311165714s to wait for elevateKubeSystemPrivileges
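
The half-second polling loop above is waiting for the control plane to create the "default" ServiceAccount after the minikube-rbac ClusterRoleBinding (created at 20:32:12.238006) is applied; once it succeeds, the binding can be confirmed with:

    # the binding grants cluster-admin to the kube-system:default service account
    kubectl get clusterrolebinding minikube-rbac -o wide
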
	I0829 20:32:16.549136   66841 kubeadm.go:394] duration metric: took 4m59.355577107s to StartCluster
	I0829 20:32:16.549156   66841 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:32:16.549229   66841 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:32:16.550926   66841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:32:16.551141   66841 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:32:16.551202   66841 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:32:16.551291   66841 addons.go:69] Setting storage-provisioner=true in profile "no-preload-397724"
	I0829 20:32:16.551315   66841 addons.go:69] Setting default-storageclass=true in profile "no-preload-397724"
	I0829 20:32:16.551329   66841 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:32:16.551340   66841 addons.go:69] Setting metrics-server=true in profile "no-preload-397724"
	I0829 20:32:16.551389   66841 addons.go:234] Setting addon metrics-server=true in "no-preload-397724"
	W0829 20:32:16.551404   66841 addons.go:243] addon metrics-server should already be in state true
	I0829 20:32:16.551442   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.551360   66841 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-397724"
	I0829 20:32:16.551324   66841 addons.go:234] Setting addon storage-provisioner=true in "no-preload-397724"
	W0829 20:32:16.551673   66841 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:32:16.551705   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.551872   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.551873   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.551908   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.551929   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.552036   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.552065   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.552634   66841 out.go:177] * Verifying Kubernetes components...
	I0829 20:32:16.553973   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:32:16.567797   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43335
	I0829 20:32:16.568321   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.568884   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.568910   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.569328   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.569941   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.569978   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.573055   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0829 20:32:16.573642   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0829 20:32:16.573770   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.574303   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.574321   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.574394   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.574913   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.574933   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.574935   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.575471   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.575511   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.575724   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.575950   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.579912   66841 addons.go:234] Setting addon default-storageclass=true in "no-preload-397724"
	W0829 20:32:16.579932   66841 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:32:16.579960   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.580281   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.580298   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.591264   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
	I0829 20:32:16.591442   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0829 20:32:16.591777   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.591827   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.592275   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.592289   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.592289   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.592307   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.592702   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.592726   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.592881   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.592882   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.594494   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.594956   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.596431   66841 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:32:16.596433   66841 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:32:16.597503   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:32:16.597524   66841 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:32:16.597547   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.597607   66841 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:32:16.597625   66841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:32:16.597641   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.598780   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32841
	I0829 20:32:16.599272   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.599915   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.599937   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.601210   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.601613   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.601965   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.602159   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.602190   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.602328   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.602867   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.602998   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.603188   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.603234   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.603287   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.603434   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.603487   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.603691   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.603708   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.603857   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.603977   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.619336   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0829 20:32:16.619806   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.620269   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.620286   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.620604   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.620818   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.622348   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.622563   66841 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:32:16.622580   66841 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:32:16.622597   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.625203   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.625542   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.625570   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.625746   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.625934   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.626094   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.626266   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.787525   66841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:32:16.817674   66841 node_ready.go:35] waiting up to 6m0s for node "no-preload-397724" to be "Ready" ...
	I0829 20:32:16.833992   66841 node_ready.go:49] node "no-preload-397724" has status "Ready":"True"
	I0829 20:32:16.834030   66841 node_ready.go:38] duration metric: took 16.322874ms for node "no-preload-397724" to be "Ready" ...
	I0829 20:32:16.834042   66841 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:32:16.843147   66841 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:16.902589   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:32:16.902613   66841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:32:16.902859   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:32:16.903193   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:32:16.922497   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:32:16.922518   66841 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:32:16.966207   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:32:16.966240   66841 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:32:17.004882   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:32:17.204576   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.204613   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.204968   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.204987   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.204995   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.204994   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.205002   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.205261   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.205278   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.211789   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.211811   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.212074   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.212089   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.212119   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.902866   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.902897   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.903218   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.903266   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.903278   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.903286   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.903296   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.903556   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.903572   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.344211   66841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.33928059s)
	I0829 20:32:18.344259   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:18.344274   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:18.344571   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:18.344589   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.344611   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:18.344626   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:18.344948   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:18.344980   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:18.345010   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.345025   66841 addons.go:475] Verifying addon metrics-server=true in "no-preload-397724"
	I0829 20:32:18.346919   66841 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 20:32:18.348704   66841 addons.go:510] duration metric: took 1.797503952s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
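
metrics-server is still Pending in the pod listings that follow; a hedged way to watch it come up and to confirm its API registration (resource names are the upstream metrics-server defaults, assumed unchanged by the addon):

    kubectl -n kube-system rollout status deployment/metrics-server
    kubectl get apiservice v1beta1.metrics.k8s.io
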
	I0829 20:32:18.850832   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:18.850853   66841 pod_ready.go:82] duration metric: took 2.007683093s for pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:18.850862   66841 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.357679   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.357702   66841 pod_ready.go:82] duration metric: took 1.506832539s for pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.357710   66841 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.361830   66841 pod_ready.go:93] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.361854   66841 pod_ready.go:82] duration metric: took 4.136801ms for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.361865   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.365719   66841 pod_ready.go:93] pod "kube-apiserver-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.365733   66841 pod_ready.go:82] duration metric: took 3.861894ms for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.365741   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.369596   66841 pod_ready.go:93] pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.369611   66841 pod_ready.go:82] duration metric: took 3.864669ms for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.369619   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f4x4j" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.447788   66841 pod_ready.go:93] pod "kube-proxy-f4x4j" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.447812   66841 pod_ready.go:82] duration metric: took 78.187574ms for pod "kube-proxy-f4x4j" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.447823   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:22.049084   66841 pod_ready.go:93] pod "kube-scheduler-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:22.049105   66841 pod_ready.go:82] duration metric: took 1.601276793s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:22.049113   66841 pod_ready.go:39] duration metric: took 5.215058301s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:32:22.049125   66841 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:32:22.049172   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:32:22.066060   66841 api_server.go:72] duration metric: took 5.514888299s to wait for apiserver process to appear ...
	I0829 20:32:22.066086   66841 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:32:22.066109   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:32:22.072343   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I0829 20:32:22.073798   66841 api_server.go:141] control plane version: v1.31.0
	I0829 20:32:22.073821   66841 api_server.go:131] duration metric: took 7.728095ms to wait for apiserver health ...
	I0829 20:32:22.073828   66841 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:32:22.252273   66841 system_pods.go:59] 9 kube-system pods found
	I0829 20:32:22.252302   66841 system_pods.go:61] "coredns-6f6b679f8f-crgtj" [c48571a8-18ae-4737-a05b-4a77736aee35] Running
	I0829 20:32:22.252309   66841 system_pods.go:61] "coredns-6f6b679f8f-dw2r7" [6edda799-e2d6-402b-b4cd-7e54b2b89ca5] Running
	I0829 20:32:22.252315   66841 system_pods.go:61] "etcd-no-preload-397724" [15473208-a76c-4bc5-810f-e78d59538493] Running
	I0829 20:32:22.252320   66841 system_pods.go:61] "kube-apiserver-no-preload-397724" [521c6041-888f-4145-aabb-54da7382953d] Running
	I0829 20:32:22.252325   66841 system_pods.go:61] "kube-controller-manager-no-preload-397724" [fd5afaf8-898d-4985-8efc-5628709a52cd] Running
	I0829 20:32:22.252329   66841 system_pods.go:61] "kube-proxy-f4x4j" [eb76dc5a-016a-416c-8880-f76fc2d2a9bb] Running
	I0829 20:32:22.252333   66841 system_pods.go:61] "kube-scheduler-no-preload-397724" [77d9e2de-ee8e-4cb2-a7f0-5d9b96bd9691] Running
	I0829 20:32:22.252342   66841 system_pods.go:61] "metrics-server-6867b74b74-nxdc5" [6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:32:22.252348   66841 system_pods.go:61] "storage-provisioner" [8b6c02d6-7a39-4fea-80b4-4ba02904232c] Running
	I0829 20:32:22.252358   66841 system_pods.go:74] duration metric: took 178.523887ms to wait for pod list to return data ...
	I0829 20:32:22.252370   66841 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:32:22.448475   66841 default_sa.go:45] found service account: "default"
	I0829 20:32:22.448499   66841 default_sa.go:55] duration metric: took 196.123693ms for default service account to be created ...
	I0829 20:32:22.448508   66841 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:32:22.650996   66841 system_pods.go:86] 9 kube-system pods found
	I0829 20:32:22.651023   66841 system_pods.go:89] "coredns-6f6b679f8f-crgtj" [c48571a8-18ae-4737-a05b-4a77736aee35] Running
	I0829 20:32:22.651029   66841 system_pods.go:89] "coredns-6f6b679f8f-dw2r7" [6edda799-e2d6-402b-b4cd-7e54b2b89ca5] Running
	I0829 20:32:22.651033   66841 system_pods.go:89] "etcd-no-preload-397724" [15473208-a76c-4bc5-810f-e78d59538493] Running
	I0829 20:32:22.651037   66841 system_pods.go:89] "kube-apiserver-no-preload-397724" [521c6041-888f-4145-aabb-54da7382953d] Running
	I0829 20:32:22.651042   66841 system_pods.go:89] "kube-controller-manager-no-preload-397724" [fd5afaf8-898d-4985-8efc-5628709a52cd] Running
	I0829 20:32:22.651045   66841 system_pods.go:89] "kube-proxy-f4x4j" [eb76dc5a-016a-416c-8880-f76fc2d2a9bb] Running
	I0829 20:32:22.651048   66841 system_pods.go:89] "kube-scheduler-no-preload-397724" [77d9e2de-ee8e-4cb2-a7f0-5d9b96bd9691] Running
	I0829 20:32:22.651054   66841 system_pods.go:89] "metrics-server-6867b74b74-nxdc5" [6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:32:22.651058   66841 system_pods.go:89] "storage-provisioner" [8b6c02d6-7a39-4fea-80b4-4ba02904232c] Running
	I0829 20:32:22.651065   66841 system_pods.go:126] duration metric: took 202.552304ms to wait for k8s-apps to be running ...
	I0829 20:32:22.651071   66841 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:32:22.651111   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:22.666831   66841 system_svc.go:56] duration metric: took 15.753046ms WaitForService to wait for kubelet
	I0829 20:32:22.666863   66841 kubeadm.go:582] duration metric: took 6.115692499s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:32:22.666888   66841 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:32:22.848742   66841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:32:22.848766   66841 node_conditions.go:123] node cpu capacity is 2
	I0829 20:32:22.848777   66841 node_conditions.go:105] duration metric: took 181.884368ms to run NodePressure ...
	I0829 20:32:22.848787   66841 start.go:241] waiting for startup goroutines ...
	I0829 20:32:22.848794   66841 start.go:246] waiting for cluster config update ...
	I0829 20:32:22.848803   66841 start.go:255] writing updated cluster config ...
	I0829 20:32:22.849030   66841 ssh_runner.go:195] Run: rm -f paused
	I0829 20:32:22.897503   66841 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:32:22.899404   66841 out.go:177] * Done! kubectl is now configured to use "no-preload-397724" cluster and "default" namespace by default
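	The "Done!" line above ends the log stream from minikube process 66841, which brought up the no-preload-397724 cluster on Kubernetes v1.31.0; the entries that follow come from process 67607, which is still attempting to start the old-k8s-version profile on v1.20.0. A quick sanity check against the newly configured cluster, assuming minikube's default of naming the kubectl context after the profile, could be:
	
		kubectl --context no-preload-397724 -n kube-system get pods
	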
	I0829 20:32:29.924469   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:32:29.924707   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:32:29.924729   67607 kubeadm.go:310] 
	I0829 20:32:29.924801   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:32:29.924855   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:32:29.924865   67607 kubeadm.go:310] 
	I0829 20:32:29.924912   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:32:29.924960   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:32:29.925080   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:32:29.925090   67607 kubeadm.go:310] 
	I0829 20:32:29.925207   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:32:29.925256   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:32:29.925316   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:32:29.925342   67607 kubeadm.go:310] 
	I0829 20:32:29.925493   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:32:29.925616   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 20:32:29.925627   67607 kubeadm.go:310] 
	I0829 20:32:29.925776   67607 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:32:29.925909   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:32:29.926016   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:32:29.926134   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:32:29.926154   67607 kubeadm.go:310] 
	I0829 20:32:29.926605   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:32:29.926723   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:32:29.926812   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0829 20:32:29.926935   67607 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0829 20:32:29.926979   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:32:30.389951   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:30.408455   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:32:30.418493   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:32:30.418513   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:32:30.418582   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:32:30.427909   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:32:30.427957   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:32:30.437122   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:32:30.446157   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:32:30.446203   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:32:30.455480   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.464781   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:32:30.464834   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.474607   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:32:30.484537   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:32:30.484601   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:32:30.494170   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:32:30.717349   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:34:26.784436   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:34:26.784518   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 20:34:26.786158   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:34:26.786196   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:34:26.786276   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:34:26.786353   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:34:26.786437   67607 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:34:26.786486   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:34:26.788271   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:34:26.788380   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:34:26.788453   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:34:26.788523   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:34:26.788593   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:34:26.788665   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:34:26.788714   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:34:26.788769   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:34:26.788826   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:34:26.788894   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:34:26.788961   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:34:26.788993   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:34:26.789044   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:34:26.789084   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:34:26.789143   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:34:26.789228   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:34:26.789312   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:34:26.789441   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:34:26.789577   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:34:26.789647   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:34:26.789717   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:34:26.791166   67607 out.go:235]   - Booting up control plane ...
	I0829 20:34:26.791239   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:34:26.791305   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:34:26.791382   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:34:26.791465   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:34:26.791597   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:34:26.791658   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:34:26.791736   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.791926   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792008   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792182   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792254   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792435   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792492   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792725   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792798   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.793026   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.793043   67607 kubeadm.go:310] 
	I0829 20:34:26.793091   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:34:26.793148   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:34:26.793159   67607 kubeadm.go:310] 
	I0829 20:34:26.793188   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:34:26.793219   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:34:26.793305   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:34:26.793314   67607 kubeadm.go:310] 
	I0829 20:34:26.793438   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:34:26.793483   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:34:26.793515   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:34:26.793522   67607 kubeadm.go:310] 
	I0829 20:34:26.793618   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:34:26.793735   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 20:34:26.793748   67607 kubeadm.go:310] 
	I0829 20:34:26.793895   67607 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:34:26.794020   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:34:26.794125   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:34:26.794227   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:34:26.794285   67607 kubeadm.go:310] 
	I0829 20:34:26.794300   67607 kubeadm.go:394] duration metric: took 7m57.183485424s to StartCluster
	I0829 20:34:26.794357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:34:26.794410   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:34:26.837033   67607 cri.go:89] found id: ""
	I0829 20:34:26.837072   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.837083   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:34:26.837091   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:34:26.837153   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:34:26.871177   67607 cri.go:89] found id: ""
	I0829 20:34:26.871203   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.871213   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:34:26.871220   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:34:26.871280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:34:26.905409   67607 cri.go:89] found id: ""
	I0829 20:34:26.905432   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.905442   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:34:26.905450   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:34:26.905509   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:34:26.940119   67607 cri.go:89] found id: ""
	I0829 20:34:26.940150   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.940161   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:34:26.940169   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:34:26.940217   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:34:26.974555   67607 cri.go:89] found id: ""
	I0829 20:34:26.974589   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.974601   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:34:26.974608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:34:26.974674   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:34:27.010586   67607 cri.go:89] found id: ""
	I0829 20:34:27.010616   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.010631   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:34:27.010639   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:34:27.010704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:34:27.044867   67607 cri.go:89] found id: ""
	I0829 20:34:27.044900   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.044913   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:34:27.044921   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:34:27.044979   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:34:27.079282   67607 cri.go:89] found id: ""
	I0829 20:34:27.079308   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.079316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:34:27.079323   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:34:27.079335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:34:27.093455   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:34:27.093485   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:34:27.179256   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:34:27.179280   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:34:27.179292   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:34:27.305873   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:34:27.305906   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:34:27.349676   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:34:27.349702   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 20:34:27.399787   67607 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 20:34:27.399851   67607 out.go:270] * 
	W0829 20:34:27.399907   67607 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 20:34:27.399919   67607 out.go:270] * 
	W0829 20:34:27.400631   67607 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 20:34:27.403773   67607 out.go:201] 
	W0829 20:34:27.404902   67607 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 20:34:27.404953   67607 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 20:34:27.404981   67607 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 20:34:27.406310   67607 out.go:201] 
	
	
	==> CRI-O <==
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.469024393Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964212468996608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90f94cca-dc94-4c75-a864-8eade4325f57 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.469629753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b697cd80-a6af-4002-bb20-11d304236c38 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.469690964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b697cd80-a6af-4002-bb20-11d304236c38 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.469722281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b697cd80-a6af-4002-bb20-11d304236c38 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.506348652Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8618a1a8-cf2c-4c0d-acbe-7c2ad108d250 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.506463040Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8618a1a8-cf2c-4c0d-acbe-7c2ad108d250 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.508792792Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=237acb61-7846-403c-a5ae-341142b12263 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.509379370Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964212509346601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=237acb61-7846-403c-a5ae-341142b12263 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.510512111Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cebb6935-e36c-442e-9d8b-684aa3c99eac name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.510577431Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cebb6935-e36c-442e-9d8b-684aa3c99eac name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.510636628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cebb6935-e36c-442e-9d8b-684aa3c99eac name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.544322486Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd665cc3-5acf-4822-afbe-408a9734fae4 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.544398025Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd665cc3-5acf-4822-afbe-408a9734fae4 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.545917832Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24cc926f-6ed7-4b2b-aa99-0a6a6d428c59 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.546295611Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964212546275331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24cc926f-6ed7-4b2b-aa99-0a6a6d428c59 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.547012611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=feb5a0da-aada-4e90-bad3-e98e234eeafd name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.547098979Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=feb5a0da-aada-4e90-bad3-e98e234eeafd name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.547141004Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=feb5a0da-aada-4e90-bad3-e98e234eeafd name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.580600968Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=83476c19-43ea-4c2d-8c8c-9c8ffae7f119 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.580674047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=83476c19-43ea-4c2d-8c8c-9c8ffae7f119 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.581892843Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e125b0b-044d-4b31-85f5-618712d18e1b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.582287421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964212582256643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e125b0b-044d-4b31-85f5-618712d18e1b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.582905097Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00085844-ff41-459a-9509-de505399d9b9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.582959648Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00085844-ff41-459a-9509-de505399d9b9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:43:32 old-k8s-version-032002 crio[630]: time="2024-08-29 20:43:32.582989752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=00085844-ff41-459a-9509-de505399d9b9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug29 20:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053894] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042317] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.920296] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.442854] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.576675] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.694150] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.062526] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052165] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.177300] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.162237] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.253464] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +6.389299] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.063933] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.901932] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[ +13.592201] kauditd_printk_skb: 46 callbacks suppressed
	[Aug29 20:30] systemd-fstab-generator[5044]: Ignoring "noauto" option for root device
	[Aug29 20:32] systemd-fstab-generator[5320]: Ignoring "noauto" option for root device
	[  +0.064706] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:43:32 up 17 min,  0 users,  load average: 0.00, 0.02, 0.04
	Linux old-k8s-version-032002 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6492]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6492]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000a3d320, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc0009fcd50, 0x24, 0x0, ...)
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6492]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6492]: net.(*Dialer).DialContext(0xc000130ba0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009fcd50, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6492]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6492]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0007801c0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009fcd50, 0x24, 0x60, 0x7fb844667b48, 0x118, ...)
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6492]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6492]: net/http.(*Transport).dial(0xc0001297c0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009fcd50, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6492]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6492]: net/http.(*Transport).dialConn(0xc0001297c0, 0x4f7fe00, 0xc000052030, 0x0, 0xc0003c26c0, 0x5, 0xc0009fcd50, 0x24, 0x0, 0xc0008bf8c0, ...)
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6492]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6492]: net/http.(*Transport).dialConnFor(0xc0001297c0, 0xc000989ce0)
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6492]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6492]: created by net/http.(*Transport).queueForDial
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6492]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 29 20:43:27 old-k8s-version-032002 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 29 20:43:27 old-k8s-version-032002 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 29 20:43:27 old-k8s-version-032002 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Aug 29 20:43:27 old-k8s-version-032002 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 29 20:43:27 old-k8s-version-032002 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6501]: I0829 20:43:27.990007    6501 server.go:416] Version: v1.20.0
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6501]: I0829 20:43:27.990449    6501 server.go:837] Client rotation is on, will bootstrap in background
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6501]: I0829 20:43:27.993342    6501 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6501]: W0829 20:43:27.994498    6501 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 29 20:43:27 old-k8s-version-032002 kubelet[6501]: I0829 20:43:27.994528    6501 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-032002 -n old-k8s-version-032002
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-032002 -n old-k8s-version-032002: exit status 2 (237.388113ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-032002" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.33s)
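Note on the failure above: the kubelet trace shows the v1.20.0 kubelet crash-looping while dialing the API endpoint (systemd restart counter at 114), which is consistent with the apiserver reporting Stopped. A minimal manual check, assuming the old-k8s-version-032002 VM is still running, would be to tail the kubelet unit from inside the guest with the same minikube binary:

    out/minikube-linux-amd64 -p old-k8s-version-032002 ssh -- sudo journalctl -u kubelet --no-pager -n 50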

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (386.88s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-388383 -n embed-certs-388383
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-29 20:46:12.984420667 +0000 UTC m=+6637.154881974
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-388383 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-388383 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.695µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-388383 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
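The deployment info is empty above because the 9m0s context had already expired before the describe call ran (note the 1.695µs non-zero exit). As a sketch of the same check run outside the harness, assuming the cluster's apiserver is reachable, the image reference can be read straight from the deployment spec:

    kubectl --context embed-certs-388383 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'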
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-388383 -n embed-certs-388383
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-388383 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-388383 logs -n 25: (1.527040767s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-695305             | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-695305                  | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-695305 --memory=2200 --alsologtostderr   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:20 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-695305 image list                           | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:21 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-032002        | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-397724                  | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-397724                                   | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-388383                 | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-145096  | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-032002             | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-145096       | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC | 29 Aug 24 20:31 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:46 UTC | 29 Aug 24 20:46 UTC |
	| start   | -p auto-801672 --memory=3072                           | auto-801672                  | jenkins | v1.33.1 | 29 Aug 24 20:46 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 20:46:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 20:46:13.222134   74174 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:46:13.222431   74174 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:46:13.222439   74174 out.go:358] Setting ErrFile to fd 2...
	I0829 20:46:13.222446   74174 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:46:13.222769   74174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:46:13.223435   74174 out.go:352] Setting JSON to false
	I0829 20:46:13.224570   74174 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8920,"bootTime":1724955453,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 20:46:13.224638   74174 start.go:139] virtualization: kvm guest
	I0829 20:46:13.226887   74174 out.go:177] * [auto-801672] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 20:46:13.228398   74174 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 20:46:13.228449   74174 notify.go:220] Checking for updates...
	I0829 20:46:13.231499   74174 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 20:46:13.232925   74174 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:46:13.234137   74174 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:46:13.235281   74174 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 20:46:13.239791   74174 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 20:46:13.241734   74174 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:46:13.241891   74174 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:46:13.242025   74174 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:46:13.242138   74174 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 20:46:13.290645   74174 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 20:46:13.292000   74174 start.go:297] selected driver: kvm2
	I0829 20:46:13.292020   74174 start.go:901] validating driver "kvm2" against <nil>
	I0829 20:46:13.292051   74174 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 20:46:13.292838   74174 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:46:13.292926   74174 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 20:46:13.318759   74174 install.go:137] /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0829 20:46:13.318817   74174 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 20:46:13.319098   74174 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:46:13.319135   74174 cni.go:84] Creating CNI manager for ""
	I0829 20:46:13.319147   74174 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:46:13.319156   74174 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 20:46:13.319225   74174 start.go:340] cluster config:
	{Name:auto-801672 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-801672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:46:13.319347   74174 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:46:13.321846   74174 out.go:177] * Starting "auto-801672" primary control-plane node in "auto-801672" cluster
	I0829 20:46:13.323056   74174 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:46:13.323105   74174 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 20:46:13.323117   74174 cache.go:56] Caching tarball of preloaded images
	I0829 20:46:13.323261   74174 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 20:46:13.323290   74174 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 20:46:13.323417   74174 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/auto-801672/config.json ...
	I0829 20:46:13.323456   74174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/auto-801672/config.json: {Name:mk48497e3d628e56f737e52c4d9d0ef017146d30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:46:13.323669   74174 start.go:360] acquireMachinesLock for auto-801672: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:46:13.323725   74174 start.go:364] duration metric: took 41.291µs to acquireMachinesLock for "auto-801672"
	I0829 20:46:13.323747   74174 start.go:93] Provisioning new machine with config: &{Name:auto-801672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-801672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:46:13.323836   74174 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.798106497Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964373798037381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=594a22e6-6b96-4117-9ac8-a024abea758b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.798894566Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a978d0e-b8b5-4b2d-bac1-3cb3972040f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.798993124Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a978d0e-b8b5-4b2d-bac1-3cb3972040f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.799358824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c,PodSandboxId:a810a1883fa2153dd0ea4d4610ac786f482173217642cb721e9002cd3067cb8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963208530970511,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 021ca156-b7a8-4647-8efe-db17968fd5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a10b28433da5bdb319544fbbc9449beb70470488d9e9a102a9ed6c411ba287,PodSandboxId:427a0e97e1cbd12443c2eccf76f1bbf66ba802409a795078ab329e50b2eef553,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724963185645627430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfe9fc37-9a64-407f-a902-5c1930185329,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71,PodSandboxId:ae023455e0e752eefc42fe1c79d92263baddd8d005d5f884f1cbac804b34944f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963184073926038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dg6t6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e89b20-ebf4-4738-8ca7-9dc2a0e5653a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f,PodSandboxId:3ecac4e426a3fa7f318bd71d405df2ce85fdea202a3c2269a3cc3a1477b47195,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724963176556139708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcxs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649b40c8-4f4b-40d1-8179-baf378d4c7d7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523,PodSandboxId:a810a1883fa2153dd0ea4d4610ac786f482173217642cb721e9002cd3067cb8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724963176569533845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 021ca156-b7a8-4647-8efe-db17968fd5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6,PodSandboxId:06254346ea7b63bf3f5e493c87303ff8466c5e760eb6be7a459739c8b6afcdea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963172056670659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec2716a5120d1ef3772dcd74efb323d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334,PodSandboxId:d4eec8556a65326856948490a316f317e48b6432fc8183880a6beeea180729d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963172045337704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1415001102c1c7a568af0d1f29aa8cdf,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313,PodSandboxId:db6bfd18987e98a1215e5ccd4fc8a9e4cca3c49a71d7f0c6eee5e32d73e4ab8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963172033169836,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38b8ff96a68e3d306887164202ee858,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd,PodSandboxId:d11262c649c5d3ad292919838dbc5b6b048d8c093d6923bebc7ae6a9bcbbe897,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963172021833091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293f45c954640c40483589dcd8cdc726,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a978d0e-b8b5-4b2d-bac1-3cb3972040f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.855545123Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6aaef4bf-09e0-485d-be28-a22581f19eaf name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.855688012Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6aaef4bf-09e0-485d-be28-a22581f19eaf name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.857449921Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee61208f-21e4-488d-8425-bcb5f5bd45ee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.858448254Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964373858412947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee61208f-21e4-488d-8425-bcb5f5bd45ee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.859907251Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76c64a97-64cd-48b2-b0b7-e41a2011ca82 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.860204260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76c64a97-64cd-48b2-b0b7-e41a2011ca82 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.860729014Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c,PodSandboxId:a810a1883fa2153dd0ea4d4610ac786f482173217642cb721e9002cd3067cb8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963208530970511,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 021ca156-b7a8-4647-8efe-db17968fd5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a10b28433da5bdb319544fbbc9449beb70470488d9e9a102a9ed6c411ba287,PodSandboxId:427a0e97e1cbd12443c2eccf76f1bbf66ba802409a795078ab329e50b2eef553,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724963185645627430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfe9fc37-9a64-407f-a902-5c1930185329,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71,PodSandboxId:ae023455e0e752eefc42fe1c79d92263baddd8d005d5f884f1cbac804b34944f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963184073926038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dg6t6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e89b20-ebf4-4738-8ca7-9dc2a0e5653a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f,PodSandboxId:3ecac4e426a3fa7f318bd71d405df2ce85fdea202a3c2269a3cc3a1477b47195,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724963176556139708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcxs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649b40c8-4f4b-40d1-8179-baf378d4c7d7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523,PodSandboxId:a810a1883fa2153dd0ea4d4610ac786f482173217642cb721e9002cd3067cb8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724963176569533845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 021ca156-b7a8-4647-8efe-db17968fd5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6,PodSandboxId:06254346ea7b63bf3f5e493c87303ff8466c5e760eb6be7a459739c8b6afcdea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963172056670659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec2716a5120d1ef3772dcd74efb323d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334,PodSandboxId:d4eec8556a65326856948490a316f317e48b6432fc8183880a6beeea180729d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963172045337704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1415001102c1c7a568af0d1f29aa8cdf,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313,PodSandboxId:db6bfd18987e98a1215e5ccd4fc8a9e4cca3c49a71d7f0c6eee5e32d73e4ab8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963172033169836,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38b8ff96a68e3d306887164202ee858,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd,PodSandboxId:d11262c649c5d3ad292919838dbc5b6b048d8c093d6923bebc7ae6a9bcbbe897,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963172021833091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293f45c954640c40483589dcd8cdc726,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76c64a97-64cd-48b2-b0b7-e41a2011ca82 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.923449325Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37b165da-a643-46e2-a400-9dff9e31cf2b name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.923578752Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37b165da-a643-46e2-a400-9dff9e31cf2b name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.925191086Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45420d05-f8f3-4729-8f91-2cb18219a45b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.925831587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964373925802699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45420d05-f8f3-4729-8f91-2cb18219a45b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.926697797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4eaedc2-d92f-4fba-b3d1-c71e1225f74b name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.926775703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4eaedc2-d92f-4fba-b3d1-c71e1225f74b name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.927025750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c,PodSandboxId:a810a1883fa2153dd0ea4d4610ac786f482173217642cb721e9002cd3067cb8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963208530970511,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 021ca156-b7a8-4647-8efe-db17968fd5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a10b28433da5bdb319544fbbc9449beb70470488d9e9a102a9ed6c411ba287,PodSandboxId:427a0e97e1cbd12443c2eccf76f1bbf66ba802409a795078ab329e50b2eef553,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724963185645627430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfe9fc37-9a64-407f-a902-5c1930185329,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71,PodSandboxId:ae023455e0e752eefc42fe1c79d92263baddd8d005d5f884f1cbac804b34944f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963184073926038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dg6t6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e89b20-ebf4-4738-8ca7-9dc2a0e5653a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f,PodSandboxId:3ecac4e426a3fa7f318bd71d405df2ce85fdea202a3c2269a3cc3a1477b47195,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724963176556139708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcxs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649b40c8-4f4b-40d1-8179-baf378d4c7d7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523,PodSandboxId:a810a1883fa2153dd0ea4d4610ac786f482173217642cb721e9002cd3067cb8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724963176569533845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 021ca156-b7a8-4647-8efe-db17968fd5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6,PodSandboxId:06254346ea7b63bf3f5e493c87303ff8466c5e760eb6be7a459739c8b6afcdea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963172056670659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec2716a5120d1ef3772dcd74efb323d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334,PodSandboxId:d4eec8556a65326856948490a316f317e48b6432fc8183880a6beeea180729d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963172045337704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1415001102c1c7a568af0d1f29aa8cdf,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313,PodSandboxId:db6bfd18987e98a1215e5ccd4fc8a9e4cca3c49a71d7f0c6eee5e32d73e4ab8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963172033169836,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38b8ff96a68e3d306887164202ee858,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd,PodSandboxId:d11262c649c5d3ad292919838dbc5b6b048d8c093d6923bebc7ae6a9bcbbe897,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963172021833091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293f45c954640c40483589dcd8cdc726,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4eaedc2-d92f-4fba-b3d1-c71e1225f74b name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.977224471Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5a4080e-128a-40fe-b8f3-2b0fddf5d356 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.977426200Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5a4080e-128a-40fe-b8f3-2b0fddf5d356 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.978860689Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=389296ef-a06b-480d-a83f-16fcd7c66fe0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.979458721Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964373979428404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=389296ef-a06b-480d-a83f-16fcd7c66fe0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.980523413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3bb9159d-8cab-4bad-b59e-f7ddf69d4716 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.980620389Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3bb9159d-8cab-4bad-b59e-f7ddf69d4716 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:13 embed-certs-388383 crio[709]: time="2024-08-29 20:46:13.980873255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c,PodSandboxId:a810a1883fa2153dd0ea4d4610ac786f482173217642cb721e9002cd3067cb8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963208530970511,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 021ca156-b7a8-4647-8efe-db17968fd5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a10b28433da5bdb319544fbbc9449beb70470488d9e9a102a9ed6c411ba287,PodSandboxId:427a0e97e1cbd12443c2eccf76f1bbf66ba802409a795078ab329e50b2eef553,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724963185645627430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfe9fc37-9a64-407f-a902-5c1930185329,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71,PodSandboxId:ae023455e0e752eefc42fe1c79d92263baddd8d005d5f884f1cbac804b34944f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963184073926038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dg6t6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e89b20-ebf4-4738-8ca7-9dc2a0e5653a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f,PodSandboxId:3ecac4e426a3fa7f318bd71d405df2ce85fdea202a3c2269a3cc3a1477b47195,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724963176556139708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcxs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649b40c8-4f4b-40d1-8179-baf378d4c7d7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523,PodSandboxId:a810a1883fa2153dd0ea4d4610ac786f482173217642cb721e9002cd3067cb8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724963176569533845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 021ca156-b7a8-4647-8efe-db17968fd5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6,PodSandboxId:06254346ea7b63bf3f5e493c87303ff8466c5e760eb6be7a459739c8b6afcdea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963172056670659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec2716a5120d1ef3772dcd74efb323d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334,PodSandboxId:d4eec8556a65326856948490a316f317e48b6432fc8183880a6beeea180729d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963172045337704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1415001102c1c7a568af0d1f29aa8cdf,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313,PodSandboxId:db6bfd18987e98a1215e5ccd4fc8a9e4cca3c49a71d7f0c6eee5e32d73e4ab8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963172033169836,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38b8ff96a68e3d306887164202ee858,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd,PodSandboxId:d11262c649c5d3ad292919838dbc5b6b048d8c093d6923bebc7ae6a9bcbbe897,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963172021833091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-388383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293f45c954640c40483589dcd8cdc726,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3bb9159d-8cab-4bad-b59e-f7ddf69d4716 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	668d380506744       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   a810a1883fa21       storage-provisioner
	f7a10b28433da       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   427a0e97e1cbd       busybox
	64cc61492bb7f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   ae023455e0e75       coredns-6f6b679f8f-dg6t6
	585208cde484f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   a810a1883fa21       storage-provisioner
	05148cf016224       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      19 minutes ago      Running             kube-proxy                1                   3ecac4e426a3f       kube-proxy-fcxs4
	5ea75e14a71df       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   06254346ea7b6       etcd-embed-certs-388383
	daeb4a7c3dc70       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      20 minutes ago      Running             kube-scheduler            1                   d4eec8556a653       kube-scheduler-embed-certs-388383
	f2c67cb1f348e       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      20 minutes ago      Running             kube-apiserver            1                   db6bfd18987e9       kube-apiserver-embed-certs-388383
	29d4eb837325f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      20 minutes ago      Running             kube-controller-manager   1                   d11262c649c5d       kube-controller-manager-embed-certs-388383
	
	
	==> coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48834 - 59883 "HINFO IN 2135944862837064231.3484080705451116333. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032148635s
	
	
	==> describe nodes <==
	Name:               embed-certs-388383
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-388383
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=embed-certs-388383
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T20_17_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 20:17:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-388383
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 20:46:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 20:42:06 +0000   Thu, 29 Aug 2024 20:17:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 20:42:06 +0000   Thu, 29 Aug 2024 20:17:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 20:42:06 +0000   Thu, 29 Aug 2024 20:17:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 20:42:06 +0000   Thu, 29 Aug 2024 20:26:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.202
	  Hostname:    embed-certs-388383
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 398852be7d3640a4ab85ace9fbdd5515
	  System UUID:                398852be-7d36-40a4-ab85-ace9fbdd5515
	  Boot ID:                    90ecbdb3-55b2-4488-bb5c-67a64288f400
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-6f6b679f8f-dg6t6                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-388383                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-embed-certs-388383             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-embed-certs-388383    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-fcxs4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-388383             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-mx5jh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node embed-certs-388383 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node embed-certs-388383 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node embed-certs-388383 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node embed-certs-388383 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-388383 event: Registered Node embed-certs-388383 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-388383 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-388383 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-388383 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-388383 event: Registered Node embed-certs-388383 in Controller
	
	
	==> dmesg <==
	[Aug29 20:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050918] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040117] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.770046] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.547750] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.606634] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug29 20:26] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.061857] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055711] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.195616] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.127657] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.298931] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +4.136583] systemd-fstab-generator[792]: Ignoring "noauto" option for root device
	[  +2.083941] systemd-fstab-generator[910]: Ignoring "noauto" option for root device
	[  +0.060494] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.522480] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.451771] systemd-fstab-generator[1549]: Ignoring "noauto" option for root device
	[  +4.332363] kauditd_printk_skb: 82 callbacks suppressed
	[ +25.297135] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] <==
	{"level":"info","ts":"2024-08-29T20:26:48.145023Z","caller":"traceutil/trace.go:171","msg":"trace[1560337123] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:620; }","duration":"1.198639032s","start":"2024-08-29T20:26:46.946372Z","end":"2024-08-29T20:26:48.145011Z","steps":["trace[1560337123] 'agreement among raft nodes before linearized reading'  (duration: 1.194321091s)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T20:26:48.145332Z","caller":"traceutil/trace.go:171","msg":"trace[830044130] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:620; }","duration":"238.158903ms","start":"2024-08-29T20:26:47.907159Z","end":"2024-08-29T20:26:48.145318Z","steps":["trace[830044130] 'agreement among raft nodes before linearized reading'  (duration: 233.368844ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T20:26:48.145518Z","caller":"traceutil/trace.go:171","msg":"trace[1283446003] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh; range_end:; response_count:1; response_revision:620; }","duration":"935.10141ms","start":"2024-08-29T20:26:47.210408Z","end":"2024-08-29T20:26:48.145509Z","steps":["trace[1283446003] 'agreement among raft nodes before linearized reading'  (duration: 930.175972ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T20:26:48.145961Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:47.210394Z","time spent":"935.553645ms","remote":"127.0.0.1:55348","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4366,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh\" "}
	{"level":"info","ts":"2024-08-29T20:26:48.145537Z","caller":"traceutil/trace.go:171","msg":"trace[1708817671] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-6867b74b74-mx5jh.17f04cdfc7cf69e3; range_end:; response_count:1; response_revision:620; }","duration":"935.319995ms","start":"2024-08-29T20:26:47.210213Z","end":"2024-08-29T20:26:48.145533Z","steps":["trace[1708817671] 'agreement among raft nodes before linearized reading'  (duration: 930.425681ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T20:26:48.146307Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:47.210182Z","time spent":"936.115479ms","remote":"127.0.0.1:55254","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":852,"request content":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-mx5jh.17f04cdfc7cf69e3\" "}
	{"level":"warn","ts":"2024-08-29T20:26:48.145583Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:46.678436Z","time spent":"1.467134289s","remote":"127.0.0.1:55344","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5770,"request content":"key:\"/registry/minions/embed-certs-388383\" "}
	{"level":"info","ts":"2024-08-29T20:26:48.659503Z","caller":"traceutil/trace.go:171","msg":"trace[453438509] linearizableReadLoop","detail":"{readStateIndex:662; appliedIndex:661; }","duration":"510.155794ms","start":"2024-08-29T20:26:48.149327Z","end":"2024-08-29T20:26:48.659482Z","steps":["trace[453438509] 'read index received'  (duration: 412.189363ms)","trace[453438509] 'applied index is now lower than readState.Index'  (duration: 97.965212ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T20:26:48.659517Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:48.147571Z","time spent":"511.932748ms","remote":"127.0.0.1:55194","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-08-29T20:26:48.659652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"510.304996ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T20:26:48.659690Z","caller":"traceutil/trace.go:171","msg":"trace[1426582622] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:620; }","duration":"510.354713ms","start":"2024-08-29T20:26:48.149324Z","end":"2024-08-29T20:26:48.659679Z","steps":["trace[1426582622] 'agreement among raft nodes before linearized reading'  (duration: 510.225366ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T20:26:48.659717Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:48.149299Z","time spent":"510.410674ms","remote":"127.0.0.1:55170","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-08-29T20:26:48.663733Z","caller":"traceutil/trace.go:171","msg":"trace[100398898] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"514.252712ms","start":"2024-08-29T20:26:48.149460Z","end":"2024-08-29T20:26:48.663713Z","steps":["trace[100398898] 'process raft request'  (duration: 513.838939ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T20:26:48.663774Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"513.43914ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh\" ","response":"range_response_count:1 size:4386"}
	{"level":"info","ts":"2024-08-29T20:26:48.663807Z","caller":"traceutil/trace.go:171","msg":"trace[1582031213] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh; range_end:; response_count:1; response_revision:622; }","duration":"513.476746ms","start":"2024-08-29T20:26:48.150320Z","end":"2024-08-29T20:26:48.663797Z","steps":["trace[1582031213] 'agreement among raft nodes before linearized reading'  (duration: 513.394667ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T20:26:48.663834Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:48.149446Z","time spent":"514.324308ms","remote":"127.0.0.1:55254","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":813,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-6867b74b74-mx5jh.17f04cdfc7cf69e3\" mod_revision:576 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-mx5jh.17f04cdfc7cf69e3\" value_size:718 lease:8048937074636695199 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-mx5jh.17f04cdfc7cf69e3\" > >"}
	{"level":"warn","ts":"2024-08-29T20:26:48.663843Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:48.150225Z","time spent":"513.603255ms","remote":"127.0.0.1:55348","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4410,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh\" "}
	{"level":"info","ts":"2024-08-29T20:26:48.664009Z","caller":"traceutil/trace.go:171","msg":"trace[1022135091] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"511.933178ms","start":"2024-08-29T20:26:48.152069Z","end":"2024-08-29T20:26:48.664002Z","steps":["trace[1022135091] 'process raft request'  (duration: 511.589838ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T20:26:48.664050Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T20:26:48.152059Z","time spent":"511.964162ms","remote":"127.0.0.1:55348","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4371,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh\" mod_revision:612 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh\" value_size:4305 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-mx5jh\" > >"}
	{"level":"info","ts":"2024-08-29T20:36:14.482702Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":854}
	{"level":"info","ts":"2024-08-29T20:36:14.492737Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":854,"took":"9.747755ms","hash":373481181,"current-db-size-bytes":2650112,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2650112,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-29T20:36:14.492794Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":373481181,"revision":854,"compact-revision":-1}
	{"level":"info","ts":"2024-08-29T20:41:14.490305Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1096}
	{"level":"info","ts":"2024-08-29T20:41:14.494321Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1096,"took":"3.699996ms","hash":45855365,"current-db-size-bytes":2650112,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1581056,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-29T20:41:14.494369Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":45855365,"revision":1096,"compact-revision":854}
	
	
	==> kernel <==
	 20:46:14 up 20 min,  0 users,  load average: 0.15, 0.27, 0.17
	Linux embed-certs-388383 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] <==
	E0829 20:41:16.720747       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0829 20:41:16.720758       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 20:41:16.721916       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:41:16.721943       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 20:42:16.722387       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:42:16.722612       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 20:42:16.722719       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:42:16.722762       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 20:42:16.723807       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:42:16.723880       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 20:44:16.724349       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:44:16.724476       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0829 20:44:16.724408       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:44:16.724807       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0829 20:44:16.725638       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:44:16.725915       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] <==
	E0829 20:40:49.482974       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:40:49.927031       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:41:19.489606       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:41:19.933762       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:41:49.496031       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:41:49.940931       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 20:42:06.241813       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-388383"
	E0829 20:42:19.503022       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:42:19.949859       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 20:42:24.229219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="285.534µs"
	I0829 20:42:35.225534       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="130.658µs"
	E0829 20:42:49.509325       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:42:49.957490       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:43:19.515900       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:43:19.964695       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:43:49.522866       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:43:49.974018       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:44:19.529038       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:44:19.981804       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:44:49.535545       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:44:49.988768       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:45:19.541791       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:45:19.997018       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:45:49.550581       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:45:50.005585       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 20:26:16.925862       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 20:26:16.933452       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.202"]
	E0829 20:26:16.933527       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 20:26:16.974796       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 20:26:16.974838       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 20:26:16.974868       1 server_linux.go:169] "Using iptables Proxier"
	I0829 20:26:16.978543       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 20:26:16.978800       1 server.go:483] "Version info" version="v1.31.0"
	I0829 20:26:16.978828       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 20:26:16.980316       1 config.go:197] "Starting service config controller"
	I0829 20:26:16.980357       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 20:26:16.980382       1 config.go:104] "Starting endpoint slice config controller"
	I0829 20:26:16.980386       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 20:26:16.980902       1 config.go:326] "Starting node config controller"
	I0829 20:26:16.980937       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 20:26:17.080883       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 20:26:17.080992       1 shared_informer.go:320] Caches are synced for service config
	I0829 20:26:17.081055       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] <==
	I0829 20:26:13.344103       1 serving.go:386] Generated self-signed cert in-memory
	W0829 20:26:15.688140       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0829 20:26:15.688329       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0829 20:26:15.688360       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0829 20:26:15.688447       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0829 20:26:15.731810       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0829 20:26:15.732169       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 20:26:15.734472       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0829 20:26:15.734562       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 20:26:15.734627       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0829 20:26:15.734704       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0829 20:26:15.835017       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 20:45:10 embed-certs-388383 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 20:45:10 embed-certs-388383 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 20:45:10 embed-certs-388383 kubelet[917]: E0829 20:45:10.461739     917 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964310461475124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:10 embed-certs-388383 kubelet[917]: E0829 20:45:10.461790     917 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964310461475124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:19 embed-certs-388383 kubelet[917]: E0829 20:45:19.207331     917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mx5jh" podUID="99e21acd-b7b8-4e6f-8c75-c112206aed89"
	Aug 29 20:45:20 embed-certs-388383 kubelet[917]: E0829 20:45:20.463294     917 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964320462959802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:20 embed-certs-388383 kubelet[917]: E0829 20:45:20.463740     917 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964320462959802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:30 embed-certs-388383 kubelet[917]: E0829 20:45:30.466401     917 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964330465948280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:30 embed-certs-388383 kubelet[917]: E0829 20:45:30.466447     917 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964330465948280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:34 embed-certs-388383 kubelet[917]: E0829 20:45:34.209535     917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mx5jh" podUID="99e21acd-b7b8-4e6f-8c75-c112206aed89"
	Aug 29 20:45:40 embed-certs-388383 kubelet[917]: E0829 20:45:40.469284     917 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964340468663415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:40 embed-certs-388383 kubelet[917]: E0829 20:45:40.469344     917 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964340468663415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:45 embed-certs-388383 kubelet[917]: E0829 20:45:45.208049     917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mx5jh" podUID="99e21acd-b7b8-4e6f-8c75-c112206aed89"
	Aug 29 20:45:50 embed-certs-388383 kubelet[917]: E0829 20:45:50.471660     917 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964350470761226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:50 embed-certs-388383 kubelet[917]: E0829 20:45:50.472017     917 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964350470761226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:46:00 embed-certs-388383 kubelet[917]: E0829 20:46:00.207589     917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mx5jh" podUID="99e21acd-b7b8-4e6f-8c75-c112206aed89"
	Aug 29 20:46:00 embed-certs-388383 kubelet[917]: E0829 20:46:00.475372     917 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964360474510677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:46:00 embed-certs-388383 kubelet[917]: E0829 20:46:00.475413     917 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964360474510677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:46:10 embed-certs-388383 kubelet[917]: E0829 20:46:10.223156     917 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 20:46:10 embed-certs-388383 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 20:46:10 embed-certs-388383 kubelet[917]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 20:46:10 embed-certs-388383 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 20:46:10 embed-certs-388383 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 20:46:10 embed-certs-388383 kubelet[917]: E0829 20:46:10.477673     917 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964370476692585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:46:10 embed-certs-388383 kubelet[917]: E0829 20:46:10.477722     917 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964370476692585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] <==
	I0829 20:26:16.786895       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0829 20:26:46.801383       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] <==
	I0829 20:26:48.730744       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 20:26:48.748700       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 20:26:48.748852       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 20:27:06.157398       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 20:27:06.157737       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-388383_05829032-f530-4263-8f6f-0a3f3f283ef4!
	I0829 20:27:06.161462       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"734cb345-acaf-4d89-995f-0550044e7554", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-388383_05829032-f530-4263-8f6f-0a3f3f283ef4 became leader
	I0829 20:27:06.258390       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-388383_05829032-f530-4263-8f6f-0a3f3f283ef4!
	

-- /stdout --
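The two storage-provisioner logs above show the usual crash-and-retry pattern rather than the root failure: the first container (585208cde484…) dies with a fatal i/o timeout because the apiserver at 10.96.0.1:443 is not reachable yet, and its replacement (668d380506…) comes up two seconds later and acquires the k8s.io-minikube-hostpath lease at 20:27:06. (The recurring ip6tables-canary and eviction-manager "missing image stats" messages in the kubelet log look like background noise on this guest image, not the cause of the failure.) A hedged way to re-check both container logs by hand, assuming the standard minikube pod name storage-provisioner in kube-system and that CRI-O still retains the dead container:

    # log of the currently running storage-provisioner container
    kubectl --context embed-certs-388383 -n kube-system logs storage-provisioner
    # log of the previously crashed container, if still retained
    kubectl --context embed-certs-388383 -n kube-system logs storage-provisioner --previous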
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-388383 -n embed-certs-388383
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-388383 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-mx5jh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-388383 describe pod metrics-server-6867b74b74-mx5jh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-388383 describe pod metrics-server-6867b74b74-mx5jh: exit status 1 (63.166277ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-mx5jh" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-388383 describe pod metrics-server-6867b74b74-mx5jh: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (386.88s)
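For this failure the only non-running pod is metrics-server-6867b74b74-mx5jh, and the kubelet log above gives the reason: a perpetual ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, a placeholder registry the test itself appears to configure, so the pod can never start. A hedged sketch for confirming which image the deployment is actually asking for, assuming the standard metrics-server deployment name in kube-system and that the profile is still running:

    # image currently configured on the metrics-server deployment
    kubectl --context embed-certs-388383 -n kube-system get deploy metrics-server \
        -o jsonpath='{.spec.template.spec.containers[0].image}'
    # recent pull/start failures recorded as events
    kubectl --context embed-certs-388383 -n kube-system get events --field-selector reason=Failed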

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-29 20:50:00.776709366 +0000 UTC m=+6864.947170610
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-145096 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-145096 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (63.859027ms)

** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-145096 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
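Here the kubernetes-dashboard namespace does not exist at all, so the dashboard addon was evidently never (re)deployed after the stop/start cycle, which is why both the pod wait and the image check come up empty. A hedged sketch of how one might verify the addon state on the same profile (assumes the profile is still up and the standard addon object names):

    # which addons minikube believes are enabled for this profile
    minikube -p default-k8s-diff-port-145096 addons list
    # does the dashboard namespace exist at all?
    kubectl --context default-k8s-diff-port-145096 get ns kubernetes-dashboard
    # image on the scraper deployment, if it was ever created
    kubectl --context default-k8s-diff-port-145096 -n kubernetes-dashboard \
        get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[0].image}'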
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-145096 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-145096 logs -n 25: (1.205802159s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-801672 sudo                        | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672                             | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672 sudo                        | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672                             | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672                             | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672 sudo                        | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672                             | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672 sudo                        | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672 sudo                        | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672 sudo                        | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672                             | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672 sudo cat                    | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672 sudo cat                    | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672 sudo                        | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672 sudo                        | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672                             | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672 sudo cat                    | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672                             | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672 sudo                        | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672 sudo                        | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672 sudo                        | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672 sudo                        | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-801672 sudo                        | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-801672                             | custom-flannel-801672 | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC | 29 Aug 24 20:49 UTC |
	| start   | -p bridge-801672 --memory=3072                       | bridge-801672         | jenkins | v1.33.1 | 29 Aug 24 20:49 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 20:49:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 20:49:45.001121   82036 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:49:45.001557   82036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:49:45.001615   82036 out.go:358] Setting ErrFile to fd 2...
	I0829 20:49:45.001633   82036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:49:45.001975   82036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:49:45.002906   82036 out.go:352] Setting JSON to false
	I0829 20:49:45.004471   82036 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9132,"bootTime":1724955453,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 20:49:45.004591   82036 start.go:139] virtualization: kvm guest
	I0829 20:49:44.966323   80107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:49:44.980665   80107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0829 20:49:44.980873   80107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36705
	I0829 20:49:44.981141   80107 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:49:44.981595   80107 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:49:44.981776   80107 main.go:141] libmachine: Using API Version  1
	I0829 20:49:44.981799   80107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:49:44.982117   80107 main.go:141] libmachine: Using API Version  1
	I0829 20:49:44.982137   80107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:49:44.982196   80107 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:49:44.982400   80107 main.go:141] libmachine: (flannel-801672) Calling .GetState
	I0829 20:49:44.982432   80107 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:49:44.982974   80107 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:49:44.983287   80107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:49:44.986495   80107 addons.go:234] Setting addon default-storageclass=true in "flannel-801672"
	I0829 20:49:44.986565   80107 host.go:66] Checking if "flannel-801672" exists ...
	I0829 20:49:44.986950   80107 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:49:44.986998   80107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:49:45.002222   80107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42251
	I0829 20:49:45.002699   80107 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:49:45.003217   80107 main.go:141] libmachine: Using API Version  1
	I0829 20:49:45.003244   80107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:49:45.003630   80107 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:49:45.003823   80107 main.go:141] libmachine: (flannel-801672) Calling .GetState
	I0829 20:49:45.005637   80107 main.go:141] libmachine: (flannel-801672) Calling .DriverName
	I0829 20:49:45.007085   82036 out.go:177] * [bridge-801672] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 20:49:45.008067   80107 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:49:45.009164   82036 notify.go:220] Checking for updates...
	I0829 20:49:45.009844   82036 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 20:49:45.011599   82036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 20:49:45.012972   82036 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:49:45.014718   82036 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:49:45.016588   82036 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 20:49:45.017950   82036 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 20:49:45.019921   82036 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:49:45.020232   82036 config.go:182] Loaded profile config "enable-default-cni-801672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:49:45.020358   82036 config.go:182] Loaded profile config "flannel-801672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:49:45.020652   82036 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 20:49:45.063336   82036 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 20:49:45.064975   82036 start.go:297] selected driver: kvm2
	I0829 20:49:45.064989   82036 start.go:901] validating driver "kvm2" against <nil>
	I0829 20:49:45.065003   82036 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 20:49:45.065728   82036 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:49:45.065811   82036 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 20:49:45.082333   82036 install.go:137] /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0829 20:49:45.082388   82036 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 20:49:45.082701   82036 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:49:45.082776   82036 cni.go:84] Creating CNI manager for "bridge"
	I0829 20:49:45.082794   82036 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 20:49:45.082862   82036 start.go:340] cluster config:
	{Name:bridge-801672 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-801672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:49:45.082997   82036 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:49:45.085659   82036 out.go:177] * Starting "bridge-801672" primary control-plane node in "bridge-801672" cluster
	I0829 20:49:45.086688   82036 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:49:45.086721   82036 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 20:49:45.086733   82036 cache.go:56] Caching tarball of preloaded images
	I0829 20:49:45.086814   82036 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 20:49:45.086828   82036 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 20:49:45.086921   82036 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/bridge-801672/config.json ...
	I0829 20:49:45.086945   82036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/bridge-801672/config.json: {Name:mk4936626a2051d56f1938eb62b996323b51a4e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:49:45.087092   82036 start.go:360] acquireMachinesLock for bridge-801672: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:49:45.087125   82036 start.go:364] duration metric: took 18.248µs to acquireMachinesLock for "bridge-801672"
	I0829 20:49:45.087142   82036 start.go:93] Provisioning new machine with config: &{Name:bridge-801672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-801672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:49:45.087232   82036 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 20:49:45.009811   80107 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:49:45.009826   80107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:49:45.009842   80107 main.go:141] libmachine: (flannel-801672) Calling .GetSSHHostname
	I0829 20:49:45.015602   80107 main.go:141] libmachine: (flannel-801672) DBG | domain flannel-801672 has defined MAC address 52:54:00:3a:3f:6f in network mk-flannel-801672
	I0829 20:49:45.016070   80107 main.go:141] libmachine: (flannel-801672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:3f:6f", ip: ""} in network mk-flannel-801672: {Iface:virbr4 ExpiryTime:2024-08-29 21:49:13 +0000 UTC Type:0 Mac:52:54:00:3a:3f:6f Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:flannel-801672 Clientid:01:52:54:00:3a:3f:6f}
	I0829 20:49:45.016110   80107 main.go:141] libmachine: (flannel-801672) DBG | domain flannel-801672 has defined IP address 192.168.61.58 and MAC address 52:54:00:3a:3f:6f in network mk-flannel-801672
	I0829 20:49:45.016259   80107 main.go:141] libmachine: (flannel-801672) Calling .GetSSHPort
	I0829 20:49:45.016622   80107 main.go:141] libmachine: (flannel-801672) Calling .GetSSHKeyPath
	I0829 20:49:45.016824   80107 main.go:141] libmachine: (flannel-801672) Calling .GetSSHUsername
	I0829 20:49:45.016982   80107 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/flannel-801672/id_rsa Username:docker}
	I0829 20:49:45.021180   80107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41857
	I0829 20:49:45.021643   80107 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:49:45.022150   80107 main.go:141] libmachine: Using API Version  1
	I0829 20:49:45.022173   80107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:49:45.022485   80107 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:49:45.022916   80107 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:49:45.022953   80107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:49:45.039410   80107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I0829 20:49:45.039956   80107 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:49:45.040537   80107 main.go:141] libmachine: Using API Version  1
	I0829 20:49:45.040563   80107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:49:45.040893   80107 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:49:45.041086   80107 main.go:141] libmachine: (flannel-801672) Calling .GetState
	I0829 20:49:45.044173   80107 main.go:141] libmachine: (flannel-801672) Calling .DriverName
	I0829 20:49:45.044387   80107 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:49:45.044402   80107 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:49:45.044419   80107 main.go:141] libmachine: (flannel-801672) Calling .GetSSHHostname
	I0829 20:49:45.048292   80107 main.go:141] libmachine: (flannel-801672) DBG | domain flannel-801672 has defined MAC address 52:54:00:3a:3f:6f in network mk-flannel-801672
	I0829 20:49:45.048822   80107 main.go:141] libmachine: (flannel-801672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:3f:6f", ip: ""} in network mk-flannel-801672: {Iface:virbr4 ExpiryTime:2024-08-29 21:49:13 +0000 UTC Type:0 Mac:52:54:00:3a:3f:6f Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:flannel-801672 Clientid:01:52:54:00:3a:3f:6f}
	I0829 20:49:45.048853   80107 main.go:141] libmachine: (flannel-801672) DBG | domain flannel-801672 has defined IP address 192.168.61.58 and MAC address 52:54:00:3a:3f:6f in network mk-flannel-801672
	I0829 20:49:45.049403   80107 main.go:141] libmachine: (flannel-801672) Calling .GetSSHPort
	I0829 20:49:45.049591   80107 main.go:141] libmachine: (flannel-801672) Calling .GetSSHKeyPath
	I0829 20:49:45.049773   80107 main.go:141] libmachine: (flannel-801672) Calling .GetSSHUsername
	I0829 20:49:45.049910   80107 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/flannel-801672/id_rsa Username:docker}
	I0829 20:49:45.335283   80107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:49:45.351276   80107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:49:45.351276   80107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 20:49:45.367704   80107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:49:46.004046   80107 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0829 20:49:46.004112   80107 main.go:141] libmachine: Making call to close driver server
	I0829 20:49:46.004134   80107 main.go:141] libmachine: (flannel-801672) Calling .Close
	I0829 20:49:46.004586   80107 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:49:46.004611   80107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:49:46.004596   80107 main.go:141] libmachine: (flannel-801672) DBG | Closing plugin on server side
	I0829 20:49:46.004620   80107 main.go:141] libmachine: Making call to close driver server
	I0829 20:49:46.004628   80107 main.go:141] libmachine: (flannel-801672) Calling .Close
	I0829 20:49:46.004858   80107 main.go:141] libmachine: (flannel-801672) DBG | Closing plugin on server side
	I0829 20:49:46.004892   80107 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:49:46.004904   80107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:49:46.005316   80107 node_ready.go:35] waiting up to 15m0s for node "flannel-801672" to be "Ready" ...
	I0829 20:49:46.007751   80107 main.go:141] libmachine: Making call to close driver server
	I0829 20:49:46.007766   80107 main.go:141] libmachine: (flannel-801672) Calling .Close
	I0829 20:49:46.008019   80107 main.go:141] libmachine: (flannel-801672) DBG | Closing plugin on server side
	I0829 20:49:46.008058   80107 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:49:46.008067   80107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:49:46.008085   80107 main.go:141] libmachine: Making call to close driver server
	I0829 20:49:46.008103   80107 main.go:141] libmachine: (flannel-801672) Calling .Close
	I0829 20:49:46.009871   80107 main.go:141] libmachine: (flannel-801672) DBG | Closing plugin on server side
	I0829 20:49:46.009878   80107 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:49:46.009902   80107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:49:46.036497   80107 main.go:141] libmachine: Making call to close driver server
	I0829 20:49:46.036531   80107 main.go:141] libmachine: (flannel-801672) Calling .Close
	I0829 20:49:46.036843   80107 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:49:46.036859   80107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:49:46.038573   80107 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0829 20:49:46.039972   80107 addons.go:510] duration metric: took 1.077868159s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0829 20:49:46.509881   80107 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-801672" context rescaled to 1 replicas
	I0829 20:49:45.088827   82036 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 20:49:45.089000   82036 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:49:45.089050   82036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:49:45.105347   82036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0829 20:49:45.105830   82036 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:49:45.106409   82036 main.go:141] libmachine: Using API Version  1
	I0829 20:49:45.106430   82036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:49:45.106796   82036 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:49:45.107047   82036 main.go:141] libmachine: (bridge-801672) Calling .GetMachineName
	I0829 20:49:45.107213   82036 main.go:141] libmachine: (bridge-801672) Calling .DriverName
	I0829 20:49:45.107406   82036 start.go:159] libmachine.API.Create for "bridge-801672" (driver="kvm2")
	I0829 20:49:45.107436   82036 client.go:168] LocalClient.Create starting
	I0829 20:49:45.107470   82036 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem
	I0829 20:49:45.107525   82036 main.go:141] libmachine: Decoding PEM data...
	I0829 20:49:45.107555   82036 main.go:141] libmachine: Parsing certificate...
	I0829 20:49:45.107621   82036 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem
	I0829 20:49:45.107647   82036 main.go:141] libmachine: Decoding PEM data...
	I0829 20:49:45.107666   82036 main.go:141] libmachine: Parsing certificate...
	I0829 20:49:45.107692   82036 main.go:141] libmachine: Running pre-create checks...
	I0829 20:49:45.107703   82036 main.go:141] libmachine: (bridge-801672) Calling .PreCreateCheck
	I0829 20:49:45.108096   82036 main.go:141] libmachine: (bridge-801672) Calling .GetConfigRaw
	I0829 20:49:45.108555   82036 main.go:141] libmachine: Creating machine...
	I0829 20:49:45.108572   82036 main.go:141] libmachine: (bridge-801672) Calling .Create
	I0829 20:49:45.108716   82036 main.go:141] libmachine: (bridge-801672) Creating KVM machine...
	I0829 20:49:45.109990   82036 main.go:141] libmachine: (bridge-801672) DBG | found existing default KVM network
	I0829 20:49:45.111477   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:45.111321   82089 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012dfd0}
	I0829 20:49:45.111519   82036 main.go:141] libmachine: (bridge-801672) DBG | created network xml: 
	I0829 20:49:45.111542   82036 main.go:141] libmachine: (bridge-801672) DBG | <network>
	I0829 20:49:45.111555   82036 main.go:141] libmachine: (bridge-801672) DBG |   <name>mk-bridge-801672</name>
	I0829 20:49:45.111572   82036 main.go:141] libmachine: (bridge-801672) DBG |   <dns enable='no'/>
	I0829 20:49:45.111581   82036 main.go:141] libmachine: (bridge-801672) DBG |   
	I0829 20:49:45.111590   82036 main.go:141] libmachine: (bridge-801672) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0829 20:49:45.111599   82036 main.go:141] libmachine: (bridge-801672) DBG |     <dhcp>
	I0829 20:49:45.111612   82036 main.go:141] libmachine: (bridge-801672) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0829 20:49:45.111623   82036 main.go:141] libmachine: (bridge-801672) DBG |     </dhcp>
	I0829 20:49:45.111634   82036 main.go:141] libmachine: (bridge-801672) DBG |   </ip>
	I0829 20:49:45.111647   82036 main.go:141] libmachine: (bridge-801672) DBG |   
	I0829 20:49:45.111655   82036 main.go:141] libmachine: (bridge-801672) DBG | </network>
	I0829 20:49:45.111666   82036 main.go:141] libmachine: (bridge-801672) DBG | 
	I0829 20:49:45.117109   82036 main.go:141] libmachine: (bridge-801672) DBG | trying to create private KVM network mk-bridge-801672 192.168.39.0/24...
	I0829 20:49:45.199165   82036 main.go:141] libmachine: (bridge-801672) Setting up store path in /home/jenkins/minikube-integration/19530-11185/.minikube/machines/bridge-801672 ...
	I0829 20:49:45.199197   82036 main.go:141] libmachine: (bridge-801672) Building disk image from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0829 20:49:45.199223   82036 main.go:141] libmachine: (bridge-801672) DBG | private KVM network mk-bridge-801672 192.168.39.0/24 created
	I0829 20:49:45.199240   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:45.199111   82089 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:49:45.199269   82036 main.go:141] libmachine: (bridge-801672) Downloading /home/jenkins/minikube-integration/19530-11185/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0829 20:49:45.460790   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:45.460656   82089 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/bridge-801672/id_rsa...
	I0829 20:49:45.583678   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:45.583538   82089 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/bridge-801672/bridge-801672.rawdisk...
	I0829 20:49:45.583729   82036 main.go:141] libmachine: (bridge-801672) DBG | Writing magic tar header
	I0829 20:49:45.583745   82036 main.go:141] libmachine: (bridge-801672) DBG | Writing SSH key tar header
	I0829 20:49:45.583758   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:45.583660   82089 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/bridge-801672 ...
	I0829 20:49:45.583773   82036 main.go:141] libmachine: (bridge-801672) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/bridge-801672
	I0829 20:49:45.583793   82036 main.go:141] libmachine: (bridge-801672) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube/machines
	I0829 20:49:45.583809   82036 main.go:141] libmachine: (bridge-801672) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines/bridge-801672 (perms=drwx------)
	I0829 20:49:45.583823   82036 main.go:141] libmachine: (bridge-801672) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:49:45.583844   82036 main.go:141] libmachine: (bridge-801672) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19530-11185
	I0829 20:49:45.583856   82036 main.go:141] libmachine: (bridge-801672) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 20:49:45.583871   82036 main.go:141] libmachine: (bridge-801672) DBG | Checking permissions on dir: /home/jenkins
	I0829 20:49:45.583882   82036 main.go:141] libmachine: (bridge-801672) DBG | Checking permissions on dir: /home
	I0829 20:49:45.583892   82036 main.go:141] libmachine: (bridge-801672) DBG | Skipping /home - not owner
	I0829 20:49:45.583906   82036 main.go:141] libmachine: (bridge-801672) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube/machines (perms=drwxr-xr-x)
	I0829 20:49:45.583921   82036 main.go:141] libmachine: (bridge-801672) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185/.minikube (perms=drwxr-xr-x)
	I0829 20:49:45.583934   82036 main.go:141] libmachine: (bridge-801672) Setting executable bit set on /home/jenkins/minikube-integration/19530-11185 (perms=drwxrwxr-x)
	I0829 20:49:45.583947   82036 main.go:141] libmachine: (bridge-801672) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 20:49:45.583955   82036 main.go:141] libmachine: (bridge-801672) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 20:49:45.583966   82036 main.go:141] libmachine: (bridge-801672) Creating domain...
	I0829 20:49:45.585156   82036 main.go:141] libmachine: (bridge-801672) define libvirt domain using xml: 
	I0829 20:49:45.585185   82036 main.go:141] libmachine: (bridge-801672) <domain type='kvm'>
	I0829 20:49:45.585197   82036 main.go:141] libmachine: (bridge-801672)   <name>bridge-801672</name>
	I0829 20:49:45.585209   82036 main.go:141] libmachine: (bridge-801672)   <memory unit='MiB'>3072</memory>
	I0829 20:49:45.585219   82036 main.go:141] libmachine: (bridge-801672)   <vcpu>2</vcpu>
	I0829 20:49:45.585226   82036 main.go:141] libmachine: (bridge-801672)   <features>
	I0829 20:49:45.585232   82036 main.go:141] libmachine: (bridge-801672)     <acpi/>
	I0829 20:49:45.585237   82036 main.go:141] libmachine: (bridge-801672)     <apic/>
	I0829 20:49:45.585242   82036 main.go:141] libmachine: (bridge-801672)     <pae/>
	I0829 20:49:45.585252   82036 main.go:141] libmachine: (bridge-801672)     
	I0829 20:49:45.585261   82036 main.go:141] libmachine: (bridge-801672)   </features>
	I0829 20:49:45.585266   82036 main.go:141] libmachine: (bridge-801672)   <cpu mode='host-passthrough'>
	I0829 20:49:45.585271   82036 main.go:141] libmachine: (bridge-801672)   
	I0829 20:49:45.585279   82036 main.go:141] libmachine: (bridge-801672)   </cpu>
	I0829 20:49:45.585284   82036 main.go:141] libmachine: (bridge-801672)   <os>
	I0829 20:49:45.585292   82036 main.go:141] libmachine: (bridge-801672)     <type>hvm</type>
	I0829 20:49:45.585323   82036 main.go:141] libmachine: (bridge-801672)     <boot dev='cdrom'/>
	I0829 20:49:45.585349   82036 main.go:141] libmachine: (bridge-801672)     <boot dev='hd'/>
	I0829 20:49:45.585364   82036 main.go:141] libmachine: (bridge-801672)     <bootmenu enable='no'/>
	I0829 20:49:45.585378   82036 main.go:141] libmachine: (bridge-801672)   </os>
	I0829 20:49:45.585390   82036 main.go:141] libmachine: (bridge-801672)   <devices>
	I0829 20:49:45.585401   82036 main.go:141] libmachine: (bridge-801672)     <disk type='file' device='cdrom'>
	I0829 20:49:45.585417   82036 main.go:141] libmachine: (bridge-801672)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/bridge-801672/boot2docker.iso'/>
	I0829 20:49:45.585431   82036 main.go:141] libmachine: (bridge-801672)       <target dev='hdc' bus='scsi'/>
	I0829 20:49:45.585451   82036 main.go:141] libmachine: (bridge-801672)       <readonly/>
	I0829 20:49:45.585470   82036 main.go:141] libmachine: (bridge-801672)     </disk>
	I0829 20:49:45.585485   82036 main.go:141] libmachine: (bridge-801672)     <disk type='file' device='disk'>
	I0829 20:49:45.585502   82036 main.go:141] libmachine: (bridge-801672)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 20:49:45.585520   82036 main.go:141] libmachine: (bridge-801672)       <source file='/home/jenkins/minikube-integration/19530-11185/.minikube/machines/bridge-801672/bridge-801672.rawdisk'/>
	I0829 20:49:45.585536   82036 main.go:141] libmachine: (bridge-801672)       <target dev='hda' bus='virtio'/>
	I0829 20:49:45.585548   82036 main.go:141] libmachine: (bridge-801672)     </disk>
	I0829 20:49:45.585559   82036 main.go:141] libmachine: (bridge-801672)     <interface type='network'>
	I0829 20:49:45.585568   82036 main.go:141] libmachine: (bridge-801672)       <source network='mk-bridge-801672'/>
	I0829 20:49:45.585576   82036 main.go:141] libmachine: (bridge-801672)       <model type='virtio'/>
	I0829 20:49:45.585587   82036 main.go:141] libmachine: (bridge-801672)     </interface>
	I0829 20:49:45.585599   82036 main.go:141] libmachine: (bridge-801672)     <interface type='network'>
	I0829 20:49:45.585610   82036 main.go:141] libmachine: (bridge-801672)       <source network='default'/>
	I0829 20:49:45.585620   82036 main.go:141] libmachine: (bridge-801672)       <model type='virtio'/>
	I0829 20:49:45.585631   82036 main.go:141] libmachine: (bridge-801672)     </interface>
	I0829 20:49:45.585644   82036 main.go:141] libmachine: (bridge-801672)     <serial type='pty'>
	I0829 20:49:45.585659   82036 main.go:141] libmachine: (bridge-801672)       <target port='0'/>
	I0829 20:49:45.585680   82036 main.go:141] libmachine: (bridge-801672)     </serial>
	I0829 20:49:45.585691   82036 main.go:141] libmachine: (bridge-801672)     <console type='pty'>
	I0829 20:49:45.585708   82036 main.go:141] libmachine: (bridge-801672)       <target type='serial' port='0'/>
	I0829 20:49:45.585736   82036 main.go:141] libmachine: (bridge-801672)     </console>
	I0829 20:49:45.585761   82036 main.go:141] libmachine: (bridge-801672)     <rng model='virtio'>
	I0829 20:49:45.585790   82036 main.go:141] libmachine: (bridge-801672)       <backend model='random'>/dev/random</backend>
	I0829 20:49:45.585810   82036 main.go:141] libmachine: (bridge-801672)     </rng>
	I0829 20:49:45.585822   82036 main.go:141] libmachine: (bridge-801672)     
	I0829 20:49:45.585842   82036 main.go:141] libmachine: (bridge-801672)     
	I0829 20:49:45.585860   82036 main.go:141] libmachine: (bridge-801672)   </devices>
	I0829 20:49:45.585874   82036 main.go:141] libmachine: (bridge-801672) </domain>
	I0829 20:49:45.585903   82036 main.go:141] libmachine: (bridge-801672) 
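The XML block logged above is the complete libvirt domain definition the kvm2 driver hands to libvirtd: 3072 MiB of RAM, 2 vCPUs, a host-passthrough CPU, boot order cdrom-then-hd with the boot2docker ISO attached as a read-only SCSI cdrom, the machine's raw disk on virtio, one virtio NIC on the private mk-bridge-801672 network plus one on libvirt's default network, a pty serial console, and a virtio RNG fed from /dev/random. As a rough sketch only (not minikube's actual driver code, which goes through libmachine), defining and booting a domain from such XML with the libvirt.org/go/libvirt bindings looks like this:

    package main

    import (
        libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStart persists a domain from its XML definition and boots it,
    // matching the "define libvirt domain using xml" / "Creating domain..."
    // steps in the log above.
    func defineAndStart(xml string) error {
        conn, err := libvirt.NewConnect("qemu:///system") // local system libvirtd
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(xml) // define (persist) the domain
        if err != nil {
            return err
        }
        defer dom.Free()

        return dom.Create() // start the VM
    }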
	I0829 20:49:45.589846   82036 main.go:141] libmachine: (bridge-801672) DBG | domain bridge-801672 has defined MAC address 52:54:00:07:dd:3b in network default
	I0829 20:49:45.590624   82036 main.go:141] libmachine: (bridge-801672) Ensuring networks are active...
	I0829 20:49:45.590646   82036 main.go:141] libmachine: (bridge-801672) DBG | domain bridge-801672 has defined MAC address 52:54:00:01:31:c2 in network mk-bridge-801672
	I0829 20:49:45.591354   82036 main.go:141] libmachine: (bridge-801672) Ensuring network default is active
	I0829 20:49:45.591757   82036 main.go:141] libmachine: (bridge-801672) Ensuring network mk-bridge-801672 is active
	I0829 20:49:45.592438   82036 main.go:141] libmachine: (bridge-801672) Getting domain xml...
	I0829 20:49:45.593220   82036 main.go:141] libmachine: (bridge-801672) Creating domain...
	I0829 20:49:46.879908   82036 main.go:141] libmachine: (bridge-801672) Waiting to get IP...
	I0829 20:49:46.880770   82036 main.go:141] libmachine: (bridge-801672) DBG | domain bridge-801672 has defined MAC address 52:54:00:01:31:c2 in network mk-bridge-801672
	I0829 20:49:46.881264   82036 main.go:141] libmachine: (bridge-801672) DBG | unable to find current IP address of domain bridge-801672 in network mk-bridge-801672
	I0829 20:49:46.881292   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:46.881224   82089 retry.go:31] will retry after 261.813601ms: waiting for machine to come up
	I0829 20:49:47.144747   82036 main.go:141] libmachine: (bridge-801672) DBG | domain bridge-801672 has defined MAC address 52:54:00:01:31:c2 in network mk-bridge-801672
	I0829 20:49:47.145282   82036 main.go:141] libmachine: (bridge-801672) DBG | unable to find current IP address of domain bridge-801672 in network mk-bridge-801672
	I0829 20:49:47.145311   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:47.145236   82089 retry.go:31] will retry after 242.826358ms: waiting for machine to come up
	I0829 20:49:47.389742   82036 main.go:141] libmachine: (bridge-801672) DBG | domain bridge-801672 has defined MAC address 52:54:00:01:31:c2 in network mk-bridge-801672
	I0829 20:49:47.390277   82036 main.go:141] libmachine: (bridge-801672) DBG | unable to find current IP address of domain bridge-801672 in network mk-bridge-801672
	I0829 20:49:47.390326   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:47.390246   82089 retry.go:31] will retry after 385.602371ms: waiting for machine to come up
	I0829 20:49:47.779235   82036 main.go:141] libmachine: (bridge-801672) DBG | domain bridge-801672 has defined MAC address 52:54:00:01:31:c2 in network mk-bridge-801672
	I0829 20:49:47.779768   82036 main.go:141] libmachine: (bridge-801672) DBG | unable to find current IP address of domain bridge-801672 in network mk-bridge-801672
	I0829 20:49:47.779793   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:47.779738   82089 retry.go:31] will retry after 506.094904ms: waiting for machine to come up
	I0829 20:49:48.287226   82036 main.go:141] libmachine: (bridge-801672) DBG | domain bridge-801672 has defined MAC address 52:54:00:01:31:c2 in network mk-bridge-801672
	I0829 20:49:48.287707   82036 main.go:141] libmachine: (bridge-801672) DBG | unable to find current IP address of domain bridge-801672 in network mk-bridge-801672
	I0829 20:49:48.287737   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:48.287661   82089 retry.go:31] will retry after 493.308047ms: waiting for machine to come up
	I0829 20:49:48.782141   82036 main.go:141] libmachine: (bridge-801672) DBG | domain bridge-801672 has defined MAC address 52:54:00:01:31:c2 in network mk-bridge-801672
	I0829 20:49:48.782558   82036 main.go:141] libmachine: (bridge-801672) DBG | unable to find current IP address of domain bridge-801672 in network mk-bridge-801672
	I0829 20:49:48.782591   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:48.782501   82089 retry.go:31] will retry after 744.538946ms: waiting for machine to come up
	I0829 20:49:49.528525   82036 main.go:141] libmachine: (bridge-801672) DBG | domain bridge-801672 has defined MAC address 52:54:00:01:31:c2 in network mk-bridge-801672
	I0829 20:49:49.529123   82036 main.go:141] libmachine: (bridge-801672) DBG | unable to find current IP address of domain bridge-801672 in network mk-bridge-801672
	I0829 20:49:49.529275   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:49.529193   82089 retry.go:31] will retry after 844.407361ms: waiting for machine to come up
	I0829 20:49:48.008696   80107 node_ready.go:53] node "flannel-801672" has status "Ready":"False"
	I0829 20:49:50.009640   80107 node_ready.go:53] node "flannel-801672" has status "Ready":"False"
	I0829 20:49:50.374715   82036 main.go:141] libmachine: (bridge-801672) DBG | domain bridge-801672 has defined MAC address 52:54:00:01:31:c2 in network mk-bridge-801672
	I0829 20:49:50.375220   82036 main.go:141] libmachine: (bridge-801672) DBG | unable to find current IP address of domain bridge-801672 in network mk-bridge-801672
	I0829 20:49:50.375251   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:50.375175   82089 retry.go:31] will retry after 1.462078443s: waiting for machine to come up
	I0829 20:49:51.839672   82036 main.go:141] libmachine: (bridge-801672) DBG | domain bridge-801672 has defined MAC address 52:54:00:01:31:c2 in network mk-bridge-801672
	I0829 20:49:51.840217   82036 main.go:141] libmachine: (bridge-801672) DBG | unable to find current IP address of domain bridge-801672 in network mk-bridge-801672
	I0829 20:49:51.840240   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:51.840182   82089 retry.go:31] will retry after 1.174851547s: waiting for machine to come up
	I0829 20:49:53.017003   82036 main.go:141] libmachine: (bridge-801672) DBG | domain bridge-801672 has defined MAC address 52:54:00:01:31:c2 in network mk-bridge-801672
	I0829 20:49:53.017625   82036 main.go:141] libmachine: (bridge-801672) DBG | unable to find current IP address of domain bridge-801672 in network mk-bridge-801672
	I0829 20:49:53.017720   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:53.017670   82089 retry.go:31] will retry after 1.65861744s: waiting for machine to come up
	I0829 20:49:54.678507   82036 main.go:141] libmachine: (bridge-801672) DBG | domain bridge-801672 has defined MAC address 52:54:00:01:31:c2 in network mk-bridge-801672
	I0829 20:49:54.679051   82036 main.go:141] libmachine: (bridge-801672) DBG | unable to find current IP address of domain bridge-801672 in network mk-bridge-801672
	I0829 20:49:54.679095   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:54.679018   82089 retry.go:31] will retry after 2.763082498s: waiting for machine to come up
	I0829 20:49:52.010170   80107 node_ready.go:49] node "flannel-801672" has status "Ready":"True"
	I0829 20:49:52.010205   80107 node_ready.go:38] duration metric: took 6.004863534s for node "flannel-801672" to be "Ready" ...
	I0829 20:49:52.010218   80107 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:49:52.017356   80107 pod_ready.go:79] waiting up to 15m0s for pod "coredns-6f6b679f8f-k542k" in "kube-system" namespace to be "Ready" ...
	I0829 20:49:54.023979   80107 pod_ready.go:103] pod "coredns-6f6b679f8f-k542k" in "kube-system" namespace has status "Ready":"False"
	I0829 20:49:56.024885   80107 pod_ready.go:103] pod "coredns-6f6b679f8f-k542k" in "kube-system" namespace has status "Ready":"False"
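The interleaved node_ready/pod_ready lines belong to a different test running in parallel (the flannel cluster), which polls the API server first until the node's Ready condition turns True and then, pod by pod, until each system-critical pod is Ready. A condensed sketch of the per-pod check with client-go (podReady is an illustrative helper name, not minikube's actual pod_ready.go API):

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the pod's Ready condition is True; callers poll
    // it on an interval until it succeeds or the overall timeout expires.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }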
	I0829 20:49:57.443755   82036 main.go:141] libmachine: (bridge-801672) DBG | domain bridge-801672 has defined MAC address 52:54:00:01:31:c2 in network mk-bridge-801672
	I0829 20:49:57.444229   82036 main.go:141] libmachine: (bridge-801672) DBG | unable to find current IP address of domain bridge-801672 in network mk-bridge-801672
	I0829 20:49:57.444256   82036 main.go:141] libmachine: (bridge-801672) DBG | I0829 20:49:57.444202   82089 retry.go:31] will retry after 3.258040577s: waiting for machine to come up
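Each "will retry after ..." line above is one iteration of the driver's wait-for-IP loop: look up the domain's MAC address in the target network's DHCP leases, and if no lease exists yet, sleep for a randomized, growing interval before trying again. A minimal sketch of that loop, again assuming the libvirt.org/go/libvirt bindings (waitForIP is an illustrative name, not minikube's retry.go helper):

    import (
        "fmt"
        "math/rand"
        "strings"
        "time"

        libvirt "libvirt.org/go/libvirt"
    )

    // waitForIP polls a libvirt network's DHCP leases until one matches mac.
    func waitForIP(conn *libvirt.Connect, network, mac string, timeout time.Duration) (string, error) {
        net, err := conn.LookupNetworkByName(network)
        if err != nil {
            return "", err
        }
        defer net.Free()

        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            leases, err := net.GetDHCPLeases()
            if err != nil {
                return "", err
            }
            for _, l := range leases {
                if strings.EqualFold(l.Mac, mac) {
                    return l.IPaddr, nil // machine is up and has an address
                }
            }
            // Back off with jitter, mirroring the growing retry intervals above.
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
            if delay < 3*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("no DHCP lease for %s on network %s after %s", mac, network, timeout)
    }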
	
	
	==> CRI-O <==
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.488017521Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d688e3e-5999-45e6-a221-1eb4ba4961ce name=/runtime.v1.RuntimeService/Version
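From here the report dumps CRI-O's own debug log on the default-k8s-diff-port-145096 node. Every Request/Response pair is a single CRI (Container Runtime Interface) gRPC call answered by CRI-O 1.29.1, and the id= field correlates a response with its request. The same Version call can be issued by hand; a sketch assuming CRI-O's usual socket path (/var/run/crio/crio.sock) and the k8s.io/cri-api client stubs:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial CRI-O's CRI endpoint; adjust the path if your crio.conf differs.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // The same RPC as the /runtime.v1.RuntimeService/Version entries above.
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(v.RuntimeName, v.RuntimeVersion) // e.g. "cri-o 1.29.1"
    }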
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.488870191Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc32bb57-c4c8-406d-ae72-438a13b2c99d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.489321660Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964601489297663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc32bb57-c4c8-406d-ae72-438a13b2c99d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.489879514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ffb84306-1d93-4235-a6c9-225b1cdeb70c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.489970184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ffb84306-1d93-4235-a6c9-225b1cdeb70c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.490187676Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e74650e815d0ebb9e571fffeb67d5daf0eecc3b9277d002bf215d8c23e746ce1,PodSandboxId:6314bd63c8faffd7a2132769f0b5566b225309eac204c502496a1d9009058d71,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963508150201334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81531989-d045-44fb-b1a1-0817af27c804,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e00316ba74bfecb01e600a5b225e97d007f7e808c279766683e5ffc0d89b5b7,PodSandboxId:cf42cf6b2bc99285118aedd1a788d3985775a28b6e61ea8ca14ccd3e32ae3f03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963507706111777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l25kd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86947930-0d47-407a-b876-b482596fbe8f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b14158df556054b9512a278737e089135111eb66e6c7704568db076062574121,PodSandboxId:0735c91e139826b75f188c2b1ee3d528c8d08871ecd4074253ef8afe27cc6394,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963507555614630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lnm92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6caefe0-e883-4460-87de-25ee97191e1a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb536f9758a829bd1712db0f4afcb55637f0ae9c60271ae7fd453ef123c2f3d8,PodSandboxId:3d30ef69309a1781dd6ecf6e58ecf1a01f73e66ad2340217612d1bc2541cfacb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724963507022111607,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ptswc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96c01414-e8e8-4731-824b-11d636285fb3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7bd61a6fe654d4ad5c149a10789b03edc6d49d5d95bef662753f186c0f929,PodSandboxId:c6c9318f8ce085f432f5cf94524fe98fcddfbd1c738bf51adc0515e55053320b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963495960510210,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5e67ab7070b5ee816dfb9f010341b41,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73b74ec8a00731f45de32583d0f603e164ce0d29fc981ba9d8539c1c794612a0,PodSandboxId:0d7a1fcbe06bde122d266507f385676279485ca5e151bf683e3aadf5f916a152,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963495992555346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5832a6d2361afe00d8adcf51f306780e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882e84e9fa32f87b2b6ddae42319c25903c8398224a894c8499553878bc782ab,PodSandboxId:397f31c2e89b9c9daf0dad789a94e7007ae4b3e643978e4882a785794fe07f12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963495922998565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a534bbc2142697d334cc8b549bf3b1f2,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe0c33110958bd07c8bba63fecb131e682266c5d51683606fc412ffa9e2be04,PodSandboxId:3ca860b0d95ccc4fe54c1384cbfbf5d044111672c87d26d1347e8deae4a19820,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963495853956905,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f880e143b217d3e5f7e4426cfaeb999,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6d34394831076ef7f414268020afd8668b079b4c58634f4ff73b97a538b7c4,PodSandboxId:2c773eb8560fd46c5f4c95aa7ad228b7d284855a0831a838a8579814e2c31766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724963208675808831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5832a6d2361afe00d8adcf51f306780e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ffb84306-1d93-4235-a6c9-225b1cdeb70c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.526008603Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cbe12fb2-d08e-4941-b2a1-c392b7d5b7d0 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.526107178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cbe12fb2-d08e-4941-b2a1-c392b7d5b7d0 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.527052128Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0b41916-c78e-484d-afb1-334f8fcb28f6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.527432832Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964601527414192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0b41916-c78e-484d-afb1-334f8fcb28f6 name=/runtime.v1.ImageService/ImageFsInfo
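The ImageFsInfo exchange above is how the kubelet tracks image storage: CRI-O reports the image filesystem's mountpoint (/var/lib/containers/storage/overlay-images) together with its used bytes and inodes. Extending the earlier Version sketch with the image service side of the API (conn and ctx as above):

    // Query image filesystem usage, the same call logged as
    // /runtime.v1.ImageService/ImageFsInfo.
    img := runtimeapi.NewImageServiceClient(conn)
    fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
    if err != nil {
        log.Fatal(err)
    }
    for _, f := range fs.ImageFilesystems {
        fmt.Printf("%s: %d bytes, %d inodes\n",
            f.FsId.Mountpoint, f.UsedBytes.Value, f.InodesUsed.Value)
    }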
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.527920986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a7ffb45-8637-46d3-8976-4f64f598caed name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.527972256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a7ffb45-8637-46d3-8976-4f64f598caed name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.528176719Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e74650e815d0ebb9e571fffeb67d5daf0eecc3b9277d002bf215d8c23e746ce1,PodSandboxId:6314bd63c8faffd7a2132769f0b5566b225309eac204c502496a1d9009058d71,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963508150201334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81531989-d045-44fb-b1a1-0817af27c804,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e00316ba74bfecb01e600a5b225e97d007f7e808c279766683e5ffc0d89b5b7,PodSandboxId:cf42cf6b2bc99285118aedd1a788d3985775a28b6e61ea8ca14ccd3e32ae3f03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963507706111777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l25kd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86947930-0d47-407a-b876-b482596fbe8f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b14158df556054b9512a278737e089135111eb66e6c7704568db076062574121,PodSandboxId:0735c91e139826b75f188c2b1ee3d528c8d08871ecd4074253ef8afe27cc6394,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963507555614630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lnm92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6caefe0-e883-4460-87de-25ee97191e1a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb536f9758a829bd1712db0f4afcb55637f0ae9c60271ae7fd453ef123c2f3d8,PodSandboxId:3d30ef69309a1781dd6ecf6e58ecf1a01f73e66ad2340217612d1bc2541cfacb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724963507022111607,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ptswc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96c01414-e8e8-4731-824b-11d636285fb3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7bd61a6fe654d4ad5c149a10789b03edc6d49d5d95bef662753f186c0f929,PodSandboxId:c6c9318f8ce085f432f5cf94524fe98fcddfbd1c738bf51adc0515e55053320b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963495960510210,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5e67ab7070b5ee816dfb9f010341b41,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73b74ec8a00731f45de32583d0f603e164ce0d29fc981ba9d8539c1c794612a0,PodSandboxId:0d7a1fcbe06bde122d266507f385676279485ca5e151bf683e3aadf5f916a152,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963495992555346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5832a6d2361afe00d8adcf51f306780e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882e84e9fa32f87b2b6ddae42319c25903c8398224a894c8499553878bc782ab,PodSandboxId:397f31c2e89b9c9daf0dad789a94e7007ae4b3e643978e4882a785794fe07f12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963495922998565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a534bbc2142697d334cc8b549bf3b1f2,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe0c33110958bd07c8bba63fecb131e682266c5d51683606fc412ffa9e2be04,PodSandboxId:3ca860b0d95ccc4fe54c1384cbfbf5d044111672c87d26d1347e8deae4a19820,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963495853956905,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f880e143b217d3e5f7e4426cfaeb999,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6d34394831076ef7f414268020afd8668b079b4c58634f4ff73b97a538b7c4,PodSandboxId:2c773eb8560fd46c5f4c95aa7ad228b7d284855a0831a838a8579814e2c31766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724963208675808831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5832a6d2361afe00d8adcf51f306780e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a7ffb45-8637-46d3-8976-4f64f598caed name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.539991173Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d23896c-b0e3-467b-af14-85a86b1b2722 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.541988148Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3cf57383efef9a7aa6750b10fcd4405824f3545388b883f0d30e3b4ca0a31202,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-6sdqg,Uid:2c9efadb-89bb-4aa6-b0f0-ddcb3e931674,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724963507983520998,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-6sdqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c9efadb-89bb-4aa6-b0f0-ddcb3e931674,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T20:31:47.671993203Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6314bd63c8faffd7a2132769f0b5566b225309eac204c502496a1d9009058d71,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:81531989-d045-44fb-b1a1-0817af27c804,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724963507571660502,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81531989-d045-44fb-b1a1-0817af27c804,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-29T20:31:47.258432253Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cf42cf6b2bc99285118aedd1a788d3985775a28b6e61ea8ca14ccd3e32ae3f03,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-l25kd,Uid:86947930-0d47-407a-b876-b482596fbe8f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724963506851820930,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-l25kd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86947930-0d47-407a-b876-b482596fbe8f,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T20:31:46.539245053Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0735c91e139826b75f188c2b1ee3d528c8d08871ecd4074253ef8afe27cc6394,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-lnm92,Uid:a6caefe0-e883-4460-87de-25ee97191e1a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724963506835365046,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-lnm92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6caefe0-e883-4460-87de-25ee97191e1a,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T20:31:46.509201729Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3d30ef69309a1781dd6ecf6e58ecf1a01f73e66ad2340217612d1bc2541cfacb,Metadata:&PodSandboxMetadata{Name:kube-proxy-ptswc,Uid:96c01414-e8e8-4731-824b-11d636285fb3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724963506749066359,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ptswc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96c01414-e8e8-4731-824b-11d636285fb3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T20:31:46.434537597Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c6c9318f8ce085f432f5cf94524fe98fcddfbd1c738bf51adc0515e55053320b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-145096,Uid:a5e67ab7070b5ee816dfb9f010341b41,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724963495730153124,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5e67ab7070b5ee816dfb9f010341b41,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a5e67ab7070b5ee816dfb9f010341b41,kubernetes.io/config.seen: 2024-08-29T20:31:35.272800474Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0d7a1fcbe06bde122d266507f385676279485ca5e151bf683e3aadf5f916a152,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-145096,Uid:5832a6d2361afe00d8adcf51f306780e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724963495729723579,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5832a6d2361afe00d8adcf51f306780e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.140:8444,kubernetes.io/config.hash: 5832a6d2361afe00d8adcf51f306780e,kubernetes.io/config.seen: 2024-08-29T20:31:35.272797970Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:397f31c2e89b9c9daf0dad789a94e7007ae4b3e643978e4882a785794fe07f12,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-145096,Uid:a534bbc2142697d334cc8b549bf3b1f2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724963495724138331,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a534bbc2142697d334cc8b549bf3b1f2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a534bbc2142697d334cc8b549bf3b1f2,kubernetes.io/config.seen: 2024-08-29T20:31:35.272799372Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3ca860b0d95ccc4fe54c1384cbfbf5d044111672c87d26d1347e8deae4a19820,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-145096,Uid:2f880e143b217d3e5f7e4426cfaeb999,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724963495701246408,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f880e143b217d3e5f7e4426cfaeb999,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.140:2379,kubernetes.io/config.hash: 2f880e143b217d3e5f7e4426cfaeb999,kubernetes.io/config.seen: 2024-08-29T20:31:35.272794093Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=1d23896c-b0e3-467b-af14-85a86b1b2722 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.543184610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e051b7f2-d6e8-47b1-bb25-36def1bf7d51 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.543250775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e051b7f2-d6e8-47b1-bb25-36def1bf7d51 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.543438633Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e74650e815d0ebb9e571fffeb67d5daf0eecc3b9277d002bf215d8c23e746ce1,PodSandboxId:6314bd63c8faffd7a2132769f0b5566b225309eac204c502496a1d9009058d71,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963508150201334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81531989-d045-44fb-b1a1-0817af27c804,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e00316ba74bfecb01e600a5b225e97d007f7e808c279766683e5ffc0d89b5b7,PodSandboxId:cf42cf6b2bc99285118aedd1a788d3985775a28b6e61ea8ca14ccd3e32ae3f03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963507706111777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l25kd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86947930-0d47-407a-b876-b482596fbe8f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b14158df556054b9512a278737e089135111eb66e6c7704568db076062574121,PodSandboxId:0735c91e139826b75f188c2b1ee3d528c8d08871ecd4074253ef8afe27cc6394,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963507555614630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lnm92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6caefe0-e883-4460-87de-25ee97191e1a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb536f9758a829bd1712db0f4afcb55637f0ae9c60271ae7fd453ef123c2f3d8,PodSandboxId:3d30ef69309a1781dd6ecf6e58ecf1a01f73e66ad2340217612d1bc2541cfacb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724963507022111607,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ptswc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96c01414-e8e8-4731-824b-11d636285fb3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7bd61a6fe654d4ad5c149a10789b03edc6d49d5d95bef662753f186c0f929,PodSandboxId:c6c9318f8ce085f432f5cf94524fe98fcddfbd1c738bf51adc0515e55053320b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963495960510210,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5e67ab7070b5ee816dfb9f010341b41,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73b74ec8a00731f45de32583d0f603e164ce0d29fc981ba9d8539c1c794612a0,PodSandboxId:0d7a1fcbe06bde122d266507f385676279485ca5e151bf683e3aadf5f916a152,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963495992555346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5832a6d2361afe00d8adcf51f306780e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882e84e9fa32f87b2b6ddae42319c25903c8398224a894c8499553878bc782ab,PodSandboxId:397f31c2e89b9c9daf0dad789a94e7007ae4b3e643978e4882a785794fe07f12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963495922998565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a534bbc2142697d334cc8b549bf3b1f2,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe0c33110958bd07c8bba63fecb131e682266c5d51683606fc412ffa9e2be04,PodSandboxId:3ca860b0d95ccc4fe54c1384cbfbf5d044111672c87d26d1347e8deae4a19820,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963495853956905,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f880e143b217d3e5f7e4426cfaeb999,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e051b7f2-d6e8-47b1-bb25-36def1bf7d51 name=/runtime.v1.RuntimeService/ListContainers
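Unlike the earlier unfiltered dumps, this ListContainers call carried State:&ContainerStateValue{State:CONTAINER_RUNNING,}, which is why the CONTAINER_EXITED kube-apiserver attempt from the previous responses is absent here. Building that filtered request with the same cri-api types (rt and ctx as in the Version sketch above):

    // List only running containers, mirroring the filtered request in the log.
    resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
        Filter: &runtimeapi.ContainerFilter{
            State: &runtimeapi.ContainerStateValue{
                State: runtimeapi.ContainerState_CONTAINER_RUNNING,
            },
        },
    })
    if err != nil {
        log.Fatal(err)
    }
    for _, c := range resp.Containers {
        fmt.Println(c.Metadata.Name, c.State)
    }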
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.563395406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eab5a4c8-4a49-44c7-a15f-1303329e9c31 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.563487101Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eab5a4c8-4a49-44c7-a15f-1303329e9c31 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.564820040Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0cacbeb-f481-4f2e-819b-43a39710a66f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.565275678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964601565251323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0cacbeb-f481-4f2e-819b-43a39710a66f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.565787338Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=837afcca-6c4b-42ad-98ff-7c431eae2307 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.565864157Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=837afcca-6c4b-42ad-98ff-7c431eae2307 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:50:01 default-k8s-diff-port-145096 crio[713]: time="2024-08-29 20:50:01.566047104Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e74650e815d0ebb9e571fffeb67d5daf0eecc3b9277d002bf215d8c23e746ce1,PodSandboxId:6314bd63c8faffd7a2132769f0b5566b225309eac204c502496a1d9009058d71,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963508150201334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81531989-d045-44fb-b1a1-0817af27c804,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e00316ba74bfecb01e600a5b225e97d007f7e808c279766683e5ffc0d89b5b7,PodSandboxId:cf42cf6b2bc99285118aedd1a788d3985775a28b6e61ea8ca14ccd3e32ae3f03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963507706111777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l25kd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86947930-0d47-407a-b876-b482596fbe8f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b14158df556054b9512a278737e089135111eb66e6c7704568db076062574121,PodSandboxId:0735c91e139826b75f188c2b1ee3d528c8d08871ecd4074253ef8afe27cc6394,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963507555614630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lnm92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6caefe0-e883-4460-87de-25ee97191e1a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb536f9758a829bd1712db0f4afcb55637f0ae9c60271ae7fd453ef123c2f3d8,PodSandboxId:3d30ef69309a1781dd6ecf6e58ecf1a01f73e66ad2340217612d1bc2541cfacb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724963507022111607,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ptswc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96c01414-e8e8-4731-824b-11d636285fb3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7bd61a6fe654d4ad5c149a10789b03edc6d49d5d95bef662753f186c0f929,PodSandboxId:c6c9318f8ce085f432f5cf94524fe98fcddfbd1c738bf51adc0515e55053320b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963495960510210,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5e67ab7070b5ee816dfb9f010341b41,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73b74ec8a00731f45de32583d0f603e164ce0d29fc981ba9d8539c1c794612a0,PodSandboxId:0d7a1fcbe06bde122d266507f385676279485ca5e151bf683e3aadf5f916a152,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963495992555346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5832a6d2361afe00d8adcf51f306780e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882e84e9fa32f87b2b6ddae42319c25903c8398224a894c8499553878bc782ab,PodSandboxId:397f31c2e89b9c9daf0dad789a94e7007ae4b3e643978e4882a785794fe07f12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963495922998565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a534bbc2142697d334cc8b549bf3b1f2,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe0c33110958bd07c8bba63fecb131e682266c5d51683606fc412ffa9e2be04,PodSandboxId:3ca860b0d95ccc4fe54c1384cbfbf5d044111672c87d26d1347e8deae4a19820,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17249634
95853956905,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f880e143b217d3e5f7e4426cfaeb999,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6d34394831076ef7f414268020afd8668b079b4c58634f4ff73b97a538b7c4,PodSandboxId:2c773eb8560fd46c5f4c95aa7ad228b7d284855a0831a838a8579814e2c31766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724963208675808831,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-145096,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5832a6d2361afe00d8adcf51f306780e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=837afcca-6c4b-42ad-98ff-7c431eae2307 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e74650e815d0e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner       0                   6314bd63c8faf       storage-provisioner
	9e00316ba74bf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Running             coredns                   0                   cf42cf6b2bc99       coredns-6f6b679f8f-l25kd
	b14158df55605       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Running             coredns                   0                   0735c91e13982       coredns-6f6b679f8f-lnm92
	fb536f9758a82       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   18 minutes ago      Running             kube-proxy                0                   3d30ef69309a1       kube-proxy-ptswc
	73b74ec8a0073       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   18 minutes ago      Running             kube-apiserver            2                   0d7a1fcbe06bd       kube-apiserver-default-k8s-diff-port-145096
	73a7bd61a6fe6       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   18 minutes ago      Running             kube-scheduler            2                   c6c9318f8ce08       kube-scheduler-default-k8s-diff-port-145096
	882e84e9fa32f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   18 minutes ago      Running             kube-controller-manager   2                   397f31c2e89b9       kube-controller-manager-default-k8s-diff-port-145096
	0fe0c33110958       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   18 minutes ago      Running             etcd                      2                   3ca860b0d95cc       etcd-default-k8s-diff-port-145096
	da6d343948310       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   23 minutes ago      Exited              kube-apiserver            1                   2c773eb8560fd       kube-apiserver-default-k8s-diff-port-145096
	
	
	==> coredns [9e00316ba74bfecb01e600a5b225e97d007f7e808c279766683e5ffc0d89b5b7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b14158df556054b9512a278737e089135111eb66e6c7704568db076062574121] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-145096
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-145096
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=default-k8s-diff-port-145096
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T20_31_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 20:31:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-145096
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 20:49:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 20:47:09 +0000   Thu, 29 Aug 2024 20:31:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 20:47:09 +0000   Thu, 29 Aug 2024 20:31:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 20:47:09 +0000   Thu, 29 Aug 2024 20:31:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 20:47:09 +0000   Thu, 29 Aug 2024 20:31:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.140
	  Hostname:    default-k8s-diff-port-145096
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2afa83c73ada46a2971bb4d5d93e2336
	  System UUID:                2afa83c7-3ada-46a2-971b-b4d5d93e2336
	  Boot ID:                    f8846589-9d1a-4563-949d-ad4a4ac61d53
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-l25kd                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-6f6b679f8f-lnm92                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-default-k8s-diff-port-145096                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kube-apiserver-default-k8s-diff-port-145096             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-145096    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-ptswc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-default-k8s-diff-port-145096             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-6867b74b74-6sdqg                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node default-k8s-diff-port-145096 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node default-k8s-diff-port-145096 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node default-k8s-diff-port-145096 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m   node-controller  Node default-k8s-diff-port-145096 event: Registered Node default-k8s-diff-port-145096 in Controller
	
	
	==> dmesg <==
	[  +0.060252] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050563] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.098792] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.463652] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.560476] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.055091] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.063289] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059637] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.227807] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.141345] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.310908] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[  +4.284938] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +0.067478] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.874590] systemd-fstab-generator[916]: Ignoring "noauto" option for root device
	[  +4.683226] kauditd_printk_skb: 97 callbacks suppressed
	[Aug29 20:27] kauditd_printk_skb: 90 callbacks suppressed
	[Aug29 20:31] systemd-fstab-generator[2539]: Ignoring "noauto" option for root device
	[  +0.067514] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.497110] systemd-fstab-generator[2863]: Ignoring "noauto" option for root device
	[  +0.090496] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.793672] systemd-fstab-generator[2976]: Ignoring "noauto" option for root device
	[  +0.731391] kauditd_printk_skb: 34 callbacks suppressed
	[  +9.265117] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [0fe0c33110958bd07c8bba63fecb131e682266c5d51683606fc412ffa9e2be04] <==
	{"level":"info","ts":"2024-08-29T20:31:36.723336Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T20:31:36.737863Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T20:31:36.729652Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3501d2cdd2f1863a","local-member-id":"bc75878aaf44c549","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:31:36.730272Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T20:31:36.740254Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T20:31:36.768752Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T20:31:36.768907Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:31:36.768956Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:31:36.776750Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.140:2379"}
	{"level":"info","ts":"2024-08-29T20:41:37.240647Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-08-29T20:41:37.249474Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":684,"took":"8.350072ms","hash":1788932567,"current-db-size-bytes":2207744,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2207744,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-29T20:41:37.249872Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1788932567,"revision":684,"compact-revision":-1}
	{"level":"info","ts":"2024-08-29T20:46:37.257079Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":927}
	{"level":"info","ts":"2024-08-29T20:46:37.260928Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":927,"took":"3.196925ms","hash":3721681579,"current-db-size-bytes":2207744,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-29T20:46:37.261006Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3721681579,"revision":927,"compact-revision":684}
	{"level":"info","ts":"2024-08-29T20:46:43.069065Z","caller":"traceutil/trace.go:171","msg":"trace[748727337] transaction","detail":"{read_only:false; response_revision:1177; number_of_response:1; }","duration":"121.614354ms","start":"2024-08-29T20:46:42.947421Z","end":"2024-08-29T20:46:43.069035Z","steps":["trace[748727337] 'process raft request'  (duration: 121.484199ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T20:47:09.756638Z","caller":"traceutil/trace.go:171","msg":"trace[327506406] transaction","detail":"{read_only:false; response_revision:1198; number_of_response:1; }","duration":"220.647726ms","start":"2024-08-29T20:47:09.535968Z","end":"2024-08-29T20:47:09.756616Z","steps":["trace[327506406] 'process raft request'  (duration: 220.430011ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T20:47:53.624100Z","caller":"traceutil/trace.go:171","msg":"trace[265970572] transaction","detail":"{read_only:false; response_revision:1235; number_of_response:1; }","duration":"112.624566ms","start":"2024-08-29T20:47:53.511440Z","end":"2024-08-29T20:47:53.624065Z","steps":["trace[265970572] 'process raft request'  (duration: 112.510576ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T20:48:38.061680Z","caller":"traceutil/trace.go:171","msg":"trace[684076624] transaction","detail":"{read_only:false; response_revision:1271; number_of_response:1; }","duration":"109.396288ms","start":"2024-08-29T20:48:37.952259Z","end":"2024-08-29T20:48:38.061655Z","steps":["trace[684076624] 'process raft request'  (duration: 109.174574ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T20:48:41.269724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.608885ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-29T20:48:41.269897Z","caller":"traceutil/trace.go:171","msg":"trace[799517389] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:0; response_revision:1274; }","duration":"120.966521ms","start":"2024-08-29T20:48:41.148915Z","end":"2024-08-29T20:48:41.269882Z","steps":["trace[799517389] 'count revisions from in-memory index tree'  (duration: 120.552977ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T20:49:02.343895Z","caller":"traceutil/trace.go:171","msg":"trace[1619402467] transaction","detail":"{read_only:false; response_revision:1291; number_of_response:1; }","duration":"142.372442ms","start":"2024-08-29T20:49:02.201504Z","end":"2024-08-29T20:49:02.343877Z","steps":["trace[1619402467] 'process raft request'  (duration: 142.249163ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T20:49:04.467744Z","caller":"traceutil/trace.go:171","msg":"trace[370125618] transaction","detail":"{read_only:false; response_revision:1292; number_of_response:1; }","duration":"115.770296ms","start":"2024-08-29T20:49:04.351958Z","end":"2024-08-29T20:49:04.467728Z","steps":["trace[370125618] 'process raft request'  (duration: 115.332914ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T20:49:30.754331Z","caller":"traceutil/trace.go:171","msg":"trace[532899793] transaction","detail":"{read_only:false; response_revision:1313; number_of_response:1; }","duration":"147.316885ms","start":"2024-08-29T20:49:30.606921Z","end":"2024-08-29T20:49:30.754238Z","steps":["trace[532899793] 'process raft request'  (duration: 66.156523ms)","trace[532899793] 'compare'  (duration: 80.962258ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T20:49:43.074532Z","caller":"traceutil/trace.go:171","msg":"trace[652436582] transaction","detail":"{read_only:false; response_revision:1323; number_of_response:1; }","duration":"251.108513ms","start":"2024-08-29T20:49:42.823405Z","end":"2024-08-29T20:49:43.074514Z","steps":["trace[652436582] 'process raft request'  (duration: 250.990206ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:50:01 up 23 min,  0 users,  load average: 0.24, 0.31, 0.23
	Linux default-k8s-diff-port-145096 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [73b74ec8a00731f45de32583d0f603e164ce0d29fc981ba9d8539c1c794612a0] <==
	E0829 20:46:39.608849       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0829 20:46:39.608895       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0829 20:46:39.610034       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:46:39.610144       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 20:47:39.610501       1 handler_proxy.go:99] no RequestInfo found in the context
	W0829 20:47:39.610494       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:47:39.610727       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0829 20:47:39.610777       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 20:47:39.612770       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:47:39.612809       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 20:49:39.613233       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:49:39.613854       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 20:49:39.613740       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:49:39.614181       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 20:49:39.615305       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:49:39.615387       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [da6d34394831076ef7f414268020afd8668b079b4c58634f4ff73b97a538b7c4] <==
	W0829 20:31:28.649694       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.698204       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.716983       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.755753       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.790787       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.800857       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.820361       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.843784       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.875641       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:28.913253       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.007707       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.018169       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.019510       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.041188       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.052810       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.064440       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.179219       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.196063       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.212515       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.266778       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.281148       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.424672       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.458184       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.470653       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:29.821268       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [882e84e9fa32f87b2b6ddae42319c25903c8398224a894c8499553878bc782ab] <==
	E0829 20:44:45.771750       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:44:46.222956       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:45:15.778392       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:45:16.231390       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:45:45.785685       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:45:46.239426       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:46:15.792272       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:46:16.249913       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:46:45.799424       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:46:46.258863       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 20:47:09.761081       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-145096"
	E0829 20:47:15.805860       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:47:16.267686       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:47:45.813825       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:47:46.276740       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 20:48:00.556760       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="397.494µs"
	I0829 20:48:12.555697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="223.223µs"
	E0829 20:48:15.821303       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:48:16.287932       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:48:45.829753       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:48:46.298838       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:49:15.835652       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:49:16.308067       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:49:45.842504       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:49:46.316661       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fb536f9758a829bd1712db0f4afcb55637f0ae9c60271ae7fd453ef123c2f3d8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 20:31:47.487144       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 20:31:47.598392       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.140"]
	E0829 20:31:47.598484       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 20:31:48.104791       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 20:31:48.104831       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 20:31:48.104855       1 server_linux.go:169] "Using iptables Proxier"
	I0829 20:31:48.124657       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 20:31:48.124967       1 server.go:483] "Version info" version="v1.31.0"
	I0829 20:31:48.124978       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 20:31:48.127410       1 config.go:197] "Starting service config controller"
	I0829 20:31:48.127426       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 20:31:48.127445       1 config.go:104] "Starting endpoint slice config controller"
	I0829 20:31:48.127449       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 20:31:48.127858       1 config.go:326] "Starting node config controller"
	I0829 20:31:48.127867       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 20:31:48.229739       1 shared_informer.go:320] Caches are synced for service config
	I0829 20:31:48.229816       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 20:31:48.237673       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [73a7bd61a6fe654d4ad5c149a10789b03edc6d49d5d95bef662753f186c0f929] <==
	W0829 20:31:39.490839       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 20:31:39.490990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.575902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 20:31:39.576012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.737398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 20:31:39.737522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.753614       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 20:31:39.753672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.800905       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 20:31:39.800957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.806932       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 20:31:39.806985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.857127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 20:31:39.857255       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.883095       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0829 20:31:39.884716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.894850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 20:31:39.895087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.897205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 20:31:39.897265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:39.902253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 20:31:39.902334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:31:40.066810       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 20:31:40.066858       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 20:31:41.831974       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 20:48:51 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:48:51.823359    2870 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964531822371676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:49:01 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:01.826761    2870 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964541825891164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:49:01 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:01.826854    2870 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964541825891164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:49:02 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:02.541361    2870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6sdqg" podUID="2c9efadb-89bb-4aa6-b0f0-ddcb3e931674"
	Aug 29 20:49:11 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:11.829542    2870 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964551829039221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:49:11 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:11.830014    2870 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964551829039221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:49:14 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:14.539028    2870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6sdqg" podUID="2c9efadb-89bb-4aa6-b0f0-ddcb3e931674"
	Aug 29 20:49:21 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:21.832925    2870 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964561832410975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:49:21 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:21.833320    2870 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964561832410975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:49:28 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:28.538146    2870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6sdqg" podUID="2c9efadb-89bb-4aa6-b0f0-ddcb3e931674"
	Aug 29 20:49:31 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:31.834864    2870 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964571834490086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:49:31 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:31.834922    2870 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964571834490086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:49:41 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:41.540864    2870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6sdqg" podUID="2c9efadb-89bb-4aa6-b0f0-ddcb3e931674"
	Aug 29 20:49:41 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:41.616463    2870 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 20:49:41 default-k8s-diff-port-145096 kubelet[2870]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 20:49:41 default-k8s-diff-port-145096 kubelet[2870]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 20:49:41 default-k8s-diff-port-145096 kubelet[2870]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 20:49:41 default-k8s-diff-port-145096 kubelet[2870]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 20:49:41 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:41.837908    2870 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964581837279205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:49:41 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:41.837979    2870 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964581837279205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:49:51 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:51.840274    2870 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964591839719604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:49:51 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:51.840320    2870 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964591839719604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:49:56 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:49:56.538821    2870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6sdqg" podUID="2c9efadb-89bb-4aa6-b0f0-ddcb3e931674"
	Aug 29 20:50:01 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:50:01.842744    2870 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964601842280576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:50:01 default-k8s-diff-port-145096 kubelet[2870]: E0829 20:50:01.842789    2870 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964601842280576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [e74650e815d0ebb9e571fffeb67d5daf0eecc3b9277d002bf215d8c23e746ce1] <==
	I0829 20:31:48.306160       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 20:31:48.349146       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 20:31:48.349438       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 20:31:48.382051       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 20:31:48.382429       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-145096_d6e631d5-24cb-48c0-ba36-ad4244266dd5!
	I0829 20:31:48.383710       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"725bf0d4-3b04-47ea-a1d5-d42568638d45", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-145096_d6e631d5-24cb-48c0-ba36-ad4244266dd5 became leader
	I0829 20:31:48.483220       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-145096_d6e631d5-24cb-48c0-ba36-ad4244266dd5!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-145096 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-6sdqg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-145096 describe pod metrics-server-6867b74b74-6sdqg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-145096 describe pod metrics-server-6867b74b74-6sdqg: exit status 1 (57.573246ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-6sdqg" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-145096 describe pod metrics-server-6867b74b74-6sdqg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.19s)
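The AddonExistsAfterStop failures in this run all end the same way: the test polls the cluster for a Running pod matching "k8s-app=kubernetes-dashboard" and gives up at its 9m0s deadline with "context deadline exceeded". As a rough illustration of that polling pattern, here is a minimal standalone Go sketch; it is not minikube's actual PodWait code. The profile name and the 9m0s deadline are taken from the logs above; the kubectl invocation and 5s poll interval are assumptions.

```go
// waitpod.go - illustrative sketch of a "wait for pods matching <selector>"
// loop like the one the failing tests report. Assumes kubectl is on PATH and
// the named minikube context exists; not taken from the minikube source tree.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const (
		kubeContext = "no-preload-397724"            // profile name from this report
		namespace   = "kubernetes-dashboard"         // addon namespace
		selector    = "k8s-app=kubernetes-dashboard" // label the test waits on
	)
	deadline := time.Now().Add(9 * time.Minute) // matches the test's 9m0s wait

	for time.Now().Before(deadline) {
		// Ask for the phase of every pod matching the selector.
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-n", namespace, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("pod matching", selector, "is Running")
			return
		}
		time.Sleep(5 * time.Second) // assumed poll interval
	}
	// This is the state the failing tests ended in.
	fmt.Fprintln(os.Stderr, "timed out waiting for", selector, ": context deadline exceeded")
	os.Exit(1)
}
```

Against this run, such a loop would never observe a Running pod: the corresponding `addons enable dashboard` commands in the Audit table below have no End Time, so the dashboard deployment was never created and the wait can only expire.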

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (287.28s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-397724 -n no-preload-397724
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-29 20:46:10.12688458 +0000 UTC m=+6634.297345835
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-397724 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-397724 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.777µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-397724 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397724 -n no-preload-397724
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-397724 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-397724 logs -n 25: (2.499439529s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-397724                                   | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-388383            | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC | 29 Aug 24 20:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-695305             | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-695305                  | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-695305 --memory=2200 --alsologtostderr   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:20 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-695305 image list                           | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:21 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-032002        | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-397724                  | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-397724                                   | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-388383                 | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-145096  | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-032002             | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-145096       | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC | 29 Aug 24 20:31 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 20:24:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 20:24:16.618808   68084 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:24:16.619043   68084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:24:16.619051   68084 out.go:358] Setting ErrFile to fd 2...
	I0829 20:24:16.619055   68084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:24:16.619206   68084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:24:16.619741   68084 out.go:352] Setting JSON to false
	I0829 20:24:16.620649   68084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7604,"bootTime":1724955453,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 20:24:16.620702   68084 start.go:139] virtualization: kvm guest
	I0829 20:24:16.622891   68084 out.go:177] * [default-k8s-diff-port-145096] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 20:24:16.624228   68084 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 20:24:16.624256   68084 notify.go:220] Checking for updates...
	I0829 20:24:16.627123   68084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 20:24:16.628611   68084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:24:16.629858   68084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:24:16.631013   68084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 20:24:16.632116   68084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 20:24:16.633630   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:24:16.634042   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:24:16.634080   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:24:16.648879   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0829 20:24:16.649315   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:24:16.649875   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:24:16.649893   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:24:16.650274   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:24:16.650504   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:24:16.650776   68084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 20:24:16.651053   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:24:16.651111   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:24:16.665964   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0829 20:24:16.666402   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:24:16.666918   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:24:16.666937   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:24:16.667250   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:24:16.667435   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:24:16.698712   68084 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 20:24:16.700010   68084 start.go:297] selected driver: kvm2
	I0829 20:24:16.700023   68084 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:24:16.700131   68084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 20:24:16.700915   68084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:24:16.700998   68084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 20:24:16.715940   68084 install.go:137] /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0829 20:24:16.716321   68084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:24:16.716388   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:24:16.716405   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:24:16.716452   68084 start.go:340] cluster config:
	{Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:24:16.716563   68084 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:24:16.718175   68084 out.go:177] * Starting "default-k8s-diff-port-145096" primary control-plane node in "default-k8s-diff-port-145096" cluster
	I0829 20:24:16.258820   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:16.719204   68084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:24:16.719231   68084 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 20:24:16.719237   68084 cache.go:56] Caching tarball of preloaded images
	I0829 20:24:16.719296   68084 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 20:24:16.719305   68084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 20:24:16.719385   68084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/config.json ...
	I0829 20:24:16.719549   68084 start.go:360] acquireMachinesLock for default-k8s-diff-port-145096: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:24:22.338805   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:25.410778   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:31.490844   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:34.562885   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:40.642793   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:43.714939   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:49.794765   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:52.866858   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:58.946771   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:02.018832   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:08.098829   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:11.170833   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:17.250794   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:20.322926   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:26.402827   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:29.474844   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:35.554771   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:38.626850   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:41.630257   66989 start.go:364] duration metric: took 4m26.950412835s to acquireMachinesLock for "embed-certs-388383"
	I0829 20:25:41.630308   66989 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:25:41.630316   66989 fix.go:54] fixHost starting: 
	I0829 20:25:41.630791   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:25:41.630828   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:25:41.646005   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32873
	I0829 20:25:41.646405   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:25:41.646932   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:25:41.646959   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:25:41.647308   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:25:41.647525   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:25:41.647686   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:25:41.649457   66989 fix.go:112] recreateIfNeeded on embed-certs-388383: state=Stopped err=<nil>
	I0829 20:25:41.649491   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	W0829 20:25:41.649639   66989 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:25:41.651109   66989 out.go:177] * Restarting existing kvm2 VM for "embed-certs-388383" ...
	I0829 20:25:41.627651   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:25:41.627705   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:25:41.628067   66841 buildroot.go:166] provisioning hostname "no-preload-397724"
	I0829 20:25:41.628089   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:25:41.628259   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:25:41.630106   66841 machine.go:96] duration metric: took 4m35.46951337s to provisionDockerMachine
	I0829 20:25:41.630148   66841 fix.go:56] duration metric: took 4m35.494271139s for fixHost
	I0829 20:25:41.630159   66841 start.go:83] releasing machines lock for "no-preload-397724", held for 4m35.494325078s
	W0829 20:25:41.630182   66841 start.go:714] error starting host: provision: host is not running
	W0829 20:25:41.630284   66841 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0829 20:25:41.630295   66841 start.go:729] Will try again in 5 seconds ...
	I0829 20:25:41.652159   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Start
	I0829 20:25:41.652318   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring networks are active...
	I0829 20:25:41.653011   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring network default is active
	I0829 20:25:41.653426   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring network mk-embed-certs-388383 is active
	I0829 20:25:41.653824   66989 main.go:141] libmachine: (embed-certs-388383) Getting domain xml...
	I0829 20:25:41.654765   66989 main.go:141] libmachine: (embed-certs-388383) Creating domain...
	I0829 20:25:42.860512   66989 main.go:141] libmachine: (embed-certs-388383) Waiting to get IP...
	I0829 20:25:42.861297   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:42.861661   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:42.861739   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:42.861649   68412 retry.go:31] will retry after 207.172422ms: waiting for machine to come up
	I0829 20:25:43.070026   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.070414   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.070445   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.070368   68412 retry.go:31] will retry after 336.815982ms: waiting for machine to come up
	I0829 20:25:43.408817   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.409144   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.409182   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.409117   68412 retry.go:31] will retry after 330.159156ms: waiting for machine to come up
	I0829 20:25:43.740518   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.741039   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.741065   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.741002   68412 retry.go:31] will retry after 528.906592ms: waiting for machine to come up
	I0829 20:25:44.271695   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:44.272286   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:44.272344   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:44.272280   68412 retry.go:31] will retry after 616.92568ms: waiting for machine to come up
	I0829 20:25:46.631383   66841 start.go:360] acquireMachinesLock for no-preload-397724: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:25:44.891133   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:44.891535   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:44.891566   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:44.891499   68412 retry.go:31] will retry after 907.330558ms: waiting for machine to come up
	I0829 20:25:45.800480   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:45.800858   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:45.800885   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:45.800840   68412 retry.go:31] will retry after 1.189775318s: waiting for machine to come up
	I0829 20:25:46.992687   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:46.993155   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:46.993189   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:46.993142   68412 retry.go:31] will retry after 1.467244635s: waiting for machine to come up
	I0829 20:25:48.462770   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:48.463201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:48.463226   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:48.463173   68412 retry.go:31] will retry after 1.602764839s: waiting for machine to come up
	I0829 20:25:50.067082   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:50.067608   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:50.067638   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:50.067543   68412 retry.go:31] will retry after 1.562244323s: waiting for machine to come up
	I0829 20:25:51.632201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:51.632705   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:51.632731   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:51.632650   68412 retry.go:31] will retry after 1.747220365s: waiting for machine to come up
	I0829 20:25:53.382010   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:53.382463   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:53.382527   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:53.382454   68412 retry.go:31] will retry after 3.446054845s: waiting for machine to come up
	I0829 20:25:56.830511   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:56.830954   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:56.830988   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:56.830908   68412 retry.go:31] will retry after 4.53995219s: waiting for machine to come up
	I0829 20:26:02.603329   67607 start.go:364] duration metric: took 3m23.680319578s to acquireMachinesLock for "old-k8s-version-032002"
	I0829 20:26:02.603393   67607 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:02.603404   67607 fix.go:54] fixHost starting: 
	I0829 20:26:02.603837   67607 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:02.603884   67607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:02.621398   67607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35977
	I0829 20:26:02.621840   67607 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:02.622425   67607 main.go:141] libmachine: Using API Version  1
	I0829 20:26:02.622460   67607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:02.622810   67607 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:02.623040   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:02.623201   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetState
	I0829 20:26:02.624854   67607 fix.go:112] recreateIfNeeded on old-k8s-version-032002: state=Stopped err=<nil>
	I0829 20:26:02.624880   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	W0829 20:26:02.625020   67607 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:02.627161   67607 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-032002" ...
	I0829 20:26:02.628419   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .Start
	I0829 20:26:02.628578   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring networks are active...
	I0829 20:26:02.629339   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network default is active
	I0829 20:26:02.629732   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network mk-old-k8s-version-032002 is active
	I0829 20:26:02.630188   67607 main.go:141] libmachine: (old-k8s-version-032002) Getting domain xml...
	I0829 20:26:02.630924   67607 main.go:141] libmachine: (old-k8s-version-032002) Creating domain...
	I0829 20:26:01.375542   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.375928   66989 main.go:141] libmachine: (embed-certs-388383) Found IP for machine: 192.168.61.202
	I0829 20:26:01.375951   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has current primary IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.375974   66989 main.go:141] libmachine: (embed-certs-388383) Reserving static IP address...
	I0829 20:26:01.376364   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "embed-certs-388383", mac: "52:54:00:6c:5a:0c", ip: "192.168.61.202"} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.376398   66989 main.go:141] libmachine: (embed-certs-388383) DBG | skip adding static IP to network mk-embed-certs-388383 - found existing host DHCP lease matching {name: "embed-certs-388383", mac: "52:54:00:6c:5a:0c", ip: "192.168.61.202"}
	I0829 20:26:01.376411   66989 main.go:141] libmachine: (embed-certs-388383) Reserved static IP address: 192.168.61.202
	I0829 20:26:01.376428   66989 main.go:141] libmachine: (embed-certs-388383) Waiting for SSH to be available...
	I0829 20:26:01.376445   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Getting to WaitForSSH function...
	I0829 20:26:01.378600   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.378899   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.378937   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.379065   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Using SSH client type: external
	I0829 20:26:01.379088   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa (-rw-------)
	I0829 20:26:01.379118   66989 main.go:141] libmachine: (embed-certs-388383) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:01.379132   66989 main.go:141] libmachine: (embed-certs-388383) DBG | About to run SSH command:
	I0829 20:26:01.379141   66989 main.go:141] libmachine: (embed-certs-388383) DBG | exit 0
	I0829 20:26:01.498736   66989 main.go:141] libmachine: (embed-certs-388383) DBG | SSH cmd err, output: <nil>: 
	I0829 20:26:01.499103   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetConfigRaw
	I0829 20:26:01.499700   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:01.502022   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.502332   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.502362   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.502586   66989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/config.json ...
	I0829 20:26:01.502778   66989 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:01.502795   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:01.502980   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.505156   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.505452   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.505473   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.505590   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.505739   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.505902   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.506038   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.506183   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.506366   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.506376   66989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:01.602691   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:01.602721   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.603002   66989 buildroot.go:166] provisioning hostname "embed-certs-388383"
	I0829 20:26:01.603033   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.603232   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.605841   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.606170   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.606201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.606333   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.606505   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.606672   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.606786   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.606950   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.607121   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.607144   66989 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-388383 && echo "embed-certs-388383" | sudo tee /etc/hostname
	I0829 20:26:01.717669   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-388383
	
	I0829 20:26:01.717709   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.720400   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.720705   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.720733   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.720863   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.721097   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.721280   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.721446   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.721585   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.721811   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.721842   66989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-388383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-388383/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-388383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:01.827800   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
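The shell fragment above is a guarded /etc/hosts update: it only touches the file if no line already ends with the hostname, rewriting an existing 127.0.1.1 entry when present and appending one otherwise. A sketch of how such a script can be templated from a hostname, purely illustrative and not minikube's actual templating code:

package main

import "fmt"

// hostsFixupScript renders the guarded /etc/hosts update for a hostname.
func hostsFixupScript(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
			fi
		fi`, name)
}

func main() { fmt.Println(hostsFixupScript("embed-certs-388383")) }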
	I0829 20:26:01.827835   66989 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:01.827869   66989 buildroot.go:174] setting up certificates
	I0829 20:26:01.827882   66989 provision.go:84] configureAuth start
	I0829 20:26:01.827894   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.828214   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:01.830619   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.831150   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.831184   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.831339   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.833642   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.833961   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.833987   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.834161   66989 provision.go:143] copyHostCerts
	I0829 20:26:01.834217   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:01.834241   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:01.834322   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:01.834445   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:01.834457   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:01.834491   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:01.834608   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:01.834621   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:01.834660   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:01.834726   66989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.embed-certs-388383 san=[127.0.0.1 192.168.61.202 embed-certs-388383 localhost minikube]
	I0829 20:26:01.992735   66989 provision.go:177] copyRemoteCerts
	I0829 20:26:01.992794   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:01.992819   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.995463   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.995835   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.995862   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.996006   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.996179   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.996333   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.996460   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.077017   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:02.105498   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 20:26:02.133974   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 20:26:02.161330   66989 provision.go:87] duration metric: took 333.435119ms to configureAuth
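configureAuth above generates a server certificate whose SAN list covers both IPs and hostnames (127.0.0.1, 192.168.61.202, embed-certs-388383, localhost, minikube), signed by the shared minikube CA. A self-contained sketch of issuing such a SAN certificate with the standard library; the in-memory CA, serial numbers, and lifetimes here are illustrative assumptions, and error handling is collapsed for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Illustrative in-memory CA standing in for the persisted minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-388383"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		// SANs mirroring the san=[...] list in the log.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.202")},
		DNSNames:    []string{"embed-certs-388383", "localhost", "minikube"},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server cert, %d DER bytes\n", len(der))
}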
	I0829 20:26:02.161362   66989 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:02.161579   66989 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:02.161707   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.164373   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.164696   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.164724   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.164909   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.165111   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.165276   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.165402   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.165535   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:02.165697   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:02.165711   66989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:02.377994   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:02.378022   66989 machine.go:96] duration metric: took 875.231112ms to provisionDockerMachine
	I0829 20:26:02.378037   66989 start.go:293] postStartSetup for "embed-certs-388383" (driver="kvm2")
	I0829 20:26:02.378053   66989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:02.378078   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.378404   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:02.378432   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.380920   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.381329   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.381358   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.381564   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.381797   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.381975   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.382124   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.461053   66989 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:02.465391   66989 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:02.465417   66989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:02.465479   66989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:02.465550   66989 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:02.465635   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:02.474909   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:02.500025   66989 start.go:296] duration metric: took 121.973853ms for postStartSetup
	I0829 20:26:02.500064   66989 fix.go:56] duration metric: took 20.86974885s for fixHost
	I0829 20:26:02.500082   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.502976   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.503380   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.503411   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.503599   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.503808   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.503976   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.504126   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.504283   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:02.504459   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:02.504469   66989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:02.603161   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963162.568310162
	
	I0829 20:26:02.603181   66989 fix.go:216] guest clock: 1724963162.568310162
	I0829 20:26:02.603187   66989 fix.go:229] Guest: 2024-08-29 20:26:02.568310162 +0000 UTC Remote: 2024-08-29 20:26:02.500067292 +0000 UTC m=+288.185978445 (delta=68.24287ms)
	I0829 20:26:02.603210   66989 fix.go:200] guest clock delta is within tolerance: 68.24287ms
	I0829 20:26:02.603216   66989 start.go:83] releasing machines lock for "embed-certs-388383", held for 20.972921408s
	I0829 20:26:02.603248   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.603532   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:02.606426   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.606804   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.606834   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.607021   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607527   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607694   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607770   66989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:02.607809   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.607878   66989 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:02.607896   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.610239   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610264   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610657   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.610685   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610723   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.610742   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610844   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.611014   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.611014   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.611145   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.611208   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.611268   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.611341   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.611399   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.712435   66989 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:02.718614   66989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:02.865138   66989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:02.871510   66989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:02.871593   66989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:02.887316   66989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:02.887340   66989 start.go:495] detecting cgroup driver to use...
	I0829 20:26:02.887394   66989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:02.905024   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:02.918922   66989 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:02.918986   66989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:02.932660   66989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:02.946679   66989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:03.056273   66989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:03.216885   66989 docker.go:233] disabling docker service ...
	I0829 20:26:03.216959   66989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:03.231363   66989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:03.245609   66989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:03.368087   66989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:03.493947   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:03.508803   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:03.527542   66989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:26:03.527607   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.538301   66989 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:03.538370   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.549672   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.562203   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.573572   66989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:03.585031   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.596778   66989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.619405   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
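The sed runs above rewrite the CRI-O drop-in in place: pin the pause image, force the cgroupfs cgroup manager, and manage the default_sysctls block. The same line-level rewrites can be done without sed; a sketch under the assumption that the drop-in lives at the path shown in the log and that the process runs as root:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Mirror of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Mirror of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Println(err)
	}
}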
	I0829 20:26:03.630337   66989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:03.640492   66989 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:03.640568   66989 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:03.657931   66989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
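The sequence above is a check-then-load fallback: the sysctl probe fails because /proc/sys/net/bridge does not exist until br_netfilter is loaded, so the module is loaded and ip_forward enabled afterwards. The same pattern in-process rather than over SSH, assuming root and the standard procfs paths:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const bridgeKey = "/proc/sys/net/bridge/bridge-nf-call-iptables"

func main() {
	if _, err := os.ReadFile(bridgeKey); os.IsNotExist(err) {
		// The module providing the bridge sysctls is not loaded yet.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v: %s\n", err, out)
			return
		}
	}
	// Mirror of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}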
	I0829 20:26:03.673756   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:03.792856   66989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:03.880493   66989 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:03.880551   66989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:03.885793   66989 start.go:563] Will wait 60s for crictl version
	I0829 20:26:03.885850   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:26:03.889835   66989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:03.928633   66989 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:03.928702   66989 ssh_runner.go:195] Run: crio --version
	I0829 20:26:03.958861   66989 ssh_runner.go:195] Run: crio --version
	I0829 20:26:03.987724   66989 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:26:03.989009   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:03.991889   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:03.992308   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:03.992334   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:03.992567   66989 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:03.996945   66989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:04.009353   66989 kubeadm.go:883] updating cluster {Name:embed-certs-388383 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:04.009462   66989 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:26:04.009501   66989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:04.051583   66989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:26:04.051643   66989 ssh_runner.go:195] Run: which lz4
	I0829 20:26:04.055929   66989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:04.060214   66989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:04.060240   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 20:26:03.867691   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting to get IP...
	I0829 20:26:03.868798   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:03.869246   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:03.869318   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:03.869235   68552 retry.go:31] will retry after 220.928648ms: waiting for machine to come up
	I0829 20:26:04.091675   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.092057   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.092084   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.092020   68552 retry.go:31] will retry after 352.781755ms: waiting for machine to come up
	I0829 20:26:04.446766   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.447277   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.447301   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.447224   68552 retry.go:31] will retry after 480.96031ms: waiting for machine to come up
	I0829 20:26:04.929561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.930149   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.930181   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.930051   68552 retry.go:31] will retry after 415.057247ms: waiting for machine to come up
	I0829 20:26:05.346757   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.347224   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.347258   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.347196   68552 retry.go:31] will retry after 609.958508ms: waiting for machine to come up
	I0829 20:26:05.959227   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.959774   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.959825   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.959702   68552 retry.go:31] will retry after 680.801337ms: waiting for machine to come up
	I0829 20:26:06.642811   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:06.643312   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:06.643343   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:06.643269   68552 retry.go:31] will retry after 995.561322ms: waiting for machine to come up
	I0829 20:26:07.640147   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:07.640617   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:07.640652   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:07.640588   68552 retry.go:31] will retry after 1.22436043s: waiting for machine to come up
	I0829 20:26:05.472272   66989 crio.go:462] duration metric: took 1.416373513s to copy over tarball
	I0829 20:26:05.472355   66989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:07.583560   66989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.111164398s)
	I0829 20:26:07.583595   66989 crio.go:469] duration metric: took 2.111297179s to extract the tarball
	I0829 20:26:07.583605   66989 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 20:26:07.622447   66989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:07.671704   66989 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:26:07.671732   66989 cache_images.go:84] Images are preloaded, skipping loading
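The preload decision above hinges on parsing `sudo crictl images --output json` and checking for a required repo tag (first run: kube-apiserver missing, so the tarball is copied and extracted; second run: all images present). A sketch of that check; the JSON shape (a top-level "images" array with "repoTags") follows crictl's observed output and should be treated as an assumption rather than a stable API.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already holds the given repo tag.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0")
	fmt.Println(ok, err)
}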
	I0829 20:26:07.671742   66989 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.31.0 crio true true} ...
	I0829 20:26:07.671869   66989 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-388383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
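The kubelet unit above is a systemd override: in a drop-in, the bare `ExecStart=` line clears the ExecStart inherited from the base unit before the next line supplies the replacement, which is why it appears twice. A sketch of writing such a drop-in, mirroring the path and flags from the log (bootstrap flags omitted); requires root.

package main

import (
	"fmt"
	"os"
)

func main() {
	dropIn := `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-388383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202

[Install]
`
	dir := "/etc/systemd/system/kubelet.service.d"
	if err := os.MkdirAll(dir, 0755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
		fmt.Println(err)
	}
}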
	I0829 20:26:07.671958   66989 ssh_runner.go:195] Run: crio config
	I0829 20:26:07.717217   66989 cni.go:84] Creating CNI manager for ""
	I0829 20:26:07.717242   66989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:07.717263   66989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:07.717290   66989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-388383 NodeName:embed-certs-388383 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:26:07.717465   66989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-388383"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:07.717549   66989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:26:07.727174   66989 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:07.727258   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:07.736512   66989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0829 20:26:07.752727   66989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:07.772430   66989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
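The kubeadm.yaml written above is a single stream holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A consumer can split such a stream without a YAML library; a minimal sketch with a stand-in stream:

package main

import (
	"fmt"
	"strings"
)

// splitDocs separates a multi-document YAML stream on the `---` marker.
func splitDocs(stream string) []string {
	var docs []string
	for _, d := range strings.Split(stream, "\n---\n") {
		if s := strings.TrimSpace(d); s != "" {
			docs = append(docs, s)
		}
	}
	return docs
}

func main() {
	stream := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n"
	for i, d := range splitDocs(stream) {
		fmt.Printf("doc %d: %s\n", i, d)
	}
}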
	I0829 20:26:07.793343   66989 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:07.798214   66989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:07.811285   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:07.927025   66989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:07.943741   66989 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383 for IP: 192.168.61.202
	I0829 20:26:07.943765   66989 certs.go:194] generating shared ca certs ...
	I0829 20:26:07.943784   66989 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:07.943984   66989 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:07.944047   66989 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:07.944061   66989 certs.go:256] generating profile certs ...
	I0829 20:26:07.944177   66989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/client.key
	I0829 20:26:07.944254   66989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.key.03b29390
	I0829 20:26:07.944317   66989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.key
	I0829 20:26:07.944494   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:07.944538   66989 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:07.944551   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:07.944581   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:07.944605   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:07.944628   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:07.944670   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:07.945252   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:07.971277   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:08.012892   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:08.042038   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:08.067708   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0829 20:26:08.095930   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:26:08.127171   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:08.151287   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:26:08.175525   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:08.199076   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:08.222783   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:08.245783   66989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:08.261839   66989 ssh_runner.go:195] Run: openssl version
	I0829 20:26:08.267545   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:08.278347   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.284232   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.284283   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.292024   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:08.306831   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:08.320607   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.325027   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.325070   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.330808   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:08.341457   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:08.352323   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.356822   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.356891   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.362617   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
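The `ln -fs` steps above implement OpenSSL's hash-directory convention: each CA in /etc/ssl/certs is reachable via a symlink named after its subject hash (e.g. b5213941.0), which is what `openssl x509 -hash -noout` prints. A sketch of the same rehash step as one helper; needs root and the openssl CLI, and the path in main is a placeholder.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// rehash points /etc/ssl/certs/<subject-hash>.0 at the given CA file.
func rehash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // replace any stale link, mirroring `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(rehash("/usr/share/ca-certificates/minikubeCA.pem"))
}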
	I0829 20:26:08.373755   66989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:08.378153   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:08.384225   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:08.390136   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:08.396002   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:08.401713   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:08.407437   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
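Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours. The same freshness test in-process, using only the standard library; the path in main is a placeholder.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// inside the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}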
	I0829 20:26:08.413033   66989 kubeadm.go:392] StartCluster: {Name:embed-certs-388383 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:08.413119   66989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:08.413173   66989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:08.450685   66989 cri.go:89] found id: ""
	I0829 20:26:08.450757   66989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:08.460787   66989 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:08.460809   66989 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:08.460853   66989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:08.470179   66989 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:08.471673   66989 kubeconfig.go:125] found "embed-certs-388383" server: "https://192.168.61.202:8443"
	I0829 20:26:08.474839   66989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:08.483951   66989 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0829 20:26:08.483992   66989 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:08.484007   66989 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:08.484085   66989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:08.525947   66989 cri.go:89] found id: ""
	I0829 20:26:08.526013   66989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:08.541862   66989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:08.551179   66989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:08.551200   66989 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:08.551249   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:26:08.559897   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:08.559970   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:08.569317   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:26:08.577858   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:08.577905   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:08.587113   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:26:08.595645   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:08.595705   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:08.604803   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:26:08.613070   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:08.613125   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
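
The four grep/rm exchanges above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and deleted when the check fails (here every file is simply absent), so the kubeadm phases that follow regenerate all of them. A minimal sketch of the same check-then-remove pattern, assuming plain local execution instead of minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443" // from the log above
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		// A missing file and a missing endpoint both mean the config is stale
    		// (or absent): remove it so `kubeadm init phase kubeconfig` rewrites it.
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			_ = os.Remove(f) // ignore "not exist" errors, mirroring `rm -f`
    			fmt.Printf("removed (or absent): %s\n", f)
    		}
    	}
    }
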
	I0829 20:26:08.622037   66989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:08.631330   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:08.742682   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:08.866518   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:08.866954   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:08.866985   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:08.866896   68552 retry.go:31] will retry after 1.707701085s: waiting for machine to come up
	I0829 20:26:10.576676   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:10.577094   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:10.577124   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:10.577047   68552 retry.go:31] will retry after 1.496799212s: waiting for machine to come up
	I0829 20:26:12.075964   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:12.076412   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:12.076451   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:12.076377   68552 retry.go:31] will retry after 2.246779697s: waiting for machine to come up
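
Interleaved with the embed-certs restart, process 67607 is still waiting for the old-k8s-version-032002 VM to obtain a DHCP lease; retry.go re-polls with a jittered, growing delay, which is why the intervals above (1.7s, 1.5s, 2.2s, ...) are not monotonic. A small self-contained sketch of that retry shape (the growth factor and jitter range are assumptions, not minikube's exact parameters):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry calls fn until it succeeds, sleeping a jittered, growing interval
    // between attempts, the same shape as the retry.go lines in the log.
    func retry(fn func() error, attempts int) error {
    	base := time.Second
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		d := base + time.Duration(rand.Int63n(int64(base))) // jitter in [base, 2*base)
    		fmt.Printf("will retry after %v\n", d)
    		time.Sleep(d)
    		base = base * 3 / 2 // grow roughly 1.5x per attempt
    	}
    	return errors.New("gave up waiting for machine to come up")
    }

    func main() {
    	i := 0
    	_ = retry(func() error {
    		i++
    		if i < 4 {
    			return errors.New("unable to find current IP address") // as in the log
    		}
    		return nil
    	}, 10)
    }
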
	I0829 20:26:09.809078   66989 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.066360218s)
	I0829 20:26:09.809118   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.027517   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.095959   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
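
For a restart, minikube does not run a full kubeadm init; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the kubeadm.yaml copied into place at 20:26:08.622. A sketch of that sequence, assuming kubeadm is on PATH rather than under /var/lib/minikube/binaries/v1.31.0:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Phase order taken from the Run lines in the log above.
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		out, err := exec.Command("kubeadm", args...).CombinedOutput()
    		if err != nil {
    			log.Fatalf("kubeadm %v failed: %v\n%s", p, err, out)
    		}
    	}
    }
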
	I0829 20:26:10.199656   66989 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:10.199745   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:10.700569   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:11.200798   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:11.700664   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.200052   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.700839   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.715319   66989 api_server.go:72] duration metric: took 2.515661322s to wait for apiserver process to appear ...
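
The repeated pgrep runs above poll on a roughly 500 ms cadence until a kube-apiserver process whose command line mentions minikube shows up. An equivalent standalone loop, assuming local execution and an arbitrary 2-minute cap:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // cap is an assumption for the sketch
    	for time.Now().Before(deadline) {
    		// -x: whole-command-line match, -n: newest process, -f: full argument list
    		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			fmt.Printf("apiserver pid: %s", out)
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
    	}
    	log.Fatal("kube-apiserver process never appeared")
    }
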
	I0829 20:26:12.715351   66989 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:26:12.715374   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.687527   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:15.687558   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:15.687572   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.716339   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:15.716365   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:15.716378   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.750700   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:15.750732   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:16.216255   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:16.224376   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:16.224401   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:16.715457   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:16.723983   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:16.724004   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:17.215562   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:17.219605   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0829 20:26:17.225473   66989 api_server.go:141] control plane version: v1.31.0
	I0829 20:26:17.225496   66989 api_server.go:131] duration metric: took 4.510137186s to wait for apiserver health ...
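
The healthz progression above is the normal shape of an apiserver restart: 403 while the anonymous probe is still rejected, then 500 while individual poststarthooks (service-ip-repair, rbac/bootstrap-roles, bootstrap-controller, ...) finish, and finally 200. A sketch of the probe itself, skipping verification of the cluster's self-signed serving cert the way an anonymous curl -k would:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver presents a cluster-local cert; skip verification for the probe.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.61.202:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // "ok", as at 20:26:17.219 above
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("apiserver never became healthy")
    }
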
	I0829 20:26:17.225504   66989 cni.go:84] Creating CNI manager for ""
	I0829 20:26:17.225509   66989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:17.227379   66989 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:26:14.324452   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:14.324770   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:14.324808   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:14.324748   68552 retry.go:31] will retry after 3.172592587s: waiting for machine to come up
	I0829 20:26:17.500203   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:17.500540   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:17.500573   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:17.500485   68552 retry.go:31] will retry after 2.81386002s: waiting for machine to come up
	I0829 20:26:17.228505   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:26:17.238762   66989 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
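
Bridge CNI setup is just two steps: create /etc/cni/net.d and drop in a conflist. The 496-byte file itself is not reproduced in the log, so the values below are typical bridge-plugin defaults, illustrative only:

    package main

    import (
    	"log"
    	"os"
    )

    // Illustrative only: the conflist minikube generates is not shown in the log;
    // these are common bridge/host-local defaults, not the actual file contents.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
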
	I0829 20:26:17.264380   66989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:26:17.274981   66989 system_pods.go:59] 8 kube-system pods found
	I0829 20:26:17.275009   66989 system_pods.go:61] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:26:17.275016   66989 system_pods.go:61] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:26:17.275023   66989 system_pods.go:61] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:26:17.275028   66989 system_pods.go:61] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:26:17.275033   66989 system_pods.go:61] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:26:17.275038   66989 system_pods.go:61] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:26:17.275043   66989 system_pods.go:61] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:26:17.275048   66989 system_pods.go:61] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:26:17.275056   66989 system_pods.go:74] duration metric: took 10.656426ms to wait for pod list to return data ...
	I0829 20:26:17.275074   66989 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:26:17.279480   66989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:26:17.279504   66989 node_conditions.go:123] node cpu capacity is 2
	I0829 20:26:17.279519   66989 node_conditions.go:105] duration metric: took 4.439469ms to run NodePressure ...
	I0829 20:26:17.279537   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:17.561282   66989 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:26:17.565287   66989 kubeadm.go:739] kubelet initialised
	I0829 20:26:17.565307   66989 kubeadm.go:740] duration metric: took 4.002605ms waiting for restarted kubelet to initialise ...
	I0829 20:26:17.565314   66989 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:17.570104   66989 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.576425   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.576454   66989 pod_ready.go:82] duration metric: took 6.324083ms for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.576464   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.576474   66989 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.582501   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "etcd-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.582523   66989 pod_ready.go:82] duration metric: took 6.040325ms for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.582547   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "etcd-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.582556   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.588534   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.588554   66989 pod_ready.go:82] duration metric: took 5.988678ms for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.588562   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.588568   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.668334   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.668365   66989 pod_ready.go:82] duration metric: took 79.787211ms for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.668378   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.668386   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.068248   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-proxy-fcxs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.068286   66989 pod_ready.go:82] duration metric: took 399.880238ms for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.068299   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-proxy-fcxs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.068308   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.468096   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.468126   66989 pod_ready.go:82] duration metric: took 399.810823ms for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.468134   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.468141   66989 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.868444   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.868478   66989 pod_ready.go:82] duration metric: took 400.329102ms for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.868490   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.868499   66989 pod_ready.go:39] duration metric: took 1.303176044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
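
Every pod_ready wait above short-circuits with the same message: the hosting node still reports Ready=False, and while the node is not Ready, per-pod Ready conditions are not meaningful, so each check is skipped rather than failed. The node-side test looks roughly like this with client-go (reading the kubeconfig path from the KUBECONFIG environment variable is an assumption of the sketch):

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "embed-certs-388383", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Mirror the log: while the node is not Ready, skip per-pod Ready checks.
    	fmt.Println("node Ready:", nodeReady(node))
    }
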
	I0829 20:26:18.868519   66989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:26:18.880892   66989 ops.go:34] apiserver oom_adj: -16
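
ops.go then confirms the apiserver's OOM score adjustment: -16 tells the kernel OOM killer to strongly prefer other victims. A standalone version of the same check (the legacy oom_adj file is what the log reads; modern kernels also expose oom_score_adj):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
    	if err != nil {
    		log.Fatal(err)
    	}
    	// -16, as in the log, biases the OOM killer away from the apiserver.
    	fmt.Printf("apiserver oom_adj: %s", data)
    }
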
	I0829 20:26:18.880916   66989 kubeadm.go:597] duration metric: took 10.42010114s to restartPrimaryControlPlane
	I0829 20:26:18.880925   66989 kubeadm.go:394] duration metric: took 10.467899141s to StartCluster
	I0829 20:26:18.880946   66989 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:18.881032   66989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:26:18.884130   66989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:18.884619   66989 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:26:18.884674   66989 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:26:18.884749   66989 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-388383"
	I0829 20:26:18.884765   66989 addons.go:69] Setting default-storageclass=true in profile "embed-certs-388383"
	I0829 20:26:18.884783   66989 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-388383"
	W0829 20:26:18.884792   66989 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:26:18.884804   66989 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-388383"
	I0829 20:26:18.884816   66989 addons.go:69] Setting metrics-server=true in profile "embed-certs-388383"
	I0829 20:26:18.884828   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.884856   66989 addons.go:234] Setting addon metrics-server=true in "embed-certs-388383"
	W0829 20:26:18.884877   66989 addons.go:243] addon metrics-server should already be in state true
	I0829 20:26:18.884884   66989 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:18.884912   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.885134   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885176   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.885216   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885249   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.885291   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885338   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.886484   66989 out.go:177] * Verifying Kubernetes components...
	I0829 20:26:18.887938   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:18.900910   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I0829 20:26:18.901377   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.901917   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.901938   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.902300   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.903062   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.903110   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.903810   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0829 20:26:18.903824   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0829 20:26:18.904282   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.904303   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.904673   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.904691   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.904829   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.904845   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.905017   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.905428   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.905462   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.905664   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.905860   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.909388   66989 addons.go:234] Setting addon default-storageclass=true in "embed-certs-388383"
	W0829 20:26:18.909408   66989 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:26:18.909437   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.909793   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.909839   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.921180   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
	I0829 20:26:18.921597   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.922074   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.922087   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.922470   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.922697   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.922725   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I0829 20:26:18.923052   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.923592   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.923610   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.923919   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.924057   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.924063   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45681
	I0829 20:26:18.924461   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.924519   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.924984   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.925002   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.925632   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.925682   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.926152   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.926194   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.926494   66989 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:18.927266   66989 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:26:18.928130   66989 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:26:18.928141   66989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:26:18.928155   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.928843   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:26:18.928863   66989 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:26:18.928888   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.931716   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932273   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.932296   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932424   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932456   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.932644   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.932810   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.932869   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.932891   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.933050   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.933100   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:18.933271   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.933426   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.933598   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:18.942718   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38109
	I0829 20:26:18.943150   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.943532   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.943553   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.943908   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.944027   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.945304   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.945498   66989 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:26:18.945510   66989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:26:18.945522   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.948108   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.948469   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.948494   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.948730   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.948889   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.949085   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.949222   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:19.111953   66989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:19.131195   66989 node_ready.go:35] waiting up to 6m0s for node "embed-certs-388383" to be "Ready" ...
	I0829 20:26:19.246857   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:26:19.269511   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:26:19.269670   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:26:19.269691   66989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:26:19.346200   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:26:19.346234   66989 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:26:19.374530   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:26:19.374566   66989 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:26:19.418474   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:26:20.495022   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.225476769s)
	I0829 20:26:20.495077   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495090   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495185   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.248286753s)
	I0829 20:26:20.495232   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495249   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495572   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.495600   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.495611   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495619   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495634   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.495663   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.495664   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.495678   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495688   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.496014   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.496029   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.496061   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.496097   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.496111   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.504149   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.504182   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.504419   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.504436   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.519341   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.100829284s)
	I0829 20:26:20.519396   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.519422   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.519670   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.519716   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.519734   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.519746   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.519755   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.520040   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.520055   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.520072   66989 addons.go:475] Verifying addon metrics-server=true in "embed-certs-388383"
	I0829 20:26:20.523102   66989 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
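
Each addon is installed by scp-ing its manifests into /etc/kubernetes/addons and applying them with the cluster's own kubectl, pointed at the in-VM kubeconfig. A sketch of the metrics-server apply seen at 20:26:19.418, assuming the same in-VM paths:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0/kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
    	}
    	log.Printf("%s", out)
    }
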
	I0829 20:26:21.515365   68084 start.go:364] duration metric: took 2m4.795762476s to acquireMachinesLock for "default-k8s-diff-port-145096"
	I0829 20:26:21.515428   68084 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:21.515439   68084 fix.go:54] fixHost starting: 
	I0829 20:26:21.515864   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:21.515904   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:21.535441   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0829 20:26:21.535886   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:21.536390   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:26:21.536414   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:21.536819   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:21.537035   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:21.537203   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:26:21.538735   68084 fix.go:112] recreateIfNeeded on default-k8s-diff-port-145096: state=Stopped err=<nil>
	I0829 20:26:21.538762   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	W0829 20:26:21.538901   68084 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:21.540852   68084 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-145096" ...
	I0829 20:26:21.542258   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Start
	I0829 20:26:21.542429   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring networks are active...
	I0829 20:26:21.543181   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring network default is active
	I0829 20:26:21.543522   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring network mk-default-k8s-diff-port-145096 is active
	I0829 20:26:21.543872   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Getting domain xml...
	I0829 20:26:21.544627   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Creating domain...
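
Restarting a stopped kvm2 machine means reactivating its libvirt networks and booting the domain again. The driver does this through the libvirt API at qemu:///system; the virsh commands below are only an approximate CLI rendering of the same steps:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// "Ensuring networks are active": net-start fails harmlessly when the
    	// network is already running, so those errors are deliberately ignored.
    	_ = exec.Command("virsh", "net-start", "default").Run()
    	_ = exec.Command("virsh", "net-start", "mk-default-k8s-diff-port-145096").Run()

    	// Boot the stopped domain, as libmachine's .Start does via the libvirt API.
    	if out, err := exec.Command("virsh", "start", "default-k8s-diff-port-145096").CombinedOutput(); err != nil {
    		log.Fatalf("virsh start: %v\n%s", err, out)
    	}
    }
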
	I0829 20:26:20.317138   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317672   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has current primary IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317700   67607 main.go:141] libmachine: (old-k8s-version-032002) Found IP for machine: 192.168.39.116
	I0829 20:26:20.317716   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserving static IP address...
	I0829 20:26:20.318143   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.318169   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserved static IP address: 192.168.39.116
	I0829 20:26:20.318189   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | skip adding static IP to network mk-old-k8s-version-032002 - found existing host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"}
	I0829 20:26:20.318208   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Getting to WaitForSSH function...
	I0829 20:26:20.318217   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting for SSH to be available...
	I0829 20:26:20.320598   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.320961   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.320989   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.321082   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH client type: external
	I0829 20:26:20.321121   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa (-rw-------)
	I0829 20:26:20.321156   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:20.321171   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | About to run SSH command:
	I0829 20:26:20.321185   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | exit 0
	I0829 20:26:20.446805   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | SSH cmd err, output: <nil>: 
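
The "exit 0" run above is libmachine's SSH liveness probe: the external ssh binary is invoked with host-key checking disabled and a 10s connect timeout, and a zero exit status means the guest's sshd is up and accepting the key. A minimal Go sketch of that pattern (the sshReady helper and the polling loop are illustrative, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady runs `exit 0` on the guest via the system ssh binary.
    // A nil error means sshd accepted the connection and ran the command.
    func sshReady(ip, keyPath string) bool {
    	cmd := exec.Command("ssh",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-i", keyPath,
    		"docker@"+ip, "exit 0")
    	return cmd.Run() == nil
    }

    func main() {
    	for i := 0; i < 30; i++ { // poll until the guest answers
    		if sshReady("192.168.39.116", "/path/to/id_rsa") {
    			fmt.Println("SSH available")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for SSH")
    }
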
	I0829 20:26:20.447204   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetConfigRaw
	I0829 20:26:20.447944   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.450726   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.451160   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451464   67607 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/config.json ...
	I0829 20:26:20.451670   67607 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:20.451690   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:20.451886   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.454120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454496   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.454566   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454648   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.454808   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.454975   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.455123   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.455282   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.455520   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.455533   67607 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:20.555074   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:20.555100   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555331   67607 buildroot.go:166] provisioning hostname "old-k8s-version-032002"
	I0829 20:26:20.555353   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555540   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.558576   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559058   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.559086   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559273   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.559490   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559661   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559834   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.560026   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.560189   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.560201   67607 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-032002 && echo "old-k8s-version-032002" | sudo tee /etc/hostname
	I0829 20:26:20.675352   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-032002
	
	I0829 20:26:20.675400   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.678472   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.678908   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.678944   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.679139   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.679341   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679533   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679710   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.679884   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.680090   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.680108   67607 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-032002' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-032002/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-032002' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:20.789673   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
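
The shell fragment above keeps /etc/hosts in sync with the new hostname idempotently: if no entry already names the host, it rewrites an existing 127.0.1.1 line or appends one, so repeated provisioning runs never duplicate entries. A sketch of how such a per-host script can be generated (illustrative; minikube inlines the template in Go):

    package main

    import "fmt"

    // hostsScript returns a shell fragment that makes /etc/hosts map
    // 127.0.1.1 to the given hostname exactly once.
    func hostsScript(hostname string) string {
    	return fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname)
    }

    func main() {
    	fmt.Println(hostsScript("old-k8s-version-032002"))
    }
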
	I0829 20:26:20.789713   67607 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:20.789744   67607 buildroot.go:174] setting up certificates
	I0829 20:26:20.789753   67607 provision.go:84] configureAuth start
	I0829 20:26:20.789761   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.790067   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.792822   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793152   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.793173   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793338   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.795624   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.795948   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.795974   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.796080   67607 provision.go:143] copyHostCerts
	I0829 20:26:20.796148   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:20.796168   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:20.796236   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:20.796344   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:20.796355   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:20.796387   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:20.796467   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:20.796476   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:20.796503   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:20.796573   67607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-032002 san=[127.0.0.1 192.168.39.116 localhost minikube old-k8s-version-032002]
	I0829 20:26:20.906382   67607 provision.go:177] copyRemoteCerts
	I0829 20:26:20.906436   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:20.906466   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.909180   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909488   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.909519   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909666   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.909831   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.909963   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.910062   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:20.989017   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:21.018571   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 20:26:21.043015   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:21.067288   67607 provision.go:87] duration metric: took 277.522292ms to configureAuth
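
configureAuth refreshes the local CA, client cert, and key, then mints a server certificate whose SANs (the san=[...] list above) cover every address the machine answers to, before scp'ing it into /etc/docker on the guest. A compact Go sketch of SAN-bearing cert generation with crypto/x509 (self-signed here for brevity; minikube signs with its CA key):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-032002"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-032002"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.116")},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
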
	I0829 20:26:21.067322   67607 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:21.067527   67607 config.go:182] Loaded profile config "old-k8s-version-032002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 20:26:21.067607   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.070264   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070642   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.070679   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070881   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.071088   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071288   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071465   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.071661   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.071886   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.071923   67607 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:21.290979   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:21.291003   67607 machine.go:96] duration metric: took 839.319831ms to provisionDockerMachine
	I0829 20:26:21.291014   67607 start.go:293] postStartSetup for "old-k8s-version-032002" (driver="kvm2")
	I0829 20:26:21.291026   67607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:21.291046   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.291342   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:21.291366   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.293946   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294245   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.294273   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294464   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.294686   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.294840   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.294964   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.373592   67607 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:21.377797   67607 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:21.377826   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:21.377892   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:21.377966   67607 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:21.378054   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:21.387886   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:21.413456   67607 start.go:296] duration metric: took 122.429334ms for postStartSetup
	I0829 20:26:21.413497   67607 fix.go:56] duration metric: took 18.810093949s for fixHost
	I0829 20:26:21.413522   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.416095   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416391   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.416418   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416594   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.416803   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.416970   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.417115   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.417272   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.417474   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.417489   67607 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:21.515167   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963181.486447470
	
	I0829 20:26:21.515190   67607 fix.go:216] guest clock: 1724963181.486447470
	I0829 20:26:21.515200   67607 fix.go:229] Guest: 2024-08-29 20:26:21.48644747 +0000 UTC Remote: 2024-08-29 20:26:21.413502498 +0000 UTC m=+222.629982255 (delta=72.944972ms)
	I0829 20:26:21.515225   67607 fix.go:200] guest clock delta is within tolerance: 72.944972ms
	I0829 20:26:21.515232   67607 start.go:83] releasing machines lock for "old-k8s-version-032002", held for 18.911866017s
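
The fix path reads the guest clock with date +%s.%N and proceeds only when the guest/host delta stays within tolerance (72.9ms here); a larger skew would trigger a time resync before continuing. A minimal sketch of the delta computation (the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns
    // how far the guest clock lags (or leads) the local host clock.
    func clockDelta(guestOut string) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	// minikube compares this delta against a small tolerance
    	// before trusting the guest clock.
    	return time.Since(guest), nil
    }

    func main() {
    	d, _ := clockDelta("1724963181.486447470")
    	fmt.Println("guest-host clock delta:", d)
    }
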
	I0829 20:26:21.515278   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.515596   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:21.518247   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518682   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.518710   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518835   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519589   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519680   67607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:21.519736   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.519843   67607 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:21.519869   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.522261   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522614   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.522643   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522763   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.522919   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523044   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.523071   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523073   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.523241   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.523240   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.523413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523560   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523712   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.599524   67607 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:21.629122   67607 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:21.778437   67607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:21.784642   67607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:21.784714   67607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:21.802019   67607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:21.802043   67607 start.go:495] detecting cgroup driver to use...
	I0829 20:26:21.802100   67607 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:21.817407   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:21.831514   67607 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:21.831578   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:21.845224   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:21.858522   67607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:21.972769   67607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:22.115154   67607 docker.go:233] disabling docker service ...
	I0829 20:26:22.115240   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:22.130015   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:22.143186   67607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:22.294113   67607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:22.432373   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
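
Because the profile pins ContainerRuntime=crio, provisioning stops, disables, and masks the competing runtimes (containerd, cri-dockerd, docker) so CRI-O alone owns the CRI socket. The same sequencing, sketched as a standalone Go program (run locally via sh -c instead of minikube's SSH runner; individual failures are tolerated, as in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Commands that hand the CRI socket exclusively to CRI-O, mirroring the log.
    var disableCmds = []string{
    	"sudo systemctl stop -f containerd",
    	"sudo systemctl stop -f cri-docker.socket",
    	"sudo systemctl stop -f cri-docker.service",
    	"sudo systemctl disable cri-docker.socket",
    	"sudo systemctl mask cri-docker.service",
    	"sudo systemctl stop -f docker.socket",
    	"sudo systemctl stop -f docker.service",
    	"sudo systemctl disable docker.socket",
    	"sudo systemctl mask docker.service",
    }

    func main() {
    	for _, c := range disableCmds {
    		// Report failures and keep going; a unit that is already
    		// stopped or absent is not an error for this purpose.
    		if err := exec.Command("sh", "-c", c).Run(); err != nil {
    			fmt.Printf("%q failed: %v\n", c, err)
    		}
    	}
    }
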
	I0829 20:26:22.446427   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:22.465151   67607 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 20:26:22.465218   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.476104   67607 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:22.476177   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.486627   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.497782   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.509869   67607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:22.521347   67607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:22.531406   67607 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:22.531455   67607 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:22.544949   67607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
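
The status-255 sysctl above simply means br_netfilter is not loaded yet, so bridged traffic would bypass iptables; minikube loads the module and enables IPv4 forwarding, both prerequisites for the bridge CNI it selected earlier. The equivalent kernel prep, sketched in Go (requires root):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Ensure bridged traffic traverses iptables and IPv4 forwarding is on,
    // mirroring the provisioning steps in the log.
    func main() {
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		// The sysctl key is absent: br_netfilter is not loaded yet.
    		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    			fmt.Println("modprobe br_netfilter failed:", err)
    			return
    		}
    	}
    	if err := exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
    		fmt.Println("enabling ip_forward failed:", err)
    	}
    }
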
	I0829 20:26:22.554918   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:22.687909   67607 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:22.808522   67607 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:22.808595   67607 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:22.814348   67607 start.go:563] Will wait 60s for crictl version
	I0829 20:26:22.814411   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:22.818348   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:22.863797   67607 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
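
After the restart, minikube waits up to 60s for the CRI socket to appear and then up to 60s for crictl to report a version before declaring the runtime ready. A minimal polling sketch (socket path and timeout mirror the log; the helper is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // waitForCRI polls until the CRI socket exists and crictl can talk to it.
    func waitForCRI(sock string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(sock); err == nil {
    			if exec.Command("crictl", "--runtime-endpoint", "unix://"+sock, "version").Run() == nil {
    				return nil
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("CRI not ready after %v", timeout)
    }

    func main() {
    	if err := waitForCRI("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
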
	I0829 20:26:22.863883   67607 ssh_runner.go:195] Run: crio --version
	I0829 20:26:22.893173   67607 ssh_runner.go:195] Run: crio --version
	I0829 20:26:22.923146   67607 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 20:26:22.924299   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:22.927222   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927564   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:22.927589   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927772   67607 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:22.932100   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:22.945139   67607 kubeadm.go:883] updating cluster {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:22.945274   67607 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 20:26:22.945334   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:22.990592   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:22.990668   67607 ssh_runner.go:195] Run: which lz4
	I0829 20:26:22.995104   67607 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:22.999667   67607 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:22.999703   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0829 20:26:20.524280   66989 addons.go:510] duration metric: took 1.639608208s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0829 20:26:21.135090   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:23.136839   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:22.825998   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting to get IP...
	I0829 20:26:22.827278   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:22.827766   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:22.827883   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:22.827750   68757 retry.go:31] will retry after 212.207753ms: waiting for machine to come up
	I0829 20:26:23.041113   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.041553   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.041588   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.041508   68757 retry.go:31] will retry after 291.9464ms: waiting for machine to come up
	I0829 20:26:23.335081   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.336072   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.336121   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.336041   68757 retry.go:31] will retry after 478.578755ms: waiting for machine to come up
	I0829 20:26:23.816669   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.817178   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.817233   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.817087   68757 retry.go:31] will retry after 501.093836ms: waiting for machine to come up
	I0829 20:26:24.319836   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.320392   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.320418   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:24.320343   68757 retry.go:31] will retry after 524.430407ms: waiting for machine to come up
	I0829 20:26:24.846908   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.847388   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.847418   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:24.847361   68757 retry.go:31] will retry after 701.573237ms: waiting for machine to come up
	I0829 20:26:25.550328   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:25.550786   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:25.550811   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:25.550727   68757 retry.go:31] will retry after 916.084079ms: waiting for machine to come up
	I0829 20:26:26.468529   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:26.468981   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:26.469012   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:26.468921   68757 retry.go:31] will retry after 1.216322833s: waiting for machine to come up
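
While the default-k8s-diff-port VM boots, libmachine polls libvirt's DHCP leases for the domain's address, sleeping a growing, jittered interval between attempts (212ms, 291ms, 478ms, ... above). A generic sketch of that retry shape (parameters are illustrative, not libmachine's):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff calls fn until it succeeds or attempts run out,
    // sleeping a jittered, growing interval between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
    		time.Sleep(d)
    	}
    	return errors.New("machine never reported an IP")
    }

    func main() {
    	_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
    		return errors.New("unable to find current IP address") // stand-in for the DHCP lease lookup
    	})
    }
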
	I0829 20:26:24.727216   67607 crio.go:462] duration metric: took 1.732148589s to copy over tarball
	I0829 20:26:24.727294   67607 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:27.715640   67607 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.988318238s)
	I0829 20:26:27.715664   67607 crio.go:469] duration metric: took 2.988419957s to extract the tarball
	I0829 20:26:27.715672   67607 ssh_runner.go:146] rm: /preloaded.tar.lz4
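
That is the whole preload path: stat /preloaded.tar.lz4 on the guest, scp the ~473 MB cached tarball over when it is absent, extract it into /var with security xattrs preserved (so image binaries keep their file capabilities), then delete the tarball. The extract step, sketched as a local Go equivalent (requires lz4 and root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	// Mirrors the log: unpack the lz4-compressed image preload into /var,
    	// keeping security.capability xattrs intact.
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Println("extract failed:", err)
    		return
    	}
    	fmt.Printf("extracted in %s\n", time.Since(start))
    }
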
	I0829 20:26:27.764192   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:27.797388   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:27.797422   67607 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:26:27.797501   67607 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.797536   67607 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.797549   67607 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.797557   67607 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 20:26:27.797511   67607 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:27.797629   67607 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.797637   67607 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.797519   67607 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799128   67607 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799208   67607 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.799251   67607 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 20:26:27.799361   67607 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.799386   67607 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.799463   67607 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.799697   67607 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.799830   67607 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:27.978022   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.978296   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.981616   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.998987   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.001078   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.004185   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.004672   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 20:26:28.103885   67607 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 20:26:28.103953   67607 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.104013   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.122203   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:28.129983   67607 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 20:26:28.130028   67607 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.130076   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.165427   67607 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 20:26:28.165470   67607 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.165521   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.199971   67607 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 20:26:28.199990   67607 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 20:26:28.200015   67607 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.200021   67607 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200105   67607 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 20:26:28.200155   67607 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.200199   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200204   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200113   67607 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 20:26:28.200325   67607 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 20:26:28.200356   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.329091   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.329139   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.329187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.329260   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.329362   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.484805   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.484857   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.484888   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.484943   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.484963   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.485009   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.487351   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.615121   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.615187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.645371   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.645433   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.645524   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.645573   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.645638   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 20:26:28.729141   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 20:26:28.762530   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 20:26:28.762592   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 20:26:28.782117   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 20:26:28.782155   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 20:26:28.782195   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 20:26:28.782229   67607 cache_images.go:92] duration metric: took 984.791099ms to LoadCachedImages
	W0829 20:26:28.782293   67607 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
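
LoadCachedImages inspects each required image with podman, deletes any whose stored ID differs from the pinned hash (the crictl rmi burst above), and then tries per-image tarballs from the local cache; here those cache files are missing too, so the warning fires and kubeadm will pull the images from the registry instead. A sketch of the needs-transfer check (the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // needsTransfer reports whether the runtime's copy of an image is missing
    // or carries a different ID than the expected hash, so it must be reloaded.
    func needsTransfer(image, wantID string) bool {
    	out, err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return true // image not present in the runtime at all
    	}
    	return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
    	fmt.Println(needsTransfer("registry.k8s.io/pause:3.2",
    		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"))
    }
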
	I0829 20:26:28.782310   67607 kubeadm.go:934] updating node { 192.168.39.116 8443 v1.20.0 crio true true} ...
	I0829 20:26:28.782452   67607 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-032002 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
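
The drop-in above intentionally writes an empty ExecStart= first: systemd requires that to replace, rather than append to, a unit's command line, before re-pointing kubelet at the versioned binary with the CRI endpoint, node IP, and hostname override. A hypothetical text/template rendering of such a drop-in (field names are assumptions, not minikube's):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Illustrative template for a kubelet systemd drop-in.
    const unit = `[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime-endpoint={{.Endpoint}} --hostname-override={{.Node}} --node-ip={{.IP}}
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unit))
    	t.Execute(os.Stdout, map[string]string{
    		"Runtime": "crio", "Version": "v1.20.0",
    		"Endpoint": "unix:///var/run/crio/crio.sock",
    		"Node":     "old-k8s-version-032002", "IP": "192.168.39.116",
    	})
    }
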
	I0829 20:26:28.782518   67607 ssh_runner.go:195] Run: crio config
	I0829 20:26:25.635616   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:26.635463   66989 node_ready.go:49] node "embed-certs-388383" has status "Ready":"True"
	I0829 20:26:26.635488   66989 node_ready.go:38] duration metric: took 7.504259002s for node "embed-certs-388383" to be "Ready" ...
	I0829 20:26:26.635497   66989 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:26.641316   66989 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:26.649602   66989 pod_ready.go:93] pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:26.649634   66989 pod_ready.go:82] duration metric: took 8.284428ms for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:26.649656   66989 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:28.658281   66989 pod_ready.go:103] pod "etcd-embed-certs-388383" in "kube-system" namespace has status "Ready":"False"
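
Meanwhile the parallel embed-certs cluster is in its readiness phase: node_ready polls the Node's Ready condition (False for 7.5s above), then pod_ready walks each system-critical pod's Ready condition. A client-go sketch of the per-pod check (assumes the k8s.io/client-go modules; the kubeconfig path is an assumption):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is an assumption
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ok, err := podReady(cs, "kube-system", "etcd-embed-certs-388383")
    	fmt.Println(ok, err)
    }
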
	I0829 20:26:27.686642   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:27.687071   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:27.687097   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:27.687030   68757 retry.go:31] will retry after 1.410599528s: waiting for machine to come up
	I0829 20:26:29.099622   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:29.100175   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:29.100207   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:29.100083   68757 retry.go:31] will retry after 1.929618787s: waiting for machine to come up
	I0829 20:26:31.031864   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:31.032434   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:31.032467   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:31.032367   68757 retry.go:31] will retry after 1.926271655s: waiting for machine to come up
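The retry.go:31 lines above show libmachine polling for the VM's DHCP lease, sleeping a growing, jittered interval between attempts (1.41s, 1.93s, ...). A minimal sketch of that retry-with-backoff pattern, assuming a hypothetical lookup helper in place of libmachine's lease scan:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it yields an address or the deadline
// passes, sleeping a growing, jittered interval between attempts.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := time.Second
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay += delay / 2 // grow roughly 1.5x per round
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// lookup is hypothetical; libmachine actually reads the host's DHCP leases.
	lookup := func() (string, error) { return "", errors.New("no lease yet") }
	if _, err := waitForIP(lookup, 3*time.Second); err != nil {
		fmt.Println(err)
	}
}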
	I0829 20:26:28.832785   67607 cni.go:84] Creating CNI manager for ""
	I0829 20:26:28.832807   67607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:28.832824   67607 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:28.832843   67607 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-032002 NodeName:old-k8s-version-032002 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 20:26:28.832982   67607 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-032002"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:28.833059   67607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 20:26:28.843483   67607 binaries.go:44] Found k8s binaries, skipping transfer
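The kubeadm config dumped above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets copied to /var/tmp/minikube/kubeadm.yaml.new below. One quick way to sanity-check such a stream is to decode each document in turn; a sketch using gopkg.in/yaml.v3, offered as an illustration rather than minikube's own validation path:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc) // one YAML document per call
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		// Each document carries apiVersion/kind, e.g. kubeadm.k8s.io/v1beta2 ClusterConfiguration.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}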
	I0829 20:26:28.843566   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:28.853276   67607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 20:26:28.870579   67607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:28.888053   67607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0829 20:26:28.905988   67607 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:28.910048   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:28.924996   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:29.075015   67607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:29.095381   67607 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002 for IP: 192.168.39.116
	I0829 20:26:29.095411   67607 certs.go:194] generating shared ca certs ...
	I0829 20:26:29.095430   67607 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.095605   67607 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:29.095686   67607 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:29.095706   67607 certs.go:256] generating profile certs ...
	I0829 20:26:29.095847   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.key
	I0829 20:26:29.095928   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key.a1a2aebb
	I0829 20:26:29.095984   67607 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key
	I0829 20:26:29.096135   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:29.096184   67607 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:29.096198   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:29.096227   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:29.096259   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:29.096299   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:29.096378   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:29.097276   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:29.144259   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:29.171420   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:29.198554   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:29.230750   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 20:26:29.269978   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:26:29.299839   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:29.333742   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:26:29.358352   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:29.382648   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:29.406773   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:29.434106   67607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:29.451913   67607 ssh_runner.go:195] Run: openssl version
	I0829 20:26:29.457722   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:29.469147   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474048   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474094   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.480082   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:29.491083   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:29.501994   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508594   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508643   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.516331   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:29.531067   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:29.543998   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548781   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548845   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.555052   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:29.567902   67607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:29.572879   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:29.579506   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:29.585887   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:29.592262   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:29.598566   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:29.604672   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
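Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate expires within the next 24 hours. The same check expressed in Go with crypto/x509 (a sketch; minikube shells out to openssl as logged):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// will expire within d, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}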
	I0829 20:26:29.610830   67607 kubeadm.go:392] StartCluster: {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:29.612915   67607 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:29.613015   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.655224   67607 cri.go:89] found id: ""
	I0829 20:26:29.655314   67607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:29.666216   67607 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:29.666241   67607 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:29.666292   67607 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:29.676908   67607 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:29.678276   67607 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-032002" does not appear in /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:26:29.679313   67607 kubeconfig.go:62] /home/jenkins/minikube-integration/19530-11185/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-032002" cluster setting kubeconfig missing "old-k8s-version-032002" context setting]
	I0829 20:26:29.680756   67607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.764872   67607 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:29.776873   67607 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.116
	I0829 20:26:29.776914   67607 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:29.776926   67607 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:29.776987   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.819268   67607 cri.go:89] found id: ""
	I0829 20:26:29.819347   67607 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:29.840386   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:29.851624   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:29.851650   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:29.851710   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:26:29.861439   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:29.861504   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:29.871594   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:26:29.881126   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:29.881199   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:29.890984   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.900838   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:29.900913   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.910677   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:26:29.920008   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:29.920073   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
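The loop above greps each /etc/kubernetes/*.conf for the control-plane endpoint and removes any file that lacks it (or, as here, does not exist at all), so the subsequent `kubeadm init phase kubeconfig` can regenerate them cleanly. The equivalent logic in Go, as a sketch rather than minikube's source:

package main

import (
	"bytes"
	"fmt"
	"os"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		data, err := os.ReadFile(c)
		// Missing file or wrong endpoint: remove it so kubeadm rewrites it.
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			_ = os.Remove(c)
			fmt.Println("removed stale", c)
		}
	}
}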
	I0829 20:26:29.929631   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:29.939864   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.096029   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.816696   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.043310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.139291   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.248095   67607 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:31.248190   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:31.749101   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.248718   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.748783   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.248254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.748557   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
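The half-second cadence of the pgrep runs above is minikube waiting for the kube-apiserver process to appear after the `kubeadm init phase` steps complete. A sketch of that poll loop, run locally here rather than over SSH as the log does:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Matches the logged command: sudo pgrep -xnf kube-apiserver.*minikube.*
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}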
	I0829 20:26:30.180025   66989 pod_ready.go:93] pod "etcd-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:30.180056   66989 pod_ready.go:82] duration metric: took 3.530390258s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:30.180069   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.187272   66989 pod_ready.go:93] pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.187300   66989 pod_ready.go:82] duration metric: took 2.007222016s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.187313   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.192038   66989 pod_ready.go:93] pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.192062   66989 pod_ready.go:82] duration metric: took 4.740656ms for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.192075   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.196712   66989 pod_ready.go:93] pod "kube-proxy-fcxs4" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.196736   66989 pod_ready.go:82] duration metric: took 4.653538ms for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.196748   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.200491   66989 pod_ready.go:93] pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.200517   66989 pod_ready.go:82] duration metric: took 3.758002ms for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.200528   66989 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:34.207857   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
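The pod_ready.go lines above wait on each system-critical pod until its Ready condition reports True, capped at 6m0s per pod. Against the Kubernetes API that check looks roughly like the following client-go sketch (it assumes an already-built clientset and is an approximation, not minikube's exact code):

package podready

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady blocks until the named pod's Ready condition is True,
// tolerating transient lookup errors, as the pod_ready.go lines do.
func WaitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // retry on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}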
	I0829 20:26:32.960872   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:32.961256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:32.961284   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:32.961208   68757 retry.go:31] will retry after 2.304628323s: waiting for machine to come up
	I0829 20:26:35.267593   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:35.268009   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:35.268041   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:35.267970   68757 retry.go:31] will retry after 3.753063387s: waiting for machine to come up
	I0829 20:26:34.249231   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:34.748279   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.249171   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.748943   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.249181   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.748307   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.248484   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.748261   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.248332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.748423   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.705814   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:38.708205   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:40.175557   66841 start.go:364] duration metric: took 53.54411059s to acquireMachinesLock for "no-preload-397724"
	I0829 20:26:40.175617   66841 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:40.175626   66841 fix.go:54] fixHost starting: 
	I0829 20:26:40.176060   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:40.176098   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:40.193828   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45897
	I0829 20:26:40.194231   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:40.194840   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:26:40.194867   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:40.195175   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:40.195364   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:40.195528   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:26:40.197109   66841 fix.go:112] recreateIfNeeded on no-preload-397724: state=Stopped err=<nil>
	I0829 20:26:40.197128   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	W0829 20:26:40.197278   66841 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:40.199263   66841 out.go:177] * Restarting existing kvm2 VM for "no-preload-397724" ...
	I0829 20:26:39.023902   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.024374   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Found IP for machine: 192.168.72.140
	I0829 20:26:39.024399   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has current primary IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.024413   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Reserving static IP address...
	I0829 20:26:39.024832   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Reserved static IP address: 192.168.72.140
	I0829 20:26:39.024856   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for SSH to be available...
	I0829 20:26:39.024894   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-145096", mac: "52:54:00:36:fe:e0", ip: "192.168.72.140"} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.024925   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | skip adding static IP to network mk-default-k8s-diff-port-145096 - found existing host DHCP lease matching {name: "default-k8s-diff-port-145096", mac: "52:54:00:36:fe:e0", ip: "192.168.72.140"}
	I0829 20:26:39.024947   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Getting to WaitForSSH function...
	I0829 20:26:39.026796   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.027100   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.027129   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.027265   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Using SSH client type: external
	I0829 20:26:39.027288   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa (-rw-------)
	I0829 20:26:39.027318   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:39.027333   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | About to run SSH command:
	I0829 20:26:39.027346   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | exit 0
	I0829 20:26:39.146830   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | SSH cmd err, output: <nil>: 
	I0829 20:26:39.147242   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetConfigRaw
	I0829 20:26:39.147931   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:39.150652   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.151055   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.151084   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.151395   68084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/config.json ...
	I0829 20:26:39.151581   68084 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:39.151601   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:39.151814   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.153861   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.154189   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.154222   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.154351   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.154575   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.154746   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.154875   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.155010   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.155219   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.155235   68084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:39.258973   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:39.259006   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.259261   68084 buildroot.go:166] provisioning hostname "default-k8s-diff-port-145096"
	I0829 20:26:39.259292   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.259467   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.262018   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.262472   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.262501   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.262707   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.262886   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.263034   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.263185   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.263344   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.263530   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.263547   68084 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-145096 && echo "default-k8s-diff-port-145096" | sudo tee /etc/hostname
	I0829 20:26:39.379437   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-145096
	
	I0829 20:26:39.379479   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.382263   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.382682   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.382704   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.382913   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.383128   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.383280   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.383389   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.383520   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.383675   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.383692   68084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-145096' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-145096/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-145096' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:39.491756   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
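provisionDockerMachine drives each step above (hostname, the /etc/hosts patch) as a one-shot SSH command run as user docker with the machine's id_rsa. A bare-bones version with golang.org/x/crypto/ssh, using the address and key path from the log; host-key checking is disabled only because these are throwaway test VMs:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.72.140:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// One-shot command, as in the logged hostname provisioning step.
	out, err := sess.CombinedOutput(`sudo hostname default-k8s-diff-port-145096 && echo "default-k8s-diff-port-145096" | sudo tee /etc/hostname`)
	fmt.Println(string(out), err)
}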
	I0829 20:26:39.491790   68084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:39.491855   68084 buildroot.go:174] setting up certificates
	I0829 20:26:39.491869   68084 provision.go:84] configureAuth start
	I0829 20:26:39.491883   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.492150   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:39.494882   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.495241   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.495269   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.495452   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.497708   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.497980   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.498013   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.498097   68084 provision.go:143] copyHostCerts
	I0829 20:26:39.498157   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:39.498179   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:39.498249   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:39.498347   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:39.498356   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:39.498377   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:39.498430   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:39.498437   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:39.498455   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:39.498507   68084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-145096 san=[127.0.0.1 192.168.72.140 default-k8s-diff-port-145096 localhost minikube]
	I0829 20:26:39.584313   68084 provision.go:177] copyRemoteCerts
	I0829 20:26:39.584372   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:39.584398   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.587054   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.587377   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.587400   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.587630   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.587823   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.587952   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.588087   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:39.664394   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:39.688852   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0829 20:26:39.714653   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:39.737662   68084 provision.go:87] duration metric: took 245.781265ms to configureAuth
	I0829 20:26:39.737687   68084 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:39.737844   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:39.737911   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.740391   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.740659   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.740688   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.740911   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.741107   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.741256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.741434   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.741612   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.741777   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.741794   68084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:39.954811   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:39.954846   68084 machine.go:96] duration metric: took 803.251945ms to provisionDockerMachine
	I0829 20:26:39.954862   68084 start.go:293] postStartSetup for "default-k8s-diff-port-145096" (driver="kvm2")
	I0829 20:26:39.954877   68084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:39.954898   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:39.955237   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:39.955267   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.958071   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.958575   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.958605   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.958772   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.958969   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.959126   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.959287   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.037153   68084 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:40.041150   68084 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:40.041176   68084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:40.041235   68084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:40.041325   68084 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:40.041415   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:40.050654   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:40.073789   68084 start.go:296] duration metric: took 118.907407ms for postStartSetup
	I0829 20:26:40.073826   68084 fix.go:56] duration metric: took 18.558388385s for fixHost
	I0829 20:26:40.073846   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.076397   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.076749   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.076789   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.076999   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.077200   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.077374   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.077480   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.077598   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:40.077754   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:40.077765   68084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:40.175410   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963200.123461148
	
	I0829 20:26:40.175431   68084 fix.go:216] guest clock: 1724963200.123461148
	I0829 20:26:40.175437   68084 fix.go:229] Guest: 2024-08-29 20:26:40.123461148 +0000 UTC Remote: 2024-08-29 20:26:40.073830105 +0000 UTC m=+143.488576066 (delta=49.631043ms)
	I0829 20:26:40.175456   68084 fix.go:200] guest clock delta is within tolerance: 49.631043ms
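The clock check above runs `date +%s.%N` in the guest and compares the result with the host's wall clock at the same instant; a delta inside tolerance (here 49.631043ms) means no time resync is needed. The arithmetic, sketched with the two timestamps taken from the log (the one-second tolerance constant is a hypothetical stand-in, as the log does not state the threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Guest output of `date +%s.%N` and the host wall clock at the same
	// instant, both copied from the log above.
	guestOut := "1724963200.123461148"
	host := time.Date(2024, 8, 29, 20, 26, 40, 73830105, time.UTC)

	parts := strings.SplitN(guestOut, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec).UTC()

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // hypothetical threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}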
	I0829 20:26:40.175463   68084 start.go:83] releasing machines lock for "default-k8s-diff-port-145096", held for 18.660059953s
	I0829 20:26:40.175497   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.175781   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:40.179031   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.179457   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.179495   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.179695   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180444   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180528   68084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:40.180581   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.180706   68084 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:40.180729   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.183580   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.183819   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.183963   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.183989   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.184172   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.184174   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.184213   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.184345   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.184416   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.184511   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.184624   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.184626   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.184794   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.184896   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.259854   68084 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:40.290102   68084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:40.439112   68084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:40.449465   68084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:40.449546   68084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:40.471182   68084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:40.471209   68084 start.go:495] detecting cgroup driver to use...
	I0829 20:26:40.471276   68084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:40.492605   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:40.508500   68084 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:40.508561   68084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:40.527534   68084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:40.542013   68084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:40.663843   68084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:40.837228   68084 docker.go:233] disabling docker service ...
	I0829 20:26:40.837293   68084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:40.854285   68084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:40.870148   68084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:41.017156   68084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:41.150436   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:41.165239   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:41.184783   68084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:26:41.184847   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.197358   68084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:41.197417   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.211222   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.225297   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.237205   68084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:41.249875   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.261928   68084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.286145   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.299119   68084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:41.313001   68084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:41.313062   68084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:41.335390   68084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 20:26:41.348803   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:41.464387   68084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:41.564675   68084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:41.564746   68084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:41.569620   68084 start.go:563] Will wait 60s for crictl version
	I0829 20:26:41.569680   68084 ssh_runner.go:195] Run: which crictl
	I0829 20:26:41.573519   68084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:41.615105   68084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:41.615190   68084 ssh_runner.go:195] Run: crio --version
	I0829 20:26:41.644597   68084 ssh_runner.go:195] Run: crio --version
	I0829 20:26:41.678211   68084 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
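For reference, the cri-o reconfiguration logged above (20:26:41.165-41.464) is a plain sequence of sed edits against /etc/crio/crio.conf.d/02-crio.conf plus a generated /etc/crictl.yaml. A condensed sketch of the same steps, using only commands that appear in the log:

    # Point crictl at the cri-o socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # Align cri-o with kubeadm: pause image, cgroupfs driver, conmon in the pod cgroup.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio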
	I0829 20:26:39.248306   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:39.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.248975   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.748948   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.249144   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.749013   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.248363   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.748624   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.248833   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.748535   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
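The pgrep lines above (process 67607) are a roughly 500 ms polling loop waiting for a kube-apiserver process to appear: -f matches against the full command line, -x requires the pattern to match it exactly as an anchored regex, and -n picks the newest match. A standalone equivalent of that wait (illustrative, not minikube's actual loop):

    # Wait until a kube-apiserver launched by minikube is running.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done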
	I0829 20:26:40.200748   66841 main.go:141] libmachine: (no-preload-397724) Calling .Start
	I0829 20:26:40.200955   66841 main.go:141] libmachine: (no-preload-397724) Ensuring networks are active...
	I0829 20:26:40.201793   66841 main.go:141] libmachine: (no-preload-397724) Ensuring network default is active
	I0829 20:26:40.202128   66841 main.go:141] libmachine: (no-preload-397724) Ensuring network mk-no-preload-397724 is active
	I0829 20:26:40.202729   66841 main.go:141] libmachine: (no-preload-397724) Getting domain xml...
	I0829 20:26:40.203538   66841 main.go:141] libmachine: (no-preload-397724) Creating domain...
	I0829 20:26:41.516739   66841 main.go:141] libmachine: (no-preload-397724) Waiting to get IP...
	I0829 20:26:41.517840   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:41.518273   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:41.518353   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:41.518262   68926 retry.go:31] will retry after 295.070588ms: waiting for machine to come up
	I0829 20:26:41.814782   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:41.815346   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:41.815369   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:41.815291   68926 retry.go:31] will retry after 239.48527ms: waiting for machine to come up
	I0829 20:26:42.056957   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:42.057459   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:42.057509   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:42.057436   68926 retry.go:31] will retry after 452.012872ms: waiting for machine to come up
	I0829 20:26:42.511068   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:42.511551   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:42.511590   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:42.511520   68926 retry.go:31] will retry after 552.227159ms: waiting for machine to come up
	I0829 20:26:43.066096   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:43.066642   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:43.066673   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:43.066605   68926 retry.go:31] will retry after 666.699647ms: waiting for machine to come up
	I0829 20:26:43.734695   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:43.735402   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:43.735430   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:43.735309   68926 retry.go:31] will retry after 770.756485ms: waiting for machine to come up
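The retry lines above (process 66841) show minikube waiting for the restarted no-preload-397724 guest to obtain a DHCP lease, backing off a little longer on each attempt. Outside minikube, the same wait can be approximated with libvirt's virsh, assuming the network and MAC address shown in the log:

    # Poll the libvirt network's DHCP leases until the guest's MAC appears.
    until sudo virsh net-dhcp-leases mk-no-preload-397724 | grep -q '52:54:00:e9:bf:ac'; do
      sleep 1
    done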
	I0829 20:26:40.709553   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:42.712799   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:41.679441   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:41.682807   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:41.683205   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:41.683236   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:41.683489   68084 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:41.688766   68084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:41.705764   68084 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:41.705918   68084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:26:41.705977   68084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:41.752884   68084 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:26:41.752955   68084 ssh_runner.go:195] Run: which lz4
	I0829 20:26:41.757600   68084 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:41.762158   68084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:41.762188   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 20:26:43.201094   68084 crio.go:462] duration metric: took 1.443534343s to copy over tarball
	I0829 20:26:43.201176   68084 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:45.400911   68084 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199703125s)
	I0829 20:26:45.400942   68084 crio.go:469] duration metric: took 2.199820098s to extract the tarball
	I0829 20:26:45.400948   68084 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 20:26:45.439120   68084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:45.482658   68084 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:26:45.482679   68084 cache_images.go:84] Images are preloaded, skipping loading
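The preload path above checks for an existing /preloaded.tar.lz4 on the guest, copies the cached tarball over SSH when that stat fails, and unpacks it into /var before deleting it. Guest-side, the sequence reduces to the following (tar flags exactly as logged; requires lz4 on the guest):

    # Unpack the preloaded container images into /var, preserving file capabilities.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    # The runtime should now report the preloaded images.
    sudo crictl images --output json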
	I0829 20:26:45.482687   68084 kubeadm.go:934] updating node { 192.168.72.140 8444 v1.31.0 crio true true} ...
	I0829 20:26:45.482801   68084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-145096 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:45.482873   68084 ssh_runner.go:195] Run: crio config
	I0829 20:26:45.532108   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:26:45.532132   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:45.532146   68084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:45.532169   68084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.140 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-145096 NodeName:default-k8s-diff-port-145096 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:26:45.532310   68084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.140
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-145096"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
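The kubeadm.yaml dumped above stacks four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. It can be sanity-checked offline before the init phases run; a sketch assuming the kubeadm binary staged by minikube (kubeadm config validate exists in recent releases, including the v1.31.0 used here):

    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml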
	I0829 20:26:45.532367   68084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:26:45.542670   68084 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:45.542744   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:45.552622   68084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0829 20:26:45.569765   68084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:45.590972   68084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0829 20:26:45.611421   68084 ssh_runner.go:195] Run: grep 192.168.72.140	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:45.615585   68084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:45.627911   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:45.757504   68084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:45.776103   68084 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096 for IP: 192.168.72.140
	I0829 20:26:45.776128   68084 certs.go:194] generating shared ca certs ...
	I0829 20:26:45.776159   68084 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:45.776337   68084 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:45.776388   68084 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:45.776400   68084 certs.go:256] generating profile certs ...
	I0829 20:26:45.776511   68084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/client.key
	I0829 20:26:45.776600   68084 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.key.5a49b6b2
	I0829 20:26:45.776650   68084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.key
	I0829 20:26:45.776788   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:45.776827   68084 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:45.776840   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:45.776869   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:45.776940   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:45.776977   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:45.777035   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:45.777916   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:45.823419   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:45.868291   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:45.905178   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:45.934956   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 20:26:45.967570   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 20:26:45.994332   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:46.019268   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 20:26:46.044075   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:46.067906   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:46.092513   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:46.117686   68084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:46.137048   68084 ssh_runner.go:195] Run: openssl version
	I0829 20:26:46.143203   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:46.156407   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.161397   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.161461   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.167587   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:46.179034   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:46.190204   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.194953   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.195010   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.203121   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:46.218606   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:46.233586   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.240100   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.240155   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.247473   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:46.259417   68084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:46.264875   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:46.270914   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:46.277211   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:46.283138   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:46.289137   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:46.295044   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
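Each openssl run above asks whether a certificate will still be valid 86400 seconds (24 hours) from now; -checkend exits non-zero if it would expire, which is what would trigger regeneration. For example:

    # Exit status tells whether the cert survives the next 24h.
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "cert valid for at least 24h"
    else
      echo "cert expires within 24h"
    fi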
	I0829 20:26:46.301027   68084 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:46.301120   68084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:46.301177   68084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:46.342913   68084 cri.go:89] found id: ""
	I0829 20:26:46.342988   68084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:46.354198   68084 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:46.354221   68084 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:46.354269   68084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:46.364173   68084 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:46.365182   68084 kubeconfig.go:125] found "default-k8s-diff-port-145096" server: "https://192.168.72.140:8444"
	I0829 20:26:46.367560   68084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:46.377550   68084 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.140
	I0829 20:26:46.377584   68084 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:46.377596   68084 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:46.377647   68084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:46.419141   68084 cri.go:89] found id: ""
	I0829 20:26:46.419215   68084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:46.438037   68084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:46.449021   68084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:46.449041   68084 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:46.449093   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 20:26:46.459396   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:46.459445   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:46.469964   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 20:26:46.479604   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:46.479655   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:46.492672   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 20:26:46.504656   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:46.504714   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:46.520206   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 20:26:46.532067   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:46.532137   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:26:46.541931   68084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:46.551973   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:44.248615   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:44.748528   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.748453   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.248927   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.748628   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.248556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.748332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.248373   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.749111   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:44.507808   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:44.508340   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:44.508375   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:44.508288   68926 retry.go:31] will retry after 754.614285ms: waiting for machine to come up
	I0829 20:26:45.264587   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:45.265039   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:45.265065   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:45.265003   68926 retry.go:31] will retry after 1.3758308s: waiting for machine to come up
	I0829 20:26:46.642139   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:46.642666   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:46.642690   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:46.642612   68926 retry.go:31] will retry after 1.255043608s: waiting for machine to come up
	I0829 20:26:47.899849   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:47.900330   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:47.900360   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:47.900291   68926 retry.go:31] will retry after 1.517293529s: waiting for machine to come up
	I0829 20:26:45.208067   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:48.177040   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:46.668397   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.497182   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.725573   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.785427   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.850878   68084 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:47.850972   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.351404   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.852023   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.351402   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.367249   68084 api_server.go:72] duration metric: took 1.516370766s to wait for apiserver process to appear ...
	I0829 20:26:49.367283   68084 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:26:49.367312   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.595653   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:51.595683   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:51.595698   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.609883   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:51.609989   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:51.867454   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.872297   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:51.872328   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:52.367462   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:52.375300   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:52.375333   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:52.867827   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:52.872814   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 200:
	ok
	I0829 20:26:52.881061   68084 api_server.go:141] control plane version: v1.31.0
	I0829 20:26:52.881092   68084 api_server.go:131] duration metric: took 3.513801329s to wait for apiserver health ...
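The 403 -> 500 -> 200 progression above is the expected restart sequence: anonymous probes are rejected (403), /healthz then returns 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200. The probe can be reproduced by hand with the profile's client certificate rather than anonymously (paths assume this run's .minikube directory):

    MK=/home/jenkins/minikube-integration/19530-11185/.minikube
    curl --cacert "$MK/ca.crt" \
         --cert "$MK/profiles/default-k8s-diff-port-145096/client.crt" \
         --key  "$MK/profiles/default-k8s-diff-port-145096/client.key" \
         https://192.168.72.140:8444/healthz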
	I0829 20:26:52.881102   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:26:52.881111   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:52.882993   68084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:26:49.248291   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.748360   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.248427   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.749087   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.248381   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.748488   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.249250   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.748715   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.748915   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.419781   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:49.420286   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:49.420314   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:49.420244   68926 retry.go:31] will retry after 2.638145598s: waiting for machine to come up
	I0829 20:26:52.059935   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:52.060367   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:52.060411   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:52.060341   68926 retry.go:31] will retry after 2.696474949s: waiting for machine to come up
	I0829 20:26:50.207945   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:52.709407   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:52.884310   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:26:52.901134   68084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 20:26:52.931390   68084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:26:52.952109   68084 system_pods.go:59] 8 kube-system pods found
	I0829 20:26:52.952154   68084 system_pods.go:61] "coredns-6f6b679f8f-5mkxp" [1d3c3a01-1fa6-4d1d-8750-deef4475ba96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:26:52.952166   68084 system_pods.go:61] "etcd-default-k8s-diff-port-145096" [03096d69-48af-4372-9fa0-5a45dcb9603c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:26:52.952177   68084 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-145096" [4be8793a-7934-4c89-a840-49e769673f5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:26:52.952188   68084 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-145096" [a3bec7f8-8163-4afa-af53-282ad755b788] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:26:52.952202   68084 system_pods.go:61] "kube-proxy-b4ffx" [d97e74d5-21d4-4c96-9d94-77767fc4e609] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:26:52.952210   68084 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-145096" [c416b52b-ebf4-4714-bed6-3d25bfaa373c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:26:52.952217   68084 system_pods.go:61] "metrics-server-6867b74b74-5kk6q" [e74224b1-8242-4f7f-b8d6-7d9d4839be53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:26:52.952224   68084 system_pods.go:61] "storage-provisioner" [4e97da7c-af4b-40b3-83fb-82b6c2a2adef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:26:52.952236   68084 system_pods.go:74] duration metric: took 20.81979ms to wait for pod list to return data ...
	I0829 20:26:52.952245   68084 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:26:52.961169   68084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:26:52.961202   68084 node_conditions.go:123] node cpu capacity is 2
	I0829 20:26:52.961214   68084 node_conditions.go:105] duration metric: took 8.963546ms to run NodePressure ...
	I0829 20:26:52.961234   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:53.425201   68084 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:26:53.429605   68084 kubeadm.go:739] kubelet initialised
	I0829 20:26:53.429625   68084 kubeadm.go:740] duration metric: took 4.401784ms waiting for restarted kubelet to initialise ...
	I0829 20:26:53.429632   68084 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:53.434501   68084 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:55.442290   68084 pod_ready.go:103] pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace has status "Ready":"False"
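(The pod_ready.go lines above are a poll on each system-critical pod's Ready condition, logging "Ready":"False" every couple of seconds until it flips. A rough client-go equivalent is sketched below — not minikube's actual helper; the kubeconfig path is hypothetical and the namespace and pod name are just the ones from this run.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls a named pod until Ready or the timeout elapses.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second) // the log shows ~2s between checks
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-6f6b679f8f-5mkxp", 4*time.Minute))
}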
	I0829 20:26:54.248998   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:54.748438   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.249066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.749293   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.248457   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.748509   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.248949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.748228   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.248717   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.748412   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:54.760175   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:54.760689   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:54.760736   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:54.760667   68926 retry.go:31] will retry after 3.651969786s: waiting for machine to come up
	I0829 20:26:58.415601   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.416019   66841 main.go:141] libmachine: (no-preload-397724) Found IP for machine: 192.168.50.214
	I0829 20:26:58.416045   66841 main.go:141] libmachine: (no-preload-397724) Reserving static IP address...
	I0829 20:26:58.416063   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has current primary IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.416507   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "no-preload-397724", mac: "52:54:00:e9:bf:ac", ip: "192.168.50.214"} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.416533   66841 main.go:141] libmachine: (no-preload-397724) DBG | skip adding static IP to network mk-no-preload-397724 - found existing host DHCP lease matching {name: "no-preload-397724", mac: "52:54:00:e9:bf:ac", ip: "192.168.50.214"}
	I0829 20:26:58.416543   66841 main.go:141] libmachine: (no-preload-397724) Reserved static IP address: 192.168.50.214
	I0829 20:26:58.416552   66841 main.go:141] libmachine: (no-preload-397724) Waiting for SSH to be available...
	I0829 20:26:58.416562   66841 main.go:141] libmachine: (no-preload-397724) DBG | Getting to WaitForSSH function...
	I0829 20:26:58.418849   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.419170   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.419199   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.419312   66841 main.go:141] libmachine: (no-preload-397724) DBG | Using SSH client type: external
	I0829 20:26:58.419351   66841 main.go:141] libmachine: (no-preload-397724) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa (-rw-------)
	I0829 20:26:58.419397   66841 main.go:141] libmachine: (no-preload-397724) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:58.419414   66841 main.go:141] libmachine: (no-preload-397724) DBG | About to run SSH command:
	I0829 20:26:58.419444   66841 main.go:141] libmachine: (no-preload-397724) DBG | exit 0
	I0829 20:26:58.542594   66841 main.go:141] libmachine: (no-preload-397724) DBG | SSH cmd err, output: <nil>: 
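(The WaitForSSH step above shells out to an external /usr/bin/ssh with the exact option list logged — host-key checking disabled, key-only auth — rather than using an in-process client, and probes reachability with a bare "exit 0". A sketch of that external-ssh call, reusing the logged options; the IP and key path are this run's values and the helper name is made up for the illustration.)

package main

import (
	"fmt"
	"os/exec"
)

// runExternalSSH mirrors the external-ssh path in the log: shell out to
// /usr/bin/ssh with throwaway host keys and identity-file-only auth.
func runExternalSSH(ip, keyPath, command string) (string, error) {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		command,
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("ssh %q: %v: %s", command, err, out)
	}
	return string(out), nil
}

func main() {
	// "exit 0" is the reachability probe the driver runs first, as logged above.
	out, err := runExternalSSH("192.168.50.214",
		"/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa",
		"exit 0")
	fmt.Println(out, err)
}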
	I0829 20:26:58.542925   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetConfigRaw
	I0829 20:26:58.543582   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:58.546057   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.546384   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.546422   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.546691   66841 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/config.json ...
	I0829 20:26:58.546871   66841 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:58.546890   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:58.547113   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.549493   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.549816   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.549854   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.549972   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.550140   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.550260   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.550388   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.550581   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.550805   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.550822   66841 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:58.658784   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:58.658827   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.659063   66841 buildroot.go:166] provisioning hostname "no-preload-397724"
	I0829 20:26:58.659083   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.659220   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.661932   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.662294   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.662320   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.662485   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.662695   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.662880   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.663011   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.663168   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.663343   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.663356   66841 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-397724 && echo "no-preload-397724" | sudo tee /etc/hostname
	I0829 20:26:58.790591   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-397724
	
	I0829 20:26:58.790618   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.793294   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.793612   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.793639   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.793849   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.794035   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.794192   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.794289   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.794430   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.794656   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.794678   66841 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-397724' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-397724/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-397724' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:58.915925   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:58.915958   66841 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:58.915981   66841 buildroot.go:174] setting up certificates
	I0829 20:26:58.915991   66841 provision.go:84] configureAuth start
	I0829 20:26:58.916000   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.916279   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:58.919034   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.919385   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.919415   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.919523   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.921483   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.921805   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.921831   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.922015   66841 provision.go:143] copyHostCerts
	I0829 20:26:58.922062   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:58.922079   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:58.922135   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:58.922242   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:58.922256   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:58.922288   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:58.922365   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:58.922375   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:58.922400   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:58.922491   66841 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.no-preload-397724 san=[127.0.0.1 192.168.50.214 localhost minikube no-preload-397724]
	I0829 20:26:55.206462   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:57.207175   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.207454   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.264390   66841 provision.go:177] copyRemoteCerts
	I0829 20:26:59.264446   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:59.264467   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.267259   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.267603   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.267626   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.267794   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.268014   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.268190   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.268367   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.353746   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:59.378289   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0829 20:26:59.402330   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:59.425412   66841 provision.go:87] duration metric: took 509.408381ms to configureAuth
	I0829 20:26:59.425442   66841 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:59.425616   66841 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:59.425679   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.428148   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.428503   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.428545   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.428698   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.428906   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.429077   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.429227   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.429365   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:59.429511   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:59.429524   66841 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:59.666382   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:59.666408   66841 machine.go:96] duration metric: took 1.11952301s to provisionDockerMachine
	I0829 20:26:59.666422   66841 start.go:293] postStartSetup for "no-preload-397724" (driver="kvm2")
	I0829 20:26:59.666436   66841 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:59.666458   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.666833   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:59.666881   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.669407   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.669725   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.669751   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.669888   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.670073   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.670214   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.670316   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.753440   66841 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:59.758408   66841 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:59.758431   66841 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:59.758509   66841 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:59.758632   66841 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:59.758753   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:59.768355   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:59.792742   66841 start.go:296] duration metric: took 126.308201ms for postStartSetup
	I0829 20:26:59.792782   66841 fix.go:56] duration metric: took 19.617155195s for fixHost
	I0829 20:26:59.792806   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.795380   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.795744   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.795781   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.795917   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.796124   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.796237   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.796376   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.796488   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:59.796668   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:59.796680   66841 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:59.903539   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963219.868600963
	
	I0829 20:26:59.903564   66841 fix.go:216] guest clock: 1724963219.868600963
	I0829 20:26:59.903574   66841 fix.go:229] Guest: 2024-08-29 20:26:59.868600963 +0000 UTC Remote: 2024-08-29 20:26:59.792787483 +0000 UTC m=+355.719318860 (delta=75.81348ms)
	I0829 20:26:59.903623   66841 fix.go:200] guest clock delta is within tolerance: 75.81348ms
	I0829 20:26:59.903632   66841 start.go:83] releasing machines lock for "no-preload-397724", held for 19.728042303s
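(The fix.go lines above compare the guest clock — `date +%s.%N` over SSH — against a host-side reference and accept the 75.81348ms delta as within tolerance. A small sketch of that comparison, assuming %N always prints nine digits and using an assumed 1s tolerance; the real threshold lives in minikube's fix.go.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the
// signed offset from a host-side reference timestamp.
func clockDelta(guestOut string, hostRef time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Assumes GNU date's %N: exactly nine digits of nanoseconds.
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	return guest.Sub(hostRef), nil
}

func main() {
	// Values taken from the log lines above.
	ref := time.Unix(1724963219, 792787483)
	d, _ := clockDelta("1724963219.868600963\n", ref)
	within := math.Abs(d.Seconds()) < 1.0 // assumed 1s tolerance
	fmt.Printf("delta=%v within tolerance=%v\n", d, within) // delta ≈ 75.81348ms
}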
	I0829 20:26:59.903676   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.903967   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:59.906798   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.907183   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.907212   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.907378   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.907804   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.907970   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.908038   66841 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:59.908072   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.908324   66841 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:59.908346   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.910843   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911025   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911187   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.911215   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911325   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.911415   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.911437   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911485   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.911640   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.911649   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.911847   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.911848   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.911978   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.912119   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:27:00.023116   66841 ssh_runner.go:195] Run: systemctl --version
	I0829 20:27:00.029346   66841 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:27:00.169122   66841 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:27:00.176823   66841 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:27:00.176913   66841 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:27:00.194795   66841 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:27:00.194836   66841 start.go:495] detecting cgroup driver to use...
	I0829 20:27:00.194906   66841 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:27:00.212145   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:27:00.226584   66841 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:27:00.226656   66841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:27:00.240525   66841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:27:00.256847   66841 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:27:00.371938   66841 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:27:00.516891   66841 docker.go:233] disabling docker service ...
	I0829 20:27:00.516964   66841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:27:00.531127   66841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:27:00.543483   66841 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:27:00.672033   66841 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:27:00.794828   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:27:00.809204   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:27:00.828484   66841 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:27:00.828547   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.839273   66841 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:27:00.839344   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.850336   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.860980   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.871661   66841 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:27:00.884343   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.895190   66841 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.912700   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.923383   66841 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:27:00.934168   66841 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:27:00.934231   66841 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:27:00.948181   66841 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 20:27:00.959121   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:27:01.072055   66841 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:27:01.163024   66841 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:27:01.163104   66841 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:27:01.167949   66841 start.go:563] Will wait 60s for crictl version
	I0829 20:27:01.168011   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.171707   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:27:01.212950   66841 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:27:01.213031   66841 ssh_runner.go:195] Run: crio --version
	I0829 20:27:01.242181   66841 ssh_runner.go:195] Run: crio --version
	I0829 20:27:01.276389   66841 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
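(The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed — pause image, cgroupfs as cgroup manager, conmon_cgroup, unprivileged ports — and then hits the common fresh-VM case: the net.bridge.bridge-nf-call-iptables sysctl is missing until br_netfilter is loaded, so the sysctl probe fails with status 255 and minikube falls back to modprobe before restarting crio. A sketch of that check-then-modprobe fallback, run locally here for brevity where minikube goes through its ssh_runner.)

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback in the log: if the
// bridge-nf-call-iptables sysctl is absent, the br_netfilter module is
// not loaded yet, so load it and enable IP forwarding for pod traffic.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // sysctl present: module already loaded
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %v", err)
	}
	// Pod-to-pod traffic through the bridge needs forwarding enabled.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		return fmt.Errorf("enable ip_forward: %v", err)
	}
	return nil
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}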
	I0829 20:26:57.441729   68084 pod_ready.go:93] pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:57.441753   68084 pod_ready.go:82] duration metric: took 4.007206558s for pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:57.441762   68084 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:59.448210   68084 pod_ready.go:103] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.248692   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:59.748815   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.748264   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.249241   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.748894   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.249045   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.748765   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.248902   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.748333   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.277829   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:27:01.280762   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:27:01.281144   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:27:01.281171   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:27:01.281367   66841 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 20:27:01.285714   66841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:27:01.297903   66841 kubeadm.go:883] updating cluster {Name:no-preload-397724 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:27:01.298010   66841 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:27:01.298041   66841 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:27:01.331474   66841 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:27:01.331498   66841 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:27:01.331566   66841 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.331572   66841 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.331609   66841 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.331632   66841 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.331643   66841 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.331615   66841 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0829 20:27:01.331737   66841 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.331758   66841 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.333182   66841 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.333233   66841 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.333206   66841 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.333195   66841 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.333191   66841 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.333278   66841 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.333191   66841 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.333333   66841 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0829 20:27:01.507028   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.514096   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.526653   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.530292   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.531828   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.534432   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.550465   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0829 20:27:01.613161   66841 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0829 20:27:01.613209   66841 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.613287   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.631193   66841 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0829 20:27:01.631236   66841 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.631285   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.687868   66841 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0829 20:27:01.687911   66841 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.687967   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.700369   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.713036   66841 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0829 20:27:01.713102   66841 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.713159   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.722934   66841 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0829 20:27:01.722991   66841 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.723042   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.722941   66841 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0829 20:27:01.723130   66841 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.723159   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.785242   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.785246   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.785342   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.785391   66841 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0829 20:27:01.785438   66841 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.785450   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.785474   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.785479   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.785534   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.925322   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.925371   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.925374   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.925474   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.925518   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.925569   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.925593   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.072628   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:02.072690   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:02.072744   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:02.072822   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:02.072867   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.176999   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0829 20:27:02.177031   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:02.177503   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:02.177507   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.177572   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0829 20:27:02.177581   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0829 20:27:02.177678   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:02.177682   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:02.185515   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0829 20:27:02.185585   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.185624   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:02.259015   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0829 20:27:02.259076   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0829 20:27:02.259087   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0829 20:27:02.259106   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.259113   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0829 20:27:02.259138   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0829 20:27:02.259147   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.259155   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:02.259152   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 20:27:02.259139   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0829 20:27:02.259157   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:02.259240   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:01.208076   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:03.208339   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:01.954153   68084 pod_ready.go:103] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:03.454991   68084 pod_ready.go:93] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:03.455023   68084 pod_ready.go:82] duration metric: took 6.013253793s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:03.455036   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:05.461938   68084 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:04.249082   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:04.748738   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.248398   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.749056   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.248693   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.748904   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.249145   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.749131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.248774   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.748444   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:04.630344   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.371149915s)
	I0829 20:27:04.630373   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.371188324s)
	I0829 20:27:04.630410   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.371191825s)
	I0829 20:27:04.630432   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0829 20:27:04.630413   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0829 20:27:04.630379   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0829 20:27:04.630465   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.371187188s)
	I0829 20:27:04.630478   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:04.630481   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0829 20:27:04.630561   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:06.684986   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.054398317s)
	I0829 20:27:06.685019   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0829 20:27:06.685047   66841 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:06.685098   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:05.707657   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:07.708034   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:06.965873   68084 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.965904   68084 pod_ready.go:82] duration metric: took 3.51085868s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.965918   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.976464   68084 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.976489   68084 pod_ready.go:82] duration metric: took 10.562771ms for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.976502   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b4ffx" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.982178   68084 pod_ready.go:93] pod "kube-proxy-b4ffx" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.982197   68084 pod_ready.go:82] duration metric: took 5.687889ms for pod "kube-proxy-b4ffx" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.982205   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.987316   68084 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.987333   68084 pod_ready.go:82] duration metric: took 5.122275ms for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.987342   68084 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:08.994794   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:11.493940   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
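The pod_ready lines above and below are minikube polling each system pod's Ready condition through the Kubernetes API until it flips to True or the 4m0s budget runs out. As a rough illustration only (not minikube's actual code; the pod name is taken from the log, the kubeconfig path is an assumption), the same wait can be sketched with client-go:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod's Ready condition, matching the pod_ready
// lines in the log: Ready=True ends the wait, anything else keeps
// polling until the deadline.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// metrics-server commonly reports Ready=False while its APIService
	// is still registering, which is what the log above records.
	fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-6867b74b74-5kk6q", 4*time.Minute))
}
```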
	I0829 20:27:09.248746   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:09.748722   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.249074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.748647   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.248236   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.749057   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.249227   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.748688   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.749298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.365120   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.679993065s)
	I0829 20:27:10.365150   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0829 20:27:10.365182   66841 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:10.365256   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:12.122371   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.757087653s)
	I0829 20:27:12.122409   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0829 20:27:12.122434   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:12.122564   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:13.575108   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.45251018s)
	I0829 20:27:13.575137   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0829 20:27:13.575165   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:13.575210   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:09.708364   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:11.708491   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:14.207383   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:13.494124   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:15.993564   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:14.249254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:14.748957   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.249229   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.749137   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.248967   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.748254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.248929   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.748339   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.248666   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.748712   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.742286   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.16705417s)
	I0829 20:27:15.742320   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0829 20:27:15.742348   66841 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:15.742398   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:16.391977   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0829 20:27:16.392017   66841 cache_images.go:123] Successfully loaded all cached images
	I0829 20:27:16.392022   66841 cache_images.go:92] duration metric: took 15.060512795s to LoadCachedImages
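The LoadCachedImages span that just completed follows one pattern per image: stat the tarball on the node, skip the transfer when it already exists, then load it into the CRI-O image store with `podman load -i`. A minimal local sketch of that skip-if-exists-then-load flow (the paths and the plain file copy standing in for minikube's SSH transfer are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage mirrors the flow in the log: check whether the image
// tarball is already present, copy it only if missing, then load it
// into the runtime's image store via podman.
func loadCachedImage(cachePath, nodePath string) error {
	if _, err := os.Stat(nodePath); err == nil {
		fmt.Printf("copy: skipping %s (exists)\n", nodePath)
	} else {
		// minikube performs this copy over SSH; a local file copy
		// stands in for it here.
		data, err := os.ReadFile(cachePath)
		if err != nil {
			return err
		}
		if err := os.WriteFile(nodePath, data, 0o644); err != nil {
			return err
		}
	}
	// podman load reads an image tarball into the shared image store.
	out, err := exec.Command("sudo", "podman", "load", "-i", nodePath).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage(
		"/home/user/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0",
		"/var/lib/minikube/images/kube-proxy_v1.31.0",
	); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```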
	I0829 20:27:16.392034   66841 kubeadm.go:934] updating node { 192.168.50.214 8443 v1.31.0 crio true true} ...
	I0829 20:27:16.392139   66841 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-397724 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:27:16.392203   66841 ssh_runner.go:195] Run: crio config
	I0829 20:27:16.445382   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:27:16.445406   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:27:16.445420   66841 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:27:16.445448   66841 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.214 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-397724 NodeName:no-preload-397724 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:27:16.445612   66841 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-397724"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:27:16.445671   66841 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:27:16.456505   66841 binaries.go:44] Found k8s binaries, skipping transfer
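The multi-document manifest dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered from the option struct logged at kubeadm.go:181 and shipped to the node as kubeadm.yaml.new. A rough sketch of such a render step using Go's text/template; the struct fields and the trimmed-down template are invented for illustration, not minikube's real generator:

```go
package main

import (
	"os"
	"text/template"
)

// Options holds the handful of values this sketch substitutes into the
// manifest; the real generator carries many more fields.
type Options struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	opts := Options{
		AdvertiseAddress:  "192.168.50.214",
		BindPort:          8443,
		NodeName:          "no-preload-397724",
		KubernetesVersion: "v1.31.0",
		PodSubnet:         "10.244.0.0/16",
	}
	// Writing to stdout; minikube instead scps the rendered bytes to
	// /var/tmp/minikube/kubeadm.yaml.new on the node.
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
```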
	I0829 20:27:16.456560   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:27:16.467361   66841 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0829 20:27:16.484700   66841 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:27:16.503026   66841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0829 20:27:16.519867   66841 ssh_runner.go:195] Run: grep 192.168.50.214	control-plane.minikube.internal$ /etc/hosts
	I0829 20:27:16.523648   66841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:27:16.535642   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:27:16.671027   66841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:27:16.688692   66841 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724 for IP: 192.168.50.214
	I0829 20:27:16.688712   66841 certs.go:194] generating shared ca certs ...
	I0829 20:27:16.688727   66841 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:27:16.688883   66841 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:27:16.688944   66841 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:27:16.688957   66841 certs.go:256] generating profile certs ...
	I0829 20:27:16.689053   66841 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.key
	I0829 20:27:16.689132   66841 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.key.1f535ae9
	I0829 20:27:16.689182   66841 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.key
	I0829 20:27:16.689360   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:27:16.689400   66841 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:27:16.689415   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:27:16.689450   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:27:16.689504   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:27:16.689540   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:27:16.689596   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:27:16.690277   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:27:16.747582   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:27:16.782064   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:27:16.816382   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:27:16.851548   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 20:27:16.882919   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:27:16.907439   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:27:16.932392   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:27:16.957451   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:27:16.982482   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:27:17.006032   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:27:17.030052   66841 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:27:17.047792   66841 ssh_runner.go:195] Run: openssl version
	I0829 20:27:17.053922   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:27:17.065219   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.069592   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.069647   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.075853   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:27:17.086727   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:27:17.097935   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.102198   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.102252   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.108031   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:27:17.119868   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:27:17.131513   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.136434   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.136497   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.142219   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:27:17.153448   66841 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:27:17.158375   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:27:17.165156   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:27:17.170927   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:27:17.176669   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:27:17.182293   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:27:17.187936   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
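Each of the `openssl x509 -checkend 86400` runs above asks one question: does this certificate expire within the next 24 hours? The same test in Go's standard library, as a self-contained sketch (the cert paths are copied from the log; run it on a node that actually has them):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path expires within d,
// the same test `openssl x509 -checkend 86400` performs in the log.
func checkend(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiring within d means NotAfter falls before now+d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		expiring, err := checkend(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, expiring)
	}
}
```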
	I0829 20:27:17.193572   66841 kubeadm.go:392] StartCluster: {Name:no-preload-397724 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:27:17.193682   66841 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:27:17.193754   66841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:27:17.238327   66841 cri.go:89] found id: ""
	I0829 20:27:17.238392   66841 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:27:17.248923   66841 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:27:17.248943   66841 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:27:17.248984   66841 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:27:17.263143   66841 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:27:17.264260   66841 kubeconfig.go:125] found "no-preload-397724" server: "https://192.168.50.214:8443"
	I0829 20:27:17.266448   66841 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:27:17.276347   66841 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.214
	I0829 20:27:17.276378   66841 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:27:17.276389   66841 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:27:17.276440   66841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:27:17.311409   66841 cri.go:89] found id: ""
	I0829 20:27:17.311476   66841 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:27:17.329204   66841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:27:17.339063   66841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:27:17.339079   66841 kubeadm.go:157] found existing configuration files:
	
	I0829 20:27:17.339118   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:27:17.348268   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:27:17.348324   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:27:17.357596   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:27:17.366504   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:27:17.366575   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:27:17.376068   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:27:17.385156   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:27:17.385220   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:27:17.394890   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:27:17.404213   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:27:17.404283   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
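The grep-then-rm pairs above implement the stale-config check: any kubeconfig under /etc/kubernetes that does not reference control-plane.minikube.internal:8443 (including ones that are simply absent, as here) is removed so the init phases below regenerate it. A compact sketch of that loop, with pure Go string matching standing in for the shelled-out grep:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// pruneStaleKubeconfigs removes any kubeconfig that does not reference
// the expected control-plane endpoint, mirroring the grep/rm pairs in
// the log; missing files are treated the same as stale ones.
func pruneStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // up to date, keep it
		}
		os.Remove(p) // ignore errors: the file may simply not exist
		fmt.Printf("removed stale %s\n", p)
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```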
	I0829 20:27:17.413669   66841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:27:17.423307   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:17.536003   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:17.990605   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.217809   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.297100   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.421185   66841 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:27:18.421283   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.922043   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
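The restart path above replays individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full `kubeadm init`, which rebuilds a stopped control plane without wiping cluster state, and then polls pgrep until the apiserver process appears. A simplified sketch of that sequence (error handling reduced; the kubeadm binary is assumed to be on PATH rather than under /var/lib/minikube/binaries as in the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The same phases, in the same order, as the log records.
	phases := [][]string{
		{"kubeadm", "init", "phase", "certs", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"kubeadm", "init", "phase", "kubeconfig", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"kubeadm", "init", "phase", "kubelet-start", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"kubeadm", "init", "phase", "control-plane", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"kubeadm", "init", "phase", "etcd", "local", "--config", "/var/tmp/minikube/kubeadm.yaml"},
	}
	for _, p := range phases {
		if out, err := exec.Command("sudo", p...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s\n", p, err, out)
			return
		}
	}
	// Then poll for the apiserver process, the same loop the repeated
	// `pgrep -xnf kube-apiserver.*minikube.*` lines record.
	for i := 0; i < 120; i++ {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("apiserver process up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}
```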
	I0829 20:27:16.209618   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:18.707544   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:17.993609   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:19.994469   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:19.248924   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.248851   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.748547   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.248298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.748802   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.248680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.748271   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.248491   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.748803   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.422030   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.442023   66841 api_server.go:72] duration metric: took 1.020839747s to wait for apiserver process to appear ...
	I0829 20:27:19.442047   66841 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:27:19.442070   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.444156   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:27:22.444192   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:27:22.444211   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.466228   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:27:22.466258   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:27:22.942835   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.949338   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:27:22.949360   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:27:23.443069   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:23.447845   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:27:23.447876   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:27:23.942372   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:23.946517   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I0829 20:27:23.953497   66841 api_server.go:141] control plane version: v1.31.0
	I0829 20:27:23.953522   66841 api_server.go:131] duration metric: took 4.511467637s to wait for apiserver health ...
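The healthz wait that just finished shows the typical startup progression: 403 while the apiserver still rejects the anonymous probe (RBAC bootstrap roles not yet created), 500 while individual poststarthooks report failed, then 200 "ok". A minimal sketch of that polling loop, treating any non-200 answer as "not ready yet" (certificate verification is skipped here for brevity; minikube instead trusts its generated CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz keeps hitting /healthz until it answers 200 "ok".
// 403 (anonymous user before RBAC bootstrap) and 500 (poststarthooks
// still failing) both count as "not ready yet", matching the log.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %.60s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	fmt.Println(pollHealthz("https://192.168.50.214:8443/healthz", 2*time.Minute))
}
```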
	I0829 20:27:23.953530   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:27:23.953536   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:27:23.955180   66841 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:27:23.956396   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:27:23.969429   66841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 20:27:24.000989   66841 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:27:24.014200   66841 system_pods.go:59] 8 kube-system pods found
	I0829 20:27:24.014233   66841 system_pods.go:61] "coredns-6f6b679f8f-g7xxs" [f0148527-2146-4153-aa20-5ac97b664027] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:27:24.014240   66841 system_pods.go:61] "etcd-no-preload-397724" [f04b5ee4-f439-470a-b298-1a9ed569db70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:27:24.014248   66841 system_pods.go:61] "kube-apiserver-no-preload-397724" [2328f327-1744-4785-9266-3f992b977ef8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:27:24.014254   66841 system_pods.go:61] "kube-controller-manager-no-preload-397724" [0e63f04d-8627-45e9-ac80-70a0fe63f5db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:27:24.014260   66841 system_pods.go:61] "kube-proxy-57kbt" [9f85ce17-85a0-4a52-bdaf-4e3aee4d1a98] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:27:24.014267   66841 system_pods.go:61] "kube-scheduler-no-preload-397724" [106821c6-2444-470a-bac1-78838c0b1982] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:27:24.014273   66841 system_pods.go:61] "metrics-server-6867b74b74-668dg" [e3f3ab24-7777-40b0-a54c-00a294e7e68e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:27:24.014280   66841 system_pods.go:61] "storage-provisioner" [146bd02a-8f50-4d19-a188-4adc2bcc0a43] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:27:24.014288   66841 system_pods.go:74] duration metric: took 13.275941ms to wait for pod list to return data ...
	I0829 20:27:24.014298   66841 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:27:24.018932   66841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:27:24.018956   66841 node_conditions.go:123] node cpu capacity is 2
	I0829 20:27:24.018966   66841 node_conditions.go:105] duration metric: took 4.661993ms to run NodePressure ...
	I0829 20:27:24.018981   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:21.207144   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:23.208728   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:22.493988   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:24.494152   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:24.248456   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:24.748347   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.248337   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.748905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.248912   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.749302   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.249058   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.749105   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.248548   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.748298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:24.305237   66841 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:27:24.310640   66841 kubeadm.go:739] kubelet initialised
	I0829 20:27:24.310666   66841 kubeadm.go:740] duration metric: took 5.402212ms waiting for restarted kubelet to initialise ...
	I0829 20:27:24.310679   66841 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:27:24.316568   66841 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:26.325035   66841 pod_ready.go:103] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:28.336627   66841 pod_ready.go:103] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:25.706496   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:27.708228   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:26.992949   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:28.993682   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:30.993877   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:29.248994   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:29.749020   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.248983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.748247   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:31.249052   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:31.249133   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:31.293442   67607 cri.go:89] found id: ""
	I0829 20:27:31.293466   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.293473   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:31.293479   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:31.293527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:31.333976   67607 cri.go:89] found id: ""
	I0829 20:27:31.333999   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.334006   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:31.334011   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:31.334055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:31.373680   67607 cri.go:89] found id: ""
	I0829 20:27:31.373707   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.373715   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:31.373720   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:31.373766   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:31.407798   67607 cri.go:89] found id: ""
	I0829 20:27:31.407824   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.407832   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:31.407837   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:31.407893   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:31.444409   67607 cri.go:89] found id: ""
	I0829 20:27:31.444437   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.444445   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:31.444451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:31.444512   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:31.479313   67607 cri.go:89] found id: ""
	I0829 20:27:31.479333   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.479341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:31.479347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:31.479403   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:31.516056   67607 cri.go:89] found id: ""
	I0829 20:27:31.516089   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.516100   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:31.516108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:31.516168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:31.555324   67607 cri.go:89] found id: ""
	I0829 20:27:31.555349   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.555357   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
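Each cri.go / found id pair above is one per-component container lookup; an empty ID string is what logs.go then reports as "0 containers". A self-contained sketch of that lookup, run locally with os/exec rather than over minikube's ssh_runner (an assumption made to keep it short):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs asks crictl for all containers (any state) whose
    // name matches the component; --quiet prints one container ID per line,
    // so empty output means the container was never created.
    func listContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(c)
            fmt.Printf("%s: %d containers %v (err=%v)\n", c, len(ids), ids, err)
        }
    }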
	I0829 20:27:31.555365   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:31.555375   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:31.626397   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:31.626434   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:31.672006   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:31.672038   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:31.724691   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:31.724727   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:31.740283   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:31.740324   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:31.874007   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
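Taken together, this cycle is the run's failure signature: every control-plane lookup returns an empty container ID, and the describe-nodes probe is refused on localhost:8443 because the v1.20.0 apiserver was never started, so minikube can only fall back to gathering kubelet, dmesg, CRI-O, and container-status output. The same cycle repeats below with no change in outcome.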
	I0829 20:27:29.824509   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:29.824530   66841 pod_ready.go:82] duration metric: took 5.507939145s for pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:29.824547   66841 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:31.833646   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:30.207213   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:32.706352   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:32.993932   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:35.494511   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:34.374203   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:34.387817   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:34.387888   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:34.423254   67607 cri.go:89] found id: ""
	I0829 20:27:34.423279   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.423286   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:34.423296   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:34.423343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:34.457741   67607 cri.go:89] found id: ""
	I0829 20:27:34.457768   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.457775   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:34.457781   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:34.457827   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:34.498432   67607 cri.go:89] found id: ""
	I0829 20:27:34.498457   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.498464   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:34.498469   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:34.498523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:34.534290   67607 cri.go:89] found id: ""
	I0829 20:27:34.534317   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.534324   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:34.534330   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:34.534380   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:34.570878   67607 cri.go:89] found id: ""
	I0829 20:27:34.570909   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.570919   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:34.570928   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:34.570986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:34.615735   67607 cri.go:89] found id: ""
	I0829 20:27:34.615762   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.615769   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:34.615775   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:34.615824   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:34.656667   67607 cri.go:89] found id: ""
	I0829 20:27:34.656706   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.656721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:34.656730   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:34.656779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:34.708906   67607 cri.go:89] found id: ""
	I0829 20:27:34.708928   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.708937   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:34.708947   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:34.708962   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:34.767382   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:34.767417   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:34.786523   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:34.786574   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:34.872832   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:34.872857   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:34.872871   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:34.954581   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:34.954620   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:37.497810   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:37.511479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:37.511539   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:37.547930   67607 cri.go:89] found id: ""
	I0829 20:27:37.547962   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.547972   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:37.547980   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:37.548035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:37.585281   67607 cri.go:89] found id: ""
	I0829 20:27:37.585304   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.585312   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:37.585318   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:37.585365   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:37.622201   67607 cri.go:89] found id: ""
	I0829 20:27:37.622229   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.622241   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:37.622246   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:37.622295   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:37.657248   67607 cri.go:89] found id: ""
	I0829 20:27:37.657274   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.657281   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:37.657289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:37.657335   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:37.691674   67607 cri.go:89] found id: ""
	I0829 20:27:37.691703   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.691711   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:37.691716   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:37.691764   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:37.729523   67607 cri.go:89] found id: ""
	I0829 20:27:37.729548   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.729557   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:37.729562   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:37.729609   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:37.764601   67607 cri.go:89] found id: ""
	I0829 20:27:37.764629   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.764637   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:37.764643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:37.764705   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:37.799228   67607 cri.go:89] found id: ""
	I0829 20:27:37.799259   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.799270   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:37.799281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:37.799301   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:37.848128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:37.848158   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:37.862610   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:37.862640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:37.936859   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:37.936888   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:37.936903   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:38.013647   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:38.013681   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:34.331889   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:36.332334   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.329545   66841 pod_ready.go:93] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.329566   66841 pod_ready.go:82] duration metric: took 7.50501178s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.329576   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.333442   66841 pod_ready.go:93] pod "kube-apiserver-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.333458   66841 pod_ready.go:82] duration metric: took 3.876755ms for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.333467   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.336952   66841 pod_ready.go:93] pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.336968   66841 pod_ready.go:82] duration metric: took 3.49531ms for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.336976   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-57kbt" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.340368   66841 pod_ready.go:93] pod "kube-proxy-57kbt" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.340383   66841 pod_ready.go:82] duration metric: took 3.401844ms for pod "kube-proxy-57kbt" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.340396   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.344111   66841 pod_ready.go:93] pod "kube-scheduler-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.344125   66841 pod_ready.go:82] duration metric: took 3.723924ms for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.344132   66841 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" ...
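The control-plane pods for no-preload-397724 all report Ready within a few milliseconds here, leaving only metrics-server-6867b74b74-668dg, which the following lines poll for up to 4m0s. A hedged sketch of such a bounded wait using apimachinery's wait helpers, reusing podIsReady from the earlier sketch (same package); the 2s interval is an assumption, only the 4m0s bound comes from the log:

    import (
        "context"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodReady blocks until the pod reports Ready=True or the
    // 4m0s budget seen in the log runs out.
    func waitForPodReady(cs kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
            ready, err := podIsReady(context.TODO(), cs, ns, name)
            if err != nil {
                return false, nil // tolerate transient API errors and keep polling
            }
            return ready, nil
        })
    }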
	I0829 20:27:34.708682   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.206876   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.997827   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:40.494840   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:40.551395   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:40.568100   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:40.568181   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:40.616582   67607 cri.go:89] found id: ""
	I0829 20:27:40.616611   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.616623   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:40.616631   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:40.616695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:40.690580   67607 cri.go:89] found id: ""
	I0829 20:27:40.690620   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.690631   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:40.690638   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:40.690695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:40.733624   67607 cri.go:89] found id: ""
	I0829 20:27:40.733653   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.733662   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:40.733670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:40.733733   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:40.767499   67607 cri.go:89] found id: ""
	I0829 20:27:40.767528   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.767538   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:40.767546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:40.767619   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:40.806973   67607 cri.go:89] found id: ""
	I0829 20:27:40.807002   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.807009   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:40.807015   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:40.807079   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:40.842311   67607 cri.go:89] found id: ""
	I0829 20:27:40.842334   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.842341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:40.842347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:40.842401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:40.880208   67607 cri.go:89] found id: ""
	I0829 20:27:40.880238   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.880248   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:40.880255   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:40.880309   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:40.918395   67607 cri.go:89] found id: ""
	I0829 20:27:40.918424   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.918435   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:40.918445   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:40.918459   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:40.972396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:40.972437   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:40.986136   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:40.986169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:41.064600   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:41.064623   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:41.064634   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:41.146653   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:41.146687   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:43.687773   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:43.701576   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:43.701645   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:43.737259   67607 cri.go:89] found id: ""
	I0829 20:27:43.737282   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.737289   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:43.737299   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:43.737346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:43.772678   67607 cri.go:89] found id: ""
	I0829 20:27:43.772702   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.772709   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:43.772714   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:43.772776   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:43.806788   67607 cri.go:89] found id: ""
	I0829 20:27:43.806821   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.806831   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:43.806839   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:43.806900   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:39.350484   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:41.352279   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:43.850564   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:39.707977   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:42.207630   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:42.993571   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:44.994696   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:43.841738   67607 cri.go:89] found id: ""
	I0829 20:27:43.841759   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.841767   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:43.841772   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:43.841829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:43.878420   67607 cri.go:89] found id: ""
	I0829 20:27:43.878449   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.878459   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:43.878466   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:43.878527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:43.914307   67607 cri.go:89] found id: ""
	I0829 20:27:43.914335   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.914345   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:43.914352   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:43.914413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:43.958827   67607 cri.go:89] found id: ""
	I0829 20:27:43.958853   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.958865   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:43.958871   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:43.958935   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:43.997397   67607 cri.go:89] found id: ""
	I0829 20:27:43.997423   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.997432   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:43.997442   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:43.997455   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:44.049245   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:44.049280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:44.063473   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:44.063511   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:44.131628   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:44.131651   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:44.131666   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:44.210826   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:44.210854   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:46.754905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:46.769531   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:46.769588   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:46.805245   67607 cri.go:89] found id: ""
	I0829 20:27:46.805272   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.805280   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:46.805285   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:46.805338   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:46.843606   67607 cri.go:89] found id: ""
	I0829 20:27:46.843637   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.843646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:46.843654   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:46.843710   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:46.880300   67607 cri.go:89] found id: ""
	I0829 20:27:46.880326   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.880333   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:46.880338   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:46.880387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:46.923537   67607 cri.go:89] found id: ""
	I0829 20:27:46.923562   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.923569   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:46.923574   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:46.923620   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:46.957774   67607 cri.go:89] found id: ""
	I0829 20:27:46.957806   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.957817   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:46.957826   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:46.957887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:46.996972   67607 cri.go:89] found id: ""
	I0829 20:27:46.996995   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.997005   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:46.997013   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:46.997056   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:47.030560   67607 cri.go:89] found id: ""
	I0829 20:27:47.030588   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.030606   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:47.030612   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:47.030665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:47.068654   67607 cri.go:89] found id: ""
	I0829 20:27:47.068678   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.068686   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:47.068694   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:47.068706   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:47.082335   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:47.082367   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:47.162792   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:47.162817   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:47.162829   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:47.241456   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:47.241491   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:47.282249   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:47.282274   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:45.850673   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:47.850836   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:44.707198   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:46.707222   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.207556   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:46.995302   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.498812   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.836268   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:49.850415   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:49.850491   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:49.887816   67607 cri.go:89] found id: ""
	I0829 20:27:49.887843   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.887851   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:49.887856   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:49.887916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:49.923701   67607 cri.go:89] found id: ""
	I0829 20:27:49.923735   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.923745   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:49.923755   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:49.923818   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:49.958197   67607 cri.go:89] found id: ""
	I0829 20:27:49.958225   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.958236   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:49.958244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:49.958313   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:49.995333   67607 cri.go:89] found id: ""
	I0829 20:27:49.995361   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.995373   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:49.995380   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:49.995439   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:50.034345   67607 cri.go:89] found id: ""
	I0829 20:27:50.034375   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.034382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:50.034387   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:50.034438   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:50.070324   67607 cri.go:89] found id: ""
	I0829 20:27:50.070355   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.070365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:50.070374   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:50.070434   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:50.107301   67607 cri.go:89] found id: ""
	I0829 20:27:50.107326   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.107334   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:50.107340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:50.107400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:50.144748   67607 cri.go:89] found id: ""
	I0829 20:27:50.144778   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.144788   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:50.144800   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:50.144816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:50.183576   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:50.183606   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:50.236716   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:50.236750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:50.251589   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:50.251612   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:50.317816   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:50.317840   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:50.317855   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:52.894572   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:52.908081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:52.908149   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:52.945272   67607 cri.go:89] found id: ""
	I0829 20:27:52.945299   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.945309   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:52.945317   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:52.945377   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:52.980237   67607 cri.go:89] found id: ""
	I0829 20:27:52.980262   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.980270   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:52.980275   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:52.980325   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:53.017894   67607 cri.go:89] found id: ""
	I0829 20:27:53.017922   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.017929   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:53.017935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:53.017991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:53.052577   67607 cri.go:89] found id: ""
	I0829 20:27:53.052603   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.052611   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:53.052616   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:53.052667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:53.093414   67607 cri.go:89] found id: ""
	I0829 20:27:53.093444   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.093455   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:53.093462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:53.093523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:53.130794   67607 cri.go:89] found id: ""
	I0829 20:27:53.130825   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.130837   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:53.130845   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:53.130902   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:53.163793   67607 cri.go:89] found id: ""
	I0829 20:27:53.163819   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.163827   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:53.163832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:53.163882   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:53.204824   67607 cri.go:89] found id: ""
	I0829 20:27:53.204852   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.204862   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:53.204872   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:53.204885   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:53.243411   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:53.243440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:53.296611   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:53.296642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:53.310909   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:53.310943   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:53.385768   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:53.385790   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:53.385801   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:49.851712   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:52.350295   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:51.711115   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:54.207340   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:51.993943   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:53.996334   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.494226   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:55.966801   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:55.980852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:55.980933   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:56.017682   67607 cri.go:89] found id: ""
	I0829 20:27:56.017707   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.017716   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:56.017722   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:56.017767   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:56.051556   67607 cri.go:89] found id: ""
	I0829 20:27:56.051584   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.051594   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:56.051600   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:56.051665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:56.095301   67607 cri.go:89] found id: ""
	I0829 20:27:56.095330   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.095340   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:56.095348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:56.095408   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:56.131161   67607 cri.go:89] found id: ""
	I0829 20:27:56.131195   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.131205   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:56.131213   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:56.131269   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:56.166611   67607 cri.go:89] found id: ""
	I0829 20:27:56.166637   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.166645   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:56.166651   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:56.166713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:56.202818   67607 cri.go:89] found id: ""
	I0829 20:27:56.202846   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.202856   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:56.202864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:56.202923   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:56.237855   67607 cri.go:89] found id: ""
	I0829 20:27:56.237883   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.237891   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:56.237897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:56.237955   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:56.272402   67607 cri.go:89] found id: ""
	I0829 20:27:56.272426   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.272433   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:56.272441   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:56.272452   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:56.351628   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:56.351653   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:56.389525   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:56.389559   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:56.444952   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:56.444989   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:56.459731   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:56.459759   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:56.536888   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
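From here the 67607 gather cycle repeats on a roughly three-second period with identical results (only the order of the kubelet, dmesg, describe-nodes, CRI-O, and container-status steps rotates between cycles), while the other three runs (66841, 66989, 68084) keep polling their metrics-server pods.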
	I0829 20:27:54.350358   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.350727   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.352884   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.208050   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.706897   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.993153   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:00.993544   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:59.037744   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:59.051868   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:59.051938   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:59.087436   67607 cri.go:89] found id: ""
	I0829 20:27:59.087461   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.087467   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:59.087474   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:59.087531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:59.123729   67607 cri.go:89] found id: ""
	I0829 20:27:59.123757   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.123765   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:59.123771   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:59.123825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:59.168649   67607 cri.go:89] found id: ""
	I0829 20:27:59.168682   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.168690   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:59.168696   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:59.168753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:59.209770   67607 cri.go:89] found id: ""
	I0829 20:27:59.209791   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.209803   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:59.209808   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:59.209854   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:59.248358   67607 cri.go:89] found id: ""
	I0829 20:27:59.248384   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.248392   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:59.248398   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:59.248445   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:59.281770   67607 cri.go:89] found id: ""
	I0829 20:27:59.281797   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.281805   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:59.281811   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:59.281870   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:59.317255   67607 cri.go:89] found id: ""
	I0829 20:27:59.317285   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.317295   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:59.317302   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:59.317363   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:59.354301   67607 cri.go:89] found id: ""
	I0829 20:27:59.354324   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.354332   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:59.354339   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:59.354352   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:59.438346   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:59.438382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:59.482482   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:59.482513   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:59.540926   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:59.540961   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:59.555221   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:59.555258   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:59.622114   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
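	Each retry in the 67607 stream follows the same shape: probe for control-plane containers with crictl, find none, then fall back to gathering logs. As a hedged aside, the empty `found id: ""` results can be reproduced by hand from inside the node (assumptions: a shell in the minikube node, e.g. via `minikube ssh`, and crictl on PATH, which matches this CRI-O job):

	    # Minimal sketch, assuming a shell inside the node and crictl on PATH.
	    # Mirrors the per-component probes logged above; an empty result is the
	    # same condition the log reports as: found id: ""
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done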
	I0829 20:28:02.123276   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:02.137435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:02.137502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:02.176310   67607 cri.go:89] found id: ""
	I0829 20:28:02.176340   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.176347   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:02.176355   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:02.176414   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:02.216511   67607 cri.go:89] found id: ""
	I0829 20:28:02.216555   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.216562   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:02.216574   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:02.216625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:02.260116   67607 cri.go:89] found id: ""
	I0829 20:28:02.260149   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.260158   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:02.260164   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:02.260225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:02.301550   67607 cri.go:89] found id: ""
	I0829 20:28:02.301584   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.301600   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:02.301608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:02.301692   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:02.335916   67607 cri.go:89] found id: ""
	I0829 20:28:02.335948   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.335959   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:02.335967   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:02.336033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:02.372479   67607 cri.go:89] found id: ""
	I0829 20:28:02.372507   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.372515   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:02.372522   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:02.372584   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:02.406683   67607 cri.go:89] found id: ""
	I0829 20:28:02.406713   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.406721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:02.406727   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:02.406774   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:02.443130   67607 cri.go:89] found id: ""
	I0829 20:28:02.443156   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.443164   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:02.443173   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:02.443185   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:02.485747   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:02.485777   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:02.540106   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:02.540143   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:02.556158   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:02.556188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:02.637870   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:02.637900   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:02.637915   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
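	The recurring "connection to the server localhost:8443 was refused" is the expected follow-on: with no kube-apiserver container, nothing listens on the apiserver port at all. A quick hedged check from inside the node (assumption: ss and curl are present in the node image) distinguishes "port closed" from a kubeconfig problem:

	    # Minimal sketch, assuming ss and curl exist in the node image.
	    # "refused" from kubectl means no listener on 8443, consistent with
	    # the empty crictl listings rather than a misconfigured kubeconfig.
	    sudo ss -ltnp | grep ':8443' || echo "nothing listening on :8443"
	    curl -ksm 2 https://localhost:8443/healthz; echo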
	I0829 20:28:00.851416   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:03.351248   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:00.707716   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:02.708204   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:02.994108   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:04.994988   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:05.220330   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:05.233932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:05.233994   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:05.269046   67607 cri.go:89] found id: ""
	I0829 20:28:05.269072   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.269081   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:05.269087   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:05.269134   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:05.303963   67607 cri.go:89] found id: ""
	I0829 20:28:05.303989   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.303999   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:05.304006   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:05.304065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:05.340943   67607 cri.go:89] found id: ""
	I0829 20:28:05.340975   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.340985   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:05.340992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:05.341061   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:05.379551   67607 cri.go:89] found id: ""
	I0829 20:28:05.379582   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.379593   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:05.379601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:05.379659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:05.414229   67607 cri.go:89] found id: ""
	I0829 20:28:05.414256   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.414267   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:05.414274   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:05.414339   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:05.450212   67607 cri.go:89] found id: ""
	I0829 20:28:05.450241   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.450251   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:05.450258   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:05.450318   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:05.487415   67607 cri.go:89] found id: ""
	I0829 20:28:05.487451   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.487463   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:05.487470   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:05.487529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:05.521347   67607 cri.go:89] found id: ""
	I0829 20:28:05.521370   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.521383   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:05.521390   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:05.521402   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:05.572317   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:05.572350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:05.585651   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:05.585680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:05.653929   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:05.653950   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:05.653969   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:05.732843   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:05.732873   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
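	Between probes, minikube gathers the same four sources every cycle: the kubelet and CRI-O journals, level-filtered dmesg, and a container-status listing with a crictl-then-docker fallback. The exact commands from the log can be bundled into one manual collection pass; the only addition below is the output redirection:

	    # Same commands as logged above, bundled; only the redirection to
	    # files is added. Run inside the node.
	    sudo journalctl -u kubelet -n 400 > kubelet.log
	    sudo journalctl -u crio -n 400 > crio.log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	    sudo $(which crictl || echo crictl) ps -a > containers.log || sudo docker ps -a > containers.log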
	I0829 20:28:08.281983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:08.295104   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:08.295166   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:08.328570   67607 cri.go:89] found id: ""
	I0829 20:28:08.328596   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.328605   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:08.328613   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:08.328684   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:08.363567   67607 cri.go:89] found id: ""
	I0829 20:28:08.363595   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.363605   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:08.363613   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:08.363672   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:08.399619   67607 cri.go:89] found id: ""
	I0829 20:28:08.399645   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.399653   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:08.399659   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:08.399707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:08.439252   67607 cri.go:89] found id: ""
	I0829 20:28:08.439283   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.439294   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:08.439301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:08.439357   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:08.477730   67607 cri.go:89] found id: ""
	I0829 20:28:08.477754   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.477762   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:08.477768   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:08.477834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:08.522045   67607 cri.go:89] found id: ""
	I0829 20:28:08.522066   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.522073   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:08.522079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:08.522137   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:08.560400   67607 cri.go:89] found id: ""
	I0829 20:28:08.560427   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.560434   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:08.560441   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:08.560504   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:08.599111   67607 cri.go:89] found id: ""
	I0829 20:28:08.599140   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.599150   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:08.599161   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:08.599175   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:08.681451   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:08.681487   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:08.722800   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:08.722835   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:08.779058   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:08.779089   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:08.796940   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:08.796963   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:28:05.852245   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:08.351402   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:04.708669   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:07.207124   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:07.493431   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:09.493794   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	W0829 20:28:08.868296   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:11.369316   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:11.384150   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:11.384225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:11.418452   67607 cri.go:89] found id: ""
	I0829 20:28:11.418480   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.418488   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:11.418494   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:11.418555   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:11.451359   67607 cri.go:89] found id: ""
	I0829 20:28:11.451389   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.451400   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:11.451408   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:11.451481   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:11.488408   67607 cri.go:89] found id: ""
	I0829 20:28:11.488436   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.488446   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:11.488453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:11.488510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:11.528311   67607 cri.go:89] found id: ""
	I0829 20:28:11.528340   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.528351   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:11.528359   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:11.528412   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:11.571345   67607 cri.go:89] found id: ""
	I0829 20:28:11.571372   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.571382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:11.571389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:11.571454   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:11.606812   67607 cri.go:89] found id: ""
	I0829 20:28:11.606839   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.606850   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:11.606857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:11.606918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:11.652687   67607 cri.go:89] found id: ""
	I0829 20:28:11.652710   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.652717   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:11.652722   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:11.652781   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:11.687583   67607 cri.go:89] found id: ""
	I0829 20:28:11.687628   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.687645   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:11.687655   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:11.687673   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:11.727052   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:11.727086   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:11.779116   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:11.779155   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:11.792911   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:11.792949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:11.868415   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:11.868443   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:11.868461   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:10.850225   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:13.351638   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:09.707347   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:11.709556   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.206996   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:11.994187   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.494457   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
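	Interleaved with the 67607 retries, three sibling test processes (66841, 66989, 68084) are polling metrics-server pods that stay Ready=False for the whole window. A hedged one-liner for the same readiness check (assumptions: a kubeconfig that reaches the cluster under test, and the conventional k8s-app=metrics-server label, which the log itself does not show):

	    # Minimal sketch; the k8s-app=metrics-server selector is an assumption,
	    # the pod names in the log (e.g. metrics-server-6867b74b74-668dg) are not.
	    kubectl -n kube-system get pods -l k8s-app=metrics-server \
	      -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'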
	I0829 20:28:14.447886   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:14.462144   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:14.462221   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:14.499160   67607 cri.go:89] found id: ""
	I0829 20:28:14.499185   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.499193   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:14.499200   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:14.499258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:14.545736   67607 cri.go:89] found id: ""
	I0829 20:28:14.545764   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.545774   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:14.545780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:14.545844   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:14.583626   67607 cri.go:89] found id: ""
	I0829 20:28:14.583664   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.583674   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:14.583682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:14.583744   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:14.619876   67607 cri.go:89] found id: ""
	I0829 20:28:14.619909   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.619917   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:14.619923   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:14.619975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:14.655750   67607 cri.go:89] found id: ""
	I0829 20:28:14.655778   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.655786   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:14.655791   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:14.655848   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:14.690759   67607 cri.go:89] found id: ""
	I0829 20:28:14.690785   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.690795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:14.690800   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:14.690850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:14.727238   67607 cri.go:89] found id: ""
	I0829 20:28:14.727269   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.727282   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:14.727289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:14.727344   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:14.765962   67607 cri.go:89] found id: ""
	I0829 20:28:14.765996   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.766006   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:14.766017   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:14.766033   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:14.835749   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:14.835779   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:14.835797   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:14.914075   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:14.914112   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:14.952684   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:14.952712   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:15.004598   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:15.004635   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:17.518949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:17.532175   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:17.532250   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:17.569943   67607 cri.go:89] found id: ""
	I0829 20:28:17.569971   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.569979   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:17.569985   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:17.570044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:17.605472   67607 cri.go:89] found id: ""
	I0829 20:28:17.605502   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.605510   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:17.605515   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:17.605566   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:17.641568   67607 cri.go:89] found id: ""
	I0829 20:28:17.641593   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.641603   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:17.641610   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:17.641669   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:17.680870   67607 cri.go:89] found id: ""
	I0829 20:28:17.680895   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.680905   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:17.680916   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:17.680981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:17.723546   67607 cri.go:89] found id: ""
	I0829 20:28:17.723576   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.723587   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:17.723594   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:17.723659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:17.757934   67607 cri.go:89] found id: ""
	I0829 20:28:17.757962   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.757973   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:17.757980   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:17.758028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:17.792641   67607 cri.go:89] found id: ""
	I0829 20:28:17.792670   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.792679   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:17.792685   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:17.792738   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:17.830776   67607 cri.go:89] found id: ""
	I0829 20:28:17.830800   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.830807   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:17.830815   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:17.830825   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:17.886331   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:17.886377   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:17.900111   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:17.900135   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:17.969538   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:17.969563   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:17.969577   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:18.050609   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:18.050649   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
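	The "describe nodes" step runs the node-local kubectl binary against the node's own kubeconfig rather than the host's. To repeat the failing call by hand, the paths can be copied straight from the log; checking which server the kubeconfig targets first confirms the refusal really concerns localhost:8443 (paths below are verbatim from the log; a shell inside the node is assumed):

	    # Paths copied verbatim from the log; run inside the node.
	    sudo grep 'server:' /var/lib/minikube/kubeconfig
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig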
	I0829 20:28:15.850497   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:17.851663   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:16.707415   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:19.207313   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:16.994325   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:19.494247   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:20.590686   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:20.605066   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:20.605121   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:20.646028   67607 cri.go:89] found id: ""
	I0829 20:28:20.646058   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.646074   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:20.646082   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:20.646143   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:20.683433   67607 cri.go:89] found id: ""
	I0829 20:28:20.683469   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.683479   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:20.683487   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:20.683567   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:20.722737   67607 cri.go:89] found id: ""
	I0829 20:28:20.722765   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.722775   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:20.722782   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:20.722841   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:20.759777   67607 cri.go:89] found id: ""
	I0829 20:28:20.759800   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.759807   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:20.759812   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:20.759864   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:20.799142   67607 cri.go:89] found id: ""
	I0829 20:28:20.799164   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.799170   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:20.799176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:20.799223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:20.838331   67607 cri.go:89] found id: ""
	I0829 20:28:20.838357   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.838365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:20.838371   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:20.838427   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:20.878066   67607 cri.go:89] found id: ""
	I0829 20:28:20.878099   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.878110   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:20.878117   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:20.878175   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:20.928940   67607 cri.go:89] found id: ""
	I0829 20:28:20.928966   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.928975   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:20.928982   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:20.928993   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:20.984435   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:20.984471   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:21.005860   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:21.005900   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:21.084092   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:21.084123   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:21.084138   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:21.165971   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:21.166009   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
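	Each cycle opens with a pgrep probe before any crictl query, looking for a kube-apiserver process whose command line mentions minikube. The flags matter: with -f the pattern is matched against the full command line, -x requires the pattern to match that line exactly, and -n returns only the newest matching PID. A sketch of the same probe:

	    # Same probe as the log; -f matches the full command line, -x anchors
	    # the pattern to the whole line, -n keeps only the newest matching PID.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
	      && echo "apiserver process found" \
	      || echo "no apiserver process running"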
	I0829 20:28:23.705033   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:23.718332   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:23.718390   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:23.753594   67607 cri.go:89] found id: ""
	I0829 20:28:23.753625   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.753635   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:23.753650   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:23.753715   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:23.791840   67607 cri.go:89] found id: ""
	I0829 20:28:23.791864   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.791872   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:23.791878   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:23.791930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:20.350028   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:22.350487   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:21.207839   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.707197   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:21.993965   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.994879   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.493735   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.837815   67607 cri.go:89] found id: ""
	I0829 20:28:23.837839   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.837846   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:23.837851   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:23.837908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:23.873155   67607 cri.go:89] found id: ""
	I0829 20:28:23.873184   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.873194   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:23.873201   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:23.873265   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:23.908728   67607 cri.go:89] found id: ""
	I0829 20:28:23.908757   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.908768   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:23.908774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:23.908834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:23.946286   67607 cri.go:89] found id: ""
	I0829 20:28:23.946310   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.946320   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:23.946328   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:23.946392   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:23.983078   67607 cri.go:89] found id: ""
	I0829 20:28:23.983105   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.983115   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:23.983129   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:23.983190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:24.020601   67607 cri.go:89] found id: ""
	I0829 20:28:24.020634   67607 logs.go:276] 0 containers: []
	W0829 20:28:24.020644   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:24.020654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:24.020669   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:24.034438   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:24.034463   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:24.103209   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:24.103230   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:24.103243   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:24.182977   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:24.183016   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:24.224743   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:24.224834   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:26.781507   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:26.794301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:26.794387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:26.827218   67607 cri.go:89] found id: ""
	I0829 20:28:26.827243   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.827250   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:26.827257   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:26.827303   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:26.862643   67607 cri.go:89] found id: ""
	I0829 20:28:26.862673   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.862685   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:26.862693   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:26.862743   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:26.898127   67607 cri.go:89] found id: ""
	I0829 20:28:26.898159   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.898169   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:26.898177   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:26.898237   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:26.932119   67607 cri.go:89] found id: ""
	I0829 20:28:26.932146   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.932167   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:26.932174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:26.932241   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:26.966380   67607 cri.go:89] found id: ""
	I0829 20:28:26.966413   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.966421   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:26.966427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:26.966478   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:27.004350   67607 cri.go:89] found id: ""
	I0829 20:28:27.004372   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.004379   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:27.004386   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:27.004436   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:27.041171   67607 cri.go:89] found id: ""
	I0829 20:28:27.041199   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.041206   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:27.041212   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:27.041257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:27.073993   67607 cri.go:89] found id: ""
	I0829 20:28:27.074031   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.074041   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:27.074053   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:27.074066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:27.148169   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:27.148199   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:27.148214   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:27.227174   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:27.227212   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:27.267180   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:27.267230   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:27.319034   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:27.319066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:24.350754   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.850582   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.207974   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:28.707820   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:28.494090   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:30.994157   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:29.833497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:29.846883   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:29.846951   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:29.884133   67607 cri.go:89] found id: ""
	I0829 20:28:29.884163   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.884175   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:29.884182   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:29.884247   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:29.917594   67607 cri.go:89] found id: ""
	I0829 20:28:29.917618   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.917628   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:29.917636   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:29.917696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:29.952537   67607 cri.go:89] found id: ""
	I0829 20:28:29.952568   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.952576   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:29.952582   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:29.952630   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:29.988410   67607 cri.go:89] found id: ""
	I0829 20:28:29.988441   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.988448   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:29.988454   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:29.988511   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:30.026761   67607 cri.go:89] found id: ""
	I0829 20:28:30.026788   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.026796   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:30.026802   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:30.026861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:30.063010   67607 cri.go:89] found id: ""
	I0829 20:28:30.063037   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.063046   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:30.063054   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:30.063109   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:30.098067   67607 cri.go:89] found id: ""
	I0829 20:28:30.098093   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.098101   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:30.098107   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:30.098161   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:30.132887   67607 cri.go:89] found id: ""
	I0829 20:28:30.132914   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.132921   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:30.132928   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:30.132940   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:30.184955   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:30.184990   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:30.198966   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:30.199004   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:30.268950   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:30.268977   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:30.268991   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:30.354222   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:30.354260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
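
	The 67607 block above is one iteration of minikube's diagnostic loop for a control plane that never came up: probe for a kube-apiserver process, list CRI containers for each expected component, and, finding none, fall back to gathering kubelet, dmesg, CRI-O, and container-status logs (the `kubectl describe nodes` step cannot succeed, since nothing is listening on localhost:8443). A minimal local sketch of the crictl sweep follows; it assumes crictl and sudo are available on the current host, whereas minikube runs the same commands on the guest through ssh_runner.go.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// listContainers mirrors the `sudo crictl ps -a --quiet --name=<name>`
	// calls in the log and returns any matching container IDs.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for attempt := 1; attempt <= 3; attempt++ {
			for _, name := range components {
				ids, err := listContainers(name)
				if err != nil || len(ids) == 0 {
					fmt.Printf("no container was found matching %q\n", name)
				} else {
					fmt.Printf("%s: %v\n", name, ids)
				}
			}
			// The log shows the sweep repeating every few seconds.
			time.Sleep(3 * time.Second)
		}
	}
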
	I0829 20:28:32.896554   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:32.911188   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:32.911271   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:32.945726   67607 cri.go:89] found id: ""
	I0829 20:28:32.945750   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.945758   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:32.945773   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:32.945829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:32.980234   67607 cri.go:89] found id: ""
	I0829 20:28:32.980267   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.980275   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:32.980281   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:32.980329   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:33.019031   67607 cri.go:89] found id: ""
	I0829 20:28:33.019063   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.019071   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:33.019076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:33.019126   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:33.056290   67607 cri.go:89] found id: ""
	I0829 20:28:33.056314   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.056322   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:33.056327   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:33.056391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:33.090038   67607 cri.go:89] found id: ""
	I0829 20:28:33.090068   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.090078   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:33.090086   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:33.090152   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:33.125742   67607 cri.go:89] found id: ""
	I0829 20:28:33.125774   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.125782   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:33.125787   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:33.125849   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:33.159019   67607 cri.go:89] found id: ""
	I0829 20:28:33.159047   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.159058   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:33.159065   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:33.159125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:33.197900   67607 cri.go:89] found id: ""
	I0829 20:28:33.197925   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.197933   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:33.197941   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:33.197955   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:33.250010   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:33.250040   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:33.263348   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:33.263374   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:33.342037   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:33.342065   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:33.342082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:33.423324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:33.423361   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:29.350275   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:31.350994   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:33.850866   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:30.713472   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:33.207271   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:32.995169   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.493980   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.963734   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:35.978648   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:35.978713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:36.015326   67607 cri.go:89] found id: ""
	I0829 20:28:36.015350   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.015358   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:36.015364   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:36.015411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:36.050840   67607 cri.go:89] found id: ""
	I0829 20:28:36.050869   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.050879   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:36.050886   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:36.050947   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:36.084048   67607 cri.go:89] found id: ""
	I0829 20:28:36.084076   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.084084   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:36.084090   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:36.084138   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:36.118655   67607 cri.go:89] found id: ""
	I0829 20:28:36.118682   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.118693   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:36.118702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:36.118762   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:36.153879   67607 cri.go:89] found id: ""
	I0829 20:28:36.153908   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.153918   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:36.153926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:36.153988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:36.199834   67607 cri.go:89] found id: ""
	I0829 20:28:36.199858   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.199866   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:36.199872   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:36.199927   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:36.238098   67607 cri.go:89] found id: ""
	I0829 20:28:36.238129   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.238139   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:36.238146   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:36.238208   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:36.272091   67607 cri.go:89] found id: ""
	I0829 20:28:36.272124   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.272135   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:36.272146   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:36.272162   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:36.338478   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:36.338498   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:36.338510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:36.418637   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:36.418671   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:36.458167   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:36.458194   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:36.508592   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:36.508630   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:36.351066   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:38.849684   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.706813   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:37.708058   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:38.003178   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:40.493065   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:39.022668   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:39.035897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:39.035971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:39.071155   67607 cri.go:89] found id: ""
	I0829 20:28:39.071185   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.071196   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:39.071203   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:39.071258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:39.104135   67607 cri.go:89] found id: ""
	I0829 20:28:39.104177   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.104188   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:39.104206   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:39.104266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:39.138301   67607 cri.go:89] found id: ""
	I0829 20:28:39.138329   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.138339   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:39.138346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:39.138404   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:39.172674   67607 cri.go:89] found id: ""
	I0829 20:28:39.172700   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.172708   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:39.172719   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:39.172779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:39.209810   67607 cri.go:89] found id: ""
	I0829 20:28:39.209836   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.209845   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:39.209852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:39.209915   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:39.248692   67607 cri.go:89] found id: ""
	I0829 20:28:39.248715   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.248722   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:39.248728   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:39.248798   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:39.284303   67607 cri.go:89] found id: ""
	I0829 20:28:39.284333   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.284343   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:39.284351   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:39.284401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:39.321346   67607 cri.go:89] found id: ""
	I0829 20:28:39.321375   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.321386   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:39.321396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:39.321410   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:39.334678   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:39.334710   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:39.421992   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:39.422014   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:39.422027   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:39.503250   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:39.503280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:39.540623   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:39.540654   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.092131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:42.105440   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:42.105498   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:42.140994   67607 cri.go:89] found id: ""
	I0829 20:28:42.141024   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.141034   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:42.141042   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:42.141102   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:42.175182   67607 cri.go:89] found id: ""
	I0829 20:28:42.175217   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.175228   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:42.175248   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:42.175319   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:42.209251   67607 cri.go:89] found id: ""
	I0829 20:28:42.209281   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.209291   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:42.209299   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:42.209362   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:42.247944   67607 cri.go:89] found id: ""
	I0829 20:28:42.247970   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.247977   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:42.247983   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:42.248028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:42.285613   67607 cri.go:89] found id: ""
	I0829 20:28:42.285644   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.285651   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:42.285657   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:42.285722   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:42.319826   67607 cri.go:89] found id: ""
	I0829 20:28:42.319851   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.319858   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:42.319864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:42.319928   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:42.357150   67607 cri.go:89] found id: ""
	I0829 20:28:42.357173   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.357182   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:42.357189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:42.357243   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:42.392150   67607 cri.go:89] found id: ""
	I0829 20:28:42.392170   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.392178   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:42.392185   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:42.392197   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:42.469240   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:42.469271   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:42.469286   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:42.549165   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:42.549198   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:42.591900   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:42.591930   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.642593   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:42.642625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:40.851544   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:43.350420   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:39.708341   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:42.206888   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:44.207934   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:42.494791   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:44.992992   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:45.157092   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:45.170832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:45.170916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:45.207210   67607 cri.go:89] found id: ""
	I0829 20:28:45.207235   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.207244   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:45.207251   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:45.207308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:45.245321   67607 cri.go:89] found id: ""
	I0829 20:28:45.245352   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.245362   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:45.245379   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:45.245448   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:45.280326   67607 cri.go:89] found id: ""
	I0829 20:28:45.280369   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.280381   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:45.280389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:45.280451   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:45.318294   67607 cri.go:89] found id: ""
	I0829 20:28:45.318322   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.318333   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:45.318340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:45.318411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:45.352903   67607 cri.go:89] found id: ""
	I0829 20:28:45.352925   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.352932   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:45.352938   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:45.352990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:45.389251   67607 cri.go:89] found id: ""
	I0829 20:28:45.389273   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.389280   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:45.389286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:45.389340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:45.424348   67607 cri.go:89] found id: ""
	I0829 20:28:45.424385   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.424397   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:45.424404   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:45.424453   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:45.459058   67607 cri.go:89] found id: ""
	I0829 20:28:45.459087   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.459098   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:45.459109   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:45.459124   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:45.510386   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:45.510423   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:45.524896   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:45.524923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:45.593987   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:45.594064   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:45.594082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:45.668738   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:45.668771   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:48.206497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:48.219625   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:48.219696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:48.254936   67607 cri.go:89] found id: ""
	I0829 20:28:48.254959   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.254966   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:48.254971   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:48.255018   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:48.290826   67607 cri.go:89] found id: ""
	I0829 20:28:48.290851   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.290859   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:48.290864   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:48.290910   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:48.327508   67607 cri.go:89] found id: ""
	I0829 20:28:48.327533   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.327540   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:48.327546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:48.327593   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:48.364492   67607 cri.go:89] found id: ""
	I0829 20:28:48.364517   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.364525   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:48.364530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:48.364580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:48.400035   67607 cri.go:89] found id: ""
	I0829 20:28:48.400062   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.400072   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:48.400079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:48.400144   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:48.433999   67607 cri.go:89] found id: ""
	I0829 20:28:48.434026   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.434035   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:48.434043   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:48.434104   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:48.468841   67607 cri.go:89] found id: ""
	I0829 20:28:48.468873   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.468889   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:48.468903   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:48.468971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:48.506557   67607 cri.go:89] found id: ""
	I0829 20:28:48.506589   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.506598   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:48.506609   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:48.506624   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:48.577023   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:48.577044   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:48.577056   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:48.654372   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:48.654407   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:48.691125   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:48.691152   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:48.746383   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:48.746414   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
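
	Every `describe nodes` attempt in these cycles fails the same way: with no kube-apiserver container running (all the crictl listings return empty), nothing serves localhost:8443, so kubectl's connection is refused before any API call can happen. As a sketch, the quickest way to confirm that from code is a plain TCP dial:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// With no apiserver container up, this fails with "connection refused",
		// matching the kubectl stderr repeated throughout the log.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}
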
	I0829 20:28:45.350581   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:47.351437   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:46.705575   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:48.707018   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:46.993532   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:48.994284   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.494177   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.260591   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:51.273911   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:51.273974   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:51.311517   67607 cri.go:89] found id: ""
	I0829 20:28:51.311545   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.311553   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:51.311567   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:51.311616   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:51.348220   67607 cri.go:89] found id: ""
	I0829 20:28:51.348247   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.348256   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:51.348264   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:51.348321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:51.383560   67607 cri.go:89] found id: ""
	I0829 20:28:51.383599   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.383611   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:51.383619   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:51.383680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:51.419241   67607 cri.go:89] found id: ""
	I0829 20:28:51.419268   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.419278   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:51.419286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:51.419343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:51.453954   67607 cri.go:89] found id: ""
	I0829 20:28:51.453979   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.453986   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:51.453992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:51.454047   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:51.489457   67607 cri.go:89] found id: ""
	I0829 20:28:51.489480   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.489488   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:51.489493   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:51.489544   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:51.524072   67607 cri.go:89] found id: ""
	I0829 20:28:51.524100   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.524107   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:51.524113   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:51.524160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:51.561238   67607 cri.go:89] found id: ""
	I0829 20:28:51.561263   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.561271   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:51.561279   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:51.561290   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:51.615422   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:51.615462   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:51.632180   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:51.632216   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:51.704335   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:51.704363   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:51.704378   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:51.794219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:51.794260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:49.852140   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:52.351142   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.205903   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:53.207651   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:53.495412   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:55.993489   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:54.342556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:54.356325   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:54.356400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:54.390928   67607 cri.go:89] found id: ""
	I0829 20:28:54.390952   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.390959   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:54.390965   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:54.391011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:54.426970   67607 cri.go:89] found id: ""
	I0829 20:28:54.427002   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.427013   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:54.427020   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:54.427074   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:54.464121   67607 cri.go:89] found id: ""
	I0829 20:28:54.464155   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.464166   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:54.464174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:54.464236   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:54.499790   67607 cri.go:89] found id: ""
	I0829 20:28:54.499816   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.499827   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:54.499840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:54.499889   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:54.537212   67607 cri.go:89] found id: ""
	I0829 20:28:54.537239   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.537249   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:54.537256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:54.537314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:54.575370   67607 cri.go:89] found id: ""
	I0829 20:28:54.575399   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.575410   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:54.575417   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:54.575469   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:54.608403   67607 cri.go:89] found id: ""
	I0829 20:28:54.608432   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.608443   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:54.608453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:54.608514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:54.645259   67607 cri.go:89] found id: ""
	I0829 20:28:54.645285   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.645292   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:54.645300   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:54.645311   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:54.697022   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:54.697063   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:54.712873   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:54.712914   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:54.814253   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:54.814278   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:54.814295   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:54.896473   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:54.896507   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:57.441648   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:57.455245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:57.455321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:57.495365   67607 cri.go:89] found id: ""
	I0829 20:28:57.495397   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.495405   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:57.495411   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:57.495472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:57.529555   67607 cri.go:89] found id: ""
	I0829 20:28:57.529582   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.529590   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:57.529597   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:57.529667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:57.564168   67607 cri.go:89] found id: ""
	I0829 20:28:57.564196   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.564208   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:57.564215   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:57.564277   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:57.602057   67607 cri.go:89] found id: ""
	I0829 20:28:57.602089   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.602100   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:57.602108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:57.602194   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:57.638195   67607 cri.go:89] found id: ""
	I0829 20:28:57.638226   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.638235   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:57.638244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:57.638307   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:57.674556   67607 cri.go:89] found id: ""
	I0829 20:28:57.674605   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.674615   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:57.674623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:57.674680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:57.709256   67607 cri.go:89] found id: ""
	I0829 20:28:57.709282   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.709291   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:57.709298   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:57.709358   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:57.743629   67607 cri.go:89] found id: ""
	I0829 20:28:57.743652   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.743659   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:57.743668   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:57.743679   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:57.789067   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:57.789098   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:57.843372   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:57.843403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:57.858630   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:57.858661   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:57.927776   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:57.927798   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:57.927814   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:54.850906   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:56.851300   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:55.208638   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:57.707756   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:57.994287   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.493343   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.508180   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:00.521451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:00.521529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:00.557912   67607 cri.go:89] found id: ""
	I0829 20:29:00.557938   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.557945   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:00.557951   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:00.557997   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:00.595186   67607 cri.go:89] found id: ""
	I0829 20:29:00.595215   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.595226   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:00.595237   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:00.595299   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:00.631553   67607 cri.go:89] found id: ""
	I0829 20:29:00.631581   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.631592   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:00.631600   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:00.631660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:00.666502   67607 cri.go:89] found id: ""
	I0829 20:29:00.666525   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.666551   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:00.666560   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:00.666621   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:00.700797   67607 cri.go:89] found id: ""
	I0829 20:29:00.700824   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.700835   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:00.700842   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:00.700908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:00.739957   67607 cri.go:89] found id: ""
	I0829 20:29:00.739976   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.739989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:00.739994   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:00.740035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:00.800704   67607 cri.go:89] found id: ""
	I0829 20:29:00.800740   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.800750   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:00.800757   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:00.800820   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:00.837678   67607 cri.go:89] found id: ""
	I0829 20:29:00.837704   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.837712   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:00.837720   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:00.837731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:00.888359   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:00.888391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:00.903074   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:00.903103   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:00.964865   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:00.964885   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:00.964898   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:01.049351   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:01.049387   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
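Each of the 67607 cycles in this log is the same health sweep, repeated every ~3s against the old-k8s-version (v1.20.0) node: pgrep for a kube-apiserver process, a crictl listing per control-plane component, then kubelet/dmesg/CRI-O log gathering, with describe nodes failing while nothing answers on localhost:8443. A rough standalone rendering of that sweep, assuming it is run inside the guest where crictl and journalctl are available (the same commands the Run: lines execute):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      # empty output is what the log reports as: No container was found matching "<name>"
      [ -z "$ids" ] && echo "no container was found matching \"$name\""
    done
    sudo journalctl -u kubelet -n 400                            # kubelet logs
    sudo journalctl -u crio -n 400                               # CRI-O logs
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig                  # refused while :8443 is down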
	I0829 20:29:03.589829   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:03.603120   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:03.603192   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:03.637647   67607 cri.go:89] found id: ""
	I0829 20:29:03.637672   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.637678   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:03.637684   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:03.637732   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:03.673807   67607 cri.go:89] found id: ""
	I0829 20:29:03.673842   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.673852   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:03.673860   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:03.673918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:03.709490   67607 cri.go:89] found id: ""
	I0829 20:29:03.709516   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.709527   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:03.709533   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:03.709595   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:03.751662   67607 cri.go:89] found id: ""
	I0829 20:29:03.751688   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.751696   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:03.751702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:03.751751   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:03.787861   67607 cri.go:89] found id: ""
	I0829 20:29:03.787896   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.787908   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:03.787917   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:03.787977   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:59.350888   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:01.850615   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:03.851438   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.207912   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:02.707309   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:02.493506   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:04.494305   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:03.824383   67607 cri.go:89] found id: ""
	I0829 20:29:03.824413   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.824431   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:03.824438   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:03.824499   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:03.863904   67607 cri.go:89] found id: ""
	I0829 20:29:03.863929   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.863937   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:03.863943   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:03.863990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:03.902336   67607 cri.go:89] found id: ""
	I0829 20:29:03.902360   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.902368   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:03.902375   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:03.902386   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:03.951468   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:03.951499   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:03.965789   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:03.965816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:04.035096   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:04.035119   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:04.035193   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:04.115842   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:04.115876   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:06.662652   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:06.676508   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:06.676583   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:06.713058   67607 cri.go:89] found id: ""
	I0829 20:29:06.713084   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.713093   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:06.713101   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:06.713171   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:06.747513   67607 cri.go:89] found id: ""
	I0829 20:29:06.747544   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.747552   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:06.747557   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:06.747617   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:06.782662   67607 cri.go:89] found id: ""
	I0829 20:29:06.782689   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.782695   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:06.782701   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:06.782758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:06.818472   67607 cri.go:89] found id: ""
	I0829 20:29:06.818500   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.818510   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:06.818516   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:06.818586   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:06.852928   67607 cri.go:89] found id: ""
	I0829 20:29:06.852954   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.852964   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:06.852974   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:06.853032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:06.893859   67607 cri.go:89] found id: ""
	I0829 20:29:06.893889   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.893899   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:06.893907   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:06.893969   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:06.931552   67607 cri.go:89] found id: ""
	I0829 20:29:06.931584   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.931594   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:06.931601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:06.931662   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:06.967210   67607 cri.go:89] found id: ""
	I0829 20:29:06.967243   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.967254   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:06.967266   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:06.967279   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:07.020595   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:07.020631   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:07.034738   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:07.034764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:07.103726   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:07.103747   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:07.103760   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:07.184727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:07.184764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:06.350610   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:08.351571   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:05.207055   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:07.207650   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:06.994653   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.493932   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.746639   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:09.761228   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:09.761308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:09.802071   67607 cri.go:89] found id: ""
	I0829 20:29:09.802102   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.802113   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:09.802122   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:09.802180   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:09.837352   67607 cri.go:89] found id: ""
	I0829 20:29:09.837385   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.837395   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:09.837402   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:09.837464   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:09.874951   67607 cri.go:89] found id: ""
	I0829 20:29:09.874980   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.874992   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:09.874999   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:09.875055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:09.909660   67607 cri.go:89] found id: ""
	I0829 20:29:09.909696   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.909706   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:09.909713   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:09.909777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:09.949727   67607 cri.go:89] found id: ""
	I0829 20:29:09.949751   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.949759   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:09.949765   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:09.949825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:09.984576   67607 cri.go:89] found id: ""
	I0829 20:29:09.984609   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.984617   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:09.984623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:09.984675   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:10.022499   67607 cri.go:89] found id: ""
	I0829 20:29:10.022523   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.022530   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:10.022553   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:10.022624   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:10.064308   67607 cri.go:89] found id: ""
	I0829 20:29:10.064346   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.064356   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:10.064367   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:10.064382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:10.113505   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:10.113537   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:10.127614   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:10.127640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:10.200558   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:10.200579   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:10.200592   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:10.292984   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:10.293020   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:12.833100   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:12.846645   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:12.846712   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:12.885396   67607 cri.go:89] found id: ""
	I0829 20:29:12.885423   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.885430   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:12.885436   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:12.885486   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:12.922556   67607 cri.go:89] found id: ""
	I0829 20:29:12.922584   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.922595   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:12.922602   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:12.922688   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:12.965294   67607 cri.go:89] found id: ""
	I0829 20:29:12.965324   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.965335   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:12.965342   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:12.965401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:13.022911   67607 cri.go:89] found id: ""
	I0829 20:29:13.022934   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.022942   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:13.022948   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:13.023009   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:13.077009   67607 cri.go:89] found id: ""
	I0829 20:29:13.077035   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.077043   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:13.077048   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:13.077095   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:13.114202   67607 cri.go:89] found id: ""
	I0829 20:29:13.114233   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.114243   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:13.114251   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:13.114315   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:13.147025   67607 cri.go:89] found id: ""
	I0829 20:29:13.147049   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.147057   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:13.147063   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:13.147110   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:13.183112   67607 cri.go:89] found id: ""
	I0829 20:29:13.183138   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.183148   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:13.183159   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:13.183173   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:13.240558   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:13.240595   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:13.255563   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:13.255589   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:13.322826   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:13.322846   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:13.322857   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:13.399330   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:13.399365   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:10.850650   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:12.852188   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.706791   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:11.707397   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:13.708663   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:11.993311   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:13.994310   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:16.494854   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:15.938467   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:15.951742   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:15.951812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:15.987492   67607 cri.go:89] found id: ""
	I0829 20:29:15.987517   67607 logs.go:276] 0 containers: []
	W0829 20:29:15.987524   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:15.987530   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:15.987575   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:16.024187   67607 cri.go:89] found id: ""
	I0829 20:29:16.024214   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.024223   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:16.024231   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:16.024291   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:16.058141   67607 cri.go:89] found id: ""
	I0829 20:29:16.058164   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.058171   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:16.058176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:16.058225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:16.092390   67607 cri.go:89] found id: ""
	I0829 20:29:16.092414   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.092421   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:16.092427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:16.092472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:16.130178   67607 cri.go:89] found id: ""
	I0829 20:29:16.130209   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.130219   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:16.130227   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:16.130289   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:16.163867   67607 cri.go:89] found id: ""
	I0829 20:29:16.163900   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.163907   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:16.163913   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:16.163964   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:16.197764   67607 cri.go:89] found id: ""
	I0829 20:29:16.197792   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.197798   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:16.197804   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:16.197850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:16.233357   67607 cri.go:89] found id: ""
	I0829 20:29:16.233383   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.233393   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:16.233403   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:16.233418   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:16.285154   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:16.285188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:16.299057   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:16.299085   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:16.377021   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:16.377041   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:16.377062   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:16.457750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:16.457796   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:15.350415   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:17.850927   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:16.206841   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.207273   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.993478   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:21.493806   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.999133   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:19.016143   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:19.016223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:19.049225   67607 cri.go:89] found id: ""
	I0829 20:29:19.049252   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.049259   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:19.049265   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:19.049317   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:19.085237   67607 cri.go:89] found id: ""
	I0829 20:29:19.085297   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.085314   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:19.085325   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:19.085389   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:19.123476   67607 cri.go:89] found id: ""
	I0829 20:29:19.123501   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.123509   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:19.123514   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:19.123571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:19.159958   67607 cri.go:89] found id: ""
	I0829 20:29:19.159984   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.159993   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:19.160001   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:19.160055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:19.192385   67607 cri.go:89] found id: ""
	I0829 20:29:19.192410   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.192418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:19.192423   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:19.192483   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:19.230781   67607 cri.go:89] found id: ""
	I0829 20:29:19.230804   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.230811   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:19.230816   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:19.230868   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:19.264925   67607 cri.go:89] found id: ""
	I0829 20:29:19.264954   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.264964   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:19.264972   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:19.265032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:19.302461   67607 cri.go:89] found id: ""
	I0829 20:29:19.302484   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.302491   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:19.302499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:19.302510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:19.384799   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:19.384833   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:19.425281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:19.425313   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:19.477380   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:19.477412   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:19.492315   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:19.492350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:19.563428   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
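Every describe-nodes attempt dies on the same connection refusal, which simply means nothing is listening on the apiserver port yet; that is consistent with crictl finding no kube-apiserver container in the sweeps above. Two ordinary checks (standard tools, not taken from this log) that would confirm it from inside the guest:

    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on :8443"
    curl -ksS --max-time 2 https://localhost:8443/healthz || true   # connection refused until the apiserver is up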
	I0829 20:29:22.064407   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:22.078609   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:22.078670   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:22.112630   67607 cri.go:89] found id: ""
	I0829 20:29:22.112662   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.112672   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:22.112680   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:22.112741   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:22.149078   67607 cri.go:89] found id: ""
	I0829 20:29:22.149108   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.149117   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:22.149124   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:22.149186   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:22.184568   67607 cri.go:89] found id: ""
	I0829 20:29:22.184596   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.184605   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:22.184613   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:22.184682   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:22.220881   67607 cri.go:89] found id: ""
	I0829 20:29:22.220908   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.220919   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:22.220926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:22.220987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:22.256280   67607 cri.go:89] found id: ""
	I0829 20:29:22.256305   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.256314   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:22.256321   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:22.256386   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:22.294546   67607 cri.go:89] found id: ""
	I0829 20:29:22.294580   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.294590   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:22.294597   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:22.294660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:22.332178   67607 cri.go:89] found id: ""
	I0829 20:29:22.332207   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.332215   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:22.332220   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:22.332266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:22.368283   67607 cri.go:89] found id: ""
	I0829 20:29:22.368309   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.368317   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:22.368325   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:22.368336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:22.421800   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:22.421836   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:22.435539   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:22.435565   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:22.504402   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:22.504427   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:22.504441   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:22.588293   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:22.588326   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:19.851801   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:22.351929   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:20.207342   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:22.707546   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:23.493994   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:25.993337   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
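The interleaved timestamps show each cluster re-checking its metrics-server pod roughly every 2-2.5s. The same wait, written as a plain shell loop with a timeout (pod name from the 68084 lines above; the 5-minute bound is an illustrative choice, not the test's actual deadline):

    # add --context <cluster> for the intended test cluster when several are running
    end=$((SECONDS + 300))
    until [ "$(kubectl -n kube-system get pod metrics-server-6867b74b74-5kk6q \
          -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')" = True ]; do
      [ "$SECONDS" -ge "$end" ] && { echo 'timed out waiting for Ready'; break; }
      sleep 2
    done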
	I0829 20:29:25.130766   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:25.144479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:25.144554   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:25.181606   67607 cri.go:89] found id: ""
	I0829 20:29:25.181636   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.181643   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:25.181649   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:25.181697   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:25.220291   67607 cri.go:89] found id: ""
	I0829 20:29:25.220320   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.220328   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:25.220335   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:25.220447   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:25.260947   67607 cri.go:89] found id: ""
	I0829 20:29:25.260975   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.260983   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:25.260988   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:25.261035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:25.298200   67607 cri.go:89] found id: ""
	I0829 20:29:25.298232   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.298243   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:25.298256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:25.298314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:25.333128   67607 cri.go:89] found id: ""
	I0829 20:29:25.333162   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.333174   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:25.333181   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:25.333232   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:25.368951   67607 cri.go:89] found id: ""
	I0829 20:29:25.368979   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.368989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:25.368997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:25.369052   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:25.403687   67607 cri.go:89] found id: ""
	I0829 20:29:25.403715   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.403726   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:25.403734   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:25.403799   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:25.442338   67607 cri.go:89] found id: ""
	I0829 20:29:25.442365   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.442372   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:25.442381   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:25.442395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:25.456313   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:25.456335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:25.528709   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:25.528730   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:25.528744   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:25.609976   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:25.610011   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:25.650044   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:25.650071   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:28.202683   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:28.216971   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:28.217046   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:28.256297   67607 cri.go:89] found id: ""
	I0829 20:29:28.256321   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.256329   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:28.256335   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:28.256379   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:28.289396   67607 cri.go:89] found id: ""
	I0829 20:29:28.289420   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.289427   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:28.289433   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:28.289484   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:28.323589   67607 cri.go:89] found id: ""
	I0829 20:29:28.323616   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.323623   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:28.323630   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:28.323676   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:28.362423   67607 cri.go:89] found id: ""
	I0829 20:29:28.362453   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.362463   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:28.362471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:28.362531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:28.396967   67607 cri.go:89] found id: ""
	I0829 20:29:28.396990   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.396998   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:28.397003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:28.397053   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:28.430714   67607 cri.go:89] found id: ""
	I0829 20:29:28.430744   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.430755   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:28.430762   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:28.430831   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:28.468668   67607 cri.go:89] found id: ""
	I0829 20:29:28.468696   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.468707   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:28.468714   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:28.468777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:28.506678   67607 cri.go:89] found id: ""
	I0829 20:29:28.506705   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.506716   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:28.506727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:28.506741   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:28.545259   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:28.545287   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:28.598249   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:28.598285   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:28.612385   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:28.612429   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:28.685765   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:28.685792   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:28.685806   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:24.851688   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.350456   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:24.708523   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.206094   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:29.207859   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.995492   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:30.494340   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.270074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:31.284357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:31.284417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:31.319530   67607 cri.go:89] found id: ""
	I0829 20:29:31.319558   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.319566   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:31.319571   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:31.319640   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:31.356826   67607 cri.go:89] found id: ""
	I0829 20:29:31.356856   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.356867   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:31.356880   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:31.356934   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:31.390137   67607 cri.go:89] found id: ""
	I0829 20:29:31.390160   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.390167   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:31.390173   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:31.390219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:31.424939   67607 cri.go:89] found id: ""
	I0829 20:29:31.424972   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.424989   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:31.424997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:31.425054   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:31.460896   67607 cri.go:89] found id: ""
	I0829 20:29:31.460921   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.460928   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:31.460935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:31.460985   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:31.498933   67607 cri.go:89] found id: ""
	I0829 20:29:31.498957   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.498967   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:31.498975   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:31.499044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:31.534953   67607 cri.go:89] found id: ""
	I0829 20:29:31.534985   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.534996   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:31.535003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:31.535065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:31.576248   67607 cri.go:89] found id: ""
	I0829 20:29:31.576273   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.576281   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
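
Each cycle like the one above sweeps a fixed list of control-plane component names and asks the CRI runtime, via crictl, for any container (running or exited) matching each name; an empty ID list across the board confirms the control plane never came up. A hedged sketch of that sweep, with the component list and crictl flags copied from the log and the surrounding Go being illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same invocation as the log: list all containers, IDs only,
			// filtered by name.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d container(s)\n", name, len(ids)) // 0 for every name in this run
		}
	}
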
	I0829 20:29:31.576291   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:31.576307   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:31.628157   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:31.628196   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:31.641564   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:31.641591   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:31.719949   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:31.719973   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:31.719996   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:31.795682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:31.795716   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:29.351248   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.351424   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:33.851397   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.707552   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.207468   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:32.993432   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.993634   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.333468   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:34.347294   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:34.347370   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:34.384885   67607 cri.go:89] found id: ""
	I0829 20:29:34.384910   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.384921   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:34.384928   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:34.384991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:34.422309   67607 cri.go:89] found id: ""
	I0829 20:29:34.422341   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.422351   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:34.422358   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:34.422417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:34.459800   67607 cri.go:89] found id: ""
	I0829 20:29:34.459826   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.459834   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:34.459840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:34.459905   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:34.495600   67607 cri.go:89] found id: ""
	I0829 20:29:34.495624   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.495633   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:34.495647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:34.495708   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:34.531749   67607 cri.go:89] found id: ""
	I0829 20:29:34.531777   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.531788   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:34.531795   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:34.531856   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:34.571057   67607 cri.go:89] found id: ""
	I0829 20:29:34.571088   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.571098   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:34.571105   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:34.571168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:34.609645   67607 cri.go:89] found id: ""
	I0829 20:29:34.609676   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.609687   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:34.609695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:34.609753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:34.647199   67607 cri.go:89] found id: ""
	I0829 20:29:34.647233   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.647244   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:34.647255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:34.647269   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:34.661390   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:34.661420   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:34.737590   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:34.737613   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:34.737625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:34.820682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:34.820721   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:34.861697   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:34.861723   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:37.412384   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:37.426081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:37.426162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:37.461302   67607 cri.go:89] found id: ""
	I0829 20:29:37.461332   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.461342   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:37.461349   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:37.461416   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:37.500869   67607 cri.go:89] found id: ""
	I0829 20:29:37.500898   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.500908   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:37.500915   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:37.500970   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:37.536908   67607 cri.go:89] found id: ""
	I0829 20:29:37.536932   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.536942   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:37.536949   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:37.537010   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:37.571939   67607 cri.go:89] found id: ""
	I0829 20:29:37.571969   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.571979   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:37.571987   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:37.572048   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:37.607834   67607 cri.go:89] found id: ""
	I0829 20:29:37.607864   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.607883   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:37.607891   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:37.607952   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:37.643932   67607 cri.go:89] found id: ""
	I0829 20:29:37.643963   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.643971   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:37.643978   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:37.644037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:37.678148   67607 cri.go:89] found id: ""
	I0829 20:29:37.678177   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.678188   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:37.678195   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:37.678257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:37.713170   67607 cri.go:89] found id: ""
	I0829 20:29:37.713195   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.713209   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:37.713219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:37.713233   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:37.752538   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:37.752567   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:37.802888   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:37.802923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:37.816546   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:37.816585   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:37.891647   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:37.891667   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:37.891680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:35.851668   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:38.351371   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:36.208220   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:38.708523   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:36.994441   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:39.493291   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
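
The interleaved pod_ready lines come from three other profiles running in parallel (PIDs 66841, 66989 and 68084), each polling the same condition: the Ready condition of a metrics-server pod is still False, so the poller logs and retries. A client-go sketch of that single check (the kubeconfig path and pod name are taken from the logs; treating this as equivalent to pod_ready.go's check is an assumption):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-5kk6q", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("pod %s Ready: %v\n", pod.Name, ready) // stays false throughout this log
	}
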
	I0829 20:29:40.472354   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:40.486186   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:40.486252   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:40.520935   67607 cri.go:89] found id: ""
	I0829 20:29:40.520963   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.520971   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:40.520977   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:40.521037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:40.561399   67607 cri.go:89] found id: ""
	I0829 20:29:40.561428   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.561440   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:40.561447   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:40.561514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:40.601821   67607 cri.go:89] found id: ""
	I0829 20:29:40.601846   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.601855   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:40.601862   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:40.601918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:40.636429   67607 cri.go:89] found id: ""
	I0829 20:29:40.636454   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.636462   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:40.636468   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:40.636525   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:40.670781   67607 cri.go:89] found id: ""
	I0829 20:29:40.670816   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.670828   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:40.670836   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:40.670912   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:40.706635   67607 cri.go:89] found id: ""
	I0829 20:29:40.706663   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.706674   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:40.706682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:40.706739   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:40.741657   67607 cri.go:89] found id: ""
	I0829 20:29:40.741687   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.741695   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:40.741707   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:40.741770   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:40.777028   67607 cri.go:89] found id: ""
	I0829 20:29:40.777057   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.777066   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:40.777077   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:40.777093   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:40.829387   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:40.829424   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:40.843928   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:40.843956   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:40.917965   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:40.917992   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:40.918008   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:41.001880   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:41.001925   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
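
The container-status step is the one place the gatherer hedges its bets in shell: resolve crictl if it is on PATH, and if the whole crictl invocation fails, fall back to docker ps -a. The same try-then-fallback shape expressed in Go (command strings from the log; the orchestration around them is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err != nil {
			// crictl missing or its CRI socket dead: fall back to docker,
			// mirroring the `|| sudo docker ps -a` branch in the log.
			out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
			if err != nil {
				fmt.Println("no container runtime answered:", err)
				return
			}
		}
		fmt.Print(string(out))
	}
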
	I0829 20:29:43.549007   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:43.563446   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:43.563502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:43.598503   67607 cri.go:89] found id: ""
	I0829 20:29:43.598548   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.598557   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:43.598564   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:43.598614   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:43.634169   67607 cri.go:89] found id: ""
	I0829 20:29:43.634200   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.634210   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:43.634218   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:43.634280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:43.670467   67607 cri.go:89] found id: ""
	I0829 20:29:43.670492   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.670500   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:43.670506   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:43.670580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:43.706812   67607 cri.go:89] found id: ""
	I0829 20:29:43.706839   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.706849   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:43.706857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:43.706922   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:43.741577   67607 cri.go:89] found id: ""
	I0829 20:29:43.741606   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.741612   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:43.741620   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:43.741700   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:43.776552   67607 cri.go:89] found id: ""
	I0829 20:29:43.776595   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.776625   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:43.776635   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:43.776701   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:43.816229   67607 cri.go:89] found id: ""
	I0829 20:29:43.816264   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.816274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:43.816281   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:43.816346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:40.850705   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:42.850904   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:40.709080   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:43.207700   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:41.994216   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:44.492986   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:46.494171   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:43.860726   67607 cri.go:89] found id: ""
	I0829 20:29:43.860753   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.860761   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:43.860768   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:43.860783   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:43.874311   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:43.874340   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:43.952243   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:43.952272   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:43.952288   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:44.032276   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:44.032312   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:44.075537   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:44.075571   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
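
Stepping back, the timestamps show the whole sequence (pgrep gate, crictl sweep, log gathering) repeating roughly every three seconds, and it will keep cycling until the caller's retry budget runs out. A sketch of that retry shape around the pgrep gate each cycle opens with (the two-minute deadline below is an assumption, not minikube's actual timeout):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed; the real budget is minikube's
		for time.Now().Before(deadline) {
			// pgrep -x (whole-command-line match with -f), -n (newest);
			// a non-zero exit means no kube-apiserver process exists yet.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("apiserver process found")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("gave up: no apiserver process before the deadline")
	}
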
	I0829 20:29:46.632798   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:46.645878   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:46.645948   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:46.683682   67607 cri.go:89] found id: ""
	I0829 20:29:46.683711   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.683720   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:46.683726   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:46.683775   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:46.727985   67607 cri.go:89] found id: ""
	I0829 20:29:46.728012   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.728024   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:46.728031   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:46.728090   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:46.762142   67607 cri.go:89] found id: ""
	I0829 20:29:46.762166   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.762174   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:46.762180   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:46.762226   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:46.802423   67607 cri.go:89] found id: ""
	I0829 20:29:46.802453   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.802464   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:46.802471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:46.802515   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:46.840382   67607 cri.go:89] found id: ""
	I0829 20:29:46.840411   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.840418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:46.840425   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:46.840473   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:46.878438   67607 cri.go:89] found id: ""
	I0829 20:29:46.878466   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.878476   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:46.878483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:46.878562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:46.913589   67607 cri.go:89] found id: ""
	I0829 20:29:46.913618   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.913625   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:46.913631   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:46.913678   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:46.948894   67607 cri.go:89] found id: ""
	I0829 20:29:46.948922   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.948929   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:46.948938   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:46.948949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:47.005709   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:47.005745   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:47.030316   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:47.030343   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:47.105899   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:47.105920   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:47.105932   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:47.189405   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:47.189442   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:45.352639   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:47.850647   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:45.709140   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:48.207411   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:48.994239   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:51.493287   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:49.727745   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:49.742061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:49.742131   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:49.777428   67607 cri.go:89] found id: ""
	I0829 20:29:49.777456   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.777464   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:49.777471   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:49.777531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:49.811611   67607 cri.go:89] found id: ""
	I0829 20:29:49.811639   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.811646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:49.811653   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:49.811709   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:49.844962   67607 cri.go:89] found id: ""
	I0829 20:29:49.844987   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.844995   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:49.845006   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:49.845062   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:49.880259   67607 cri.go:89] found id: ""
	I0829 20:29:49.880286   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.880297   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:49.880305   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:49.880366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:49.915889   67607 cri.go:89] found id: ""
	I0829 20:29:49.915918   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.915926   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:49.915932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:49.915988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:49.953146   67607 cri.go:89] found id: ""
	I0829 20:29:49.953174   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.953182   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:49.953189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:49.953240   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:49.990689   67607 cri.go:89] found id: ""
	I0829 20:29:49.990721   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.990730   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:49.990738   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:49.990792   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:50.024775   67607 cri.go:89] found id: ""
	I0829 20:29:50.024806   67607 logs.go:276] 0 containers: []
	W0829 20:29:50.024817   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:50.024827   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:50.024842   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:50.079030   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:50.079064   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:50.093178   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:50.093205   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:50.171476   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:50.171499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:50.171512   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:50.252913   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:50.252946   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:52.799818   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:52.812857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:52.812930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:52.850736   67607 cri.go:89] found id: ""
	I0829 20:29:52.850761   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.850770   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:52.850777   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:52.850834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:52.888892   67607 cri.go:89] found id: ""
	I0829 20:29:52.888916   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.888923   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:52.888929   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:52.888975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:52.925390   67607 cri.go:89] found id: ""
	I0829 20:29:52.925418   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.925428   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:52.925435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:52.925501   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:52.960329   67607 cri.go:89] found id: ""
	I0829 20:29:52.960352   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.960360   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:52.960366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:52.960413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:52.994899   67607 cri.go:89] found id: ""
	I0829 20:29:52.994927   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.994935   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:52.994941   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:52.994995   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:53.033028   67607 cri.go:89] found id: ""
	I0829 20:29:53.033057   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.033068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:53.033076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:53.033136   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:53.068353   67607 cri.go:89] found id: ""
	I0829 20:29:53.068381   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.068389   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:53.068394   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:53.068441   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:53.104496   67607 cri.go:89] found id: ""
	I0829 20:29:53.104524   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.104534   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:53.104545   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:53.104560   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:53.175777   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:53.175810   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:53.175827   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:53.257362   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:53.257396   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:53.295822   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:53.295850   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:53.351237   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:53.351263   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:49.851324   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:52.350768   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:50.707986   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:53.206918   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:53.494828   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.994443   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.864680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:55.879324   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:55.879391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:55.914454   67607 cri.go:89] found id: ""
	I0829 20:29:55.914479   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.914490   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:55.914498   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:55.914592   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:55.953778   67607 cri.go:89] found id: ""
	I0829 20:29:55.953804   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.953814   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:55.953821   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:55.953883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:55.994659   67607 cri.go:89] found id: ""
	I0829 20:29:55.994681   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.994689   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:55.994697   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:55.994768   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:56.031262   67607 cri.go:89] found id: ""
	I0829 20:29:56.031288   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.031299   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:56.031306   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:56.031366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:56.063748   67607 cri.go:89] found id: ""
	I0829 20:29:56.063776   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.063785   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:56.063793   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:56.063883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:56.098024   67607 cri.go:89] found id: ""
	I0829 20:29:56.098060   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.098068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:56.098074   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:56.098127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:56.141340   67607 cri.go:89] found id: ""
	I0829 20:29:56.141364   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.141374   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:56.141381   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:56.141440   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:56.176668   67607 cri.go:89] found id: ""
	I0829 20:29:56.176696   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.176707   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:56.176717   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:56.176731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:56.216294   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:56.216322   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:56.269404   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:56.269440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:56.283134   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:56.283160   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:56.355005   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:56.355023   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:56.355035   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:54.851658   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:57.350247   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.207477   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:57.708007   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:58.493689   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:00.998990   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:58.937406   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:58.950924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:58.950981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:58.986748   67607 cri.go:89] found id: ""
	I0829 20:29:58.986778   67607 logs.go:276] 0 containers: []
	W0829 20:29:58.986788   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:58.986795   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:58.986861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:59.023737   67607 cri.go:89] found id: ""
	I0829 20:29:59.023763   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.023773   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:59.023780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:59.023840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:59.060245   67607 cri.go:89] found id: ""
	I0829 20:29:59.060274   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.060284   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:59.060291   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:59.060352   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:59.102467   67607 cri.go:89] found id: ""
	I0829 20:29:59.102493   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.102501   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:59.102507   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:59.102581   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:59.142601   67607 cri.go:89] found id: ""
	I0829 20:29:59.142625   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.142634   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:59.142647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:59.142717   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:59.186683   67607 cri.go:89] found id: ""
	I0829 20:29:59.186707   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.186715   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:59.186723   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:59.186783   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:59.232104   67607 cri.go:89] found id: ""
	I0829 20:29:59.232136   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.232154   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:59.232162   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:59.232227   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:59.276416   67607 cri.go:89] found id: ""
	I0829 20:29:59.276442   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.276452   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:59.276462   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:59.276479   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:59.341741   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:59.341779   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:59.357312   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:59.357336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:59.425653   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:59.425674   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:59.425689   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:59.505365   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:59.505403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:02.049195   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:02.064558   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:02.064641   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:02.102141   67607 cri.go:89] found id: ""
	I0829 20:30:02.102188   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.102209   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:02.102217   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:02.102282   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:02.138610   67607 cri.go:89] found id: ""
	I0829 20:30:02.138640   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.138650   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:02.138658   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:02.138724   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:02.175391   67607 cri.go:89] found id: ""
	I0829 20:30:02.175423   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.175435   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:02.175442   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:02.175505   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:02.212956   67607 cri.go:89] found id: ""
	I0829 20:30:02.212981   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.212991   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:02.212998   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:02.213059   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:02.254444   67607 cri.go:89] found id: ""
	I0829 20:30:02.254467   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.254475   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:02.254481   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:02.254568   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:02.293232   67607 cri.go:89] found id: ""
	I0829 20:30:02.293260   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.293270   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:02.293277   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:02.293348   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:02.328300   67607 cri.go:89] found id: ""
	I0829 20:30:02.328329   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.328339   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:02.328346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:02.328407   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:02.363467   67607 cri.go:89] found id: ""
	I0829 20:30:02.363495   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.363505   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:02.363514   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:02.363528   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:02.414357   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:02.414394   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:02.428229   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:02.428259   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:02.503640   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:02.503661   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:02.503674   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:02.584052   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:02.584087   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:59.352485   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:01.850334   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:59.717029   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:02.208354   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:03.494326   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:05.494833   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
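The interleaved pod_ready lines belong to three other minikube test processes (PIDs 66841, 66989, 68084) running in parallel and polling metrics-server pods that never report Ready. A hypothetical manual equivalent of that poll (an assumption for illustration, not the literal call pod_ready.go makes) would read the pod's Ready condition directly:

	# Print the Ready condition status ("True"/"False") for one of the
	# pods named in the log; pod name taken verbatim from the entries above.
	kubectl -n kube-system get pod metrics-server-6867b74b74-668dg \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'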
	I0829 20:30:05.124345   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:05.143530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:05.143594   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:05.195985   67607 cri.go:89] found id: ""
	I0829 20:30:05.196014   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.196024   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:05.196032   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:05.196092   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:05.254315   67607 cri.go:89] found id: ""
	I0829 20:30:05.254343   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.254354   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:05.254362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:05.254432   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:05.306756   67607 cri.go:89] found id: ""
	I0829 20:30:05.306781   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.306788   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:05.306794   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:05.306852   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:05.345200   67607 cri.go:89] found id: ""
	I0829 20:30:05.345225   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.345235   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:05.345242   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:05.345297   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:05.384038   67607 cri.go:89] found id: ""
	I0829 20:30:05.384064   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.384074   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:05.384081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:05.384140   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:05.420177   67607 cri.go:89] found id: ""
	I0829 20:30:05.420201   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.420208   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:05.420214   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:05.420260   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:05.453492   67607 cri.go:89] found id: ""
	I0829 20:30:05.453513   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.453521   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:05.453526   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:05.453573   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:05.491591   67607 cri.go:89] found id: ""
	I0829 20:30:05.491618   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.491628   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:05.491638   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:05.491701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:05.580458   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:05.580503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:05.620137   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:05.620169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:05.672137   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:05.672177   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:05.685946   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:05.685973   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:05.755176   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:08.256255   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:08.269099   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:08.269160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:08.302552   67607 cri.go:89] found id: ""
	I0829 20:30:08.302578   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.302585   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:08.302591   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:08.302639   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:08.340683   67607 cri.go:89] found id: ""
	I0829 20:30:08.340711   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.340718   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:08.340726   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:08.340778   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:08.387389   67607 cri.go:89] found id: ""
	I0829 20:30:08.387416   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.387424   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:08.387430   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:08.387477   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:08.421303   67607 cri.go:89] found id: ""
	I0829 20:30:08.421330   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.421340   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:08.421348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:08.421409   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:08.458648   67607 cri.go:89] found id: ""
	I0829 20:30:08.458677   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.458688   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:08.458695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:08.458758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:08.498748   67607 cri.go:89] found id: ""
	I0829 20:30:08.498776   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.498784   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:08.498790   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:08.498845   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:08.536859   67607 cri.go:89] found id: ""
	I0829 20:30:08.536889   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.536896   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:08.536902   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:08.536963   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:08.570685   67607 cri.go:89] found id: ""
	I0829 20:30:08.570713   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.570723   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:08.570734   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:08.570748   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:08.621904   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:08.621938   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:08.636367   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:08.636391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:08.703796   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:08.703824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:08.703838   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:08.785084   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:08.785120   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:04.350230   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:06.849598   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:08.850961   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:04.708012   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:07.206604   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:09.207368   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:07.993015   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:09.994043   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:11.326633   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:11.339570   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:11.339637   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:11.374132   67607 cri.go:89] found id: ""
	I0829 20:30:11.374155   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.374163   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:11.374169   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:11.374234   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:11.409004   67607 cri.go:89] found id: ""
	I0829 20:30:11.409036   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.409047   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:11.409054   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:11.409119   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:11.444598   67607 cri.go:89] found id: ""
	I0829 20:30:11.444625   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.444635   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:11.444643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:11.444704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:11.481912   67607 cri.go:89] found id: ""
	I0829 20:30:11.481942   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.481953   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:11.481961   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:11.482025   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:11.516436   67607 cri.go:89] found id: ""
	I0829 20:30:11.516466   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.516477   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:11.516483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:11.516536   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:11.554762   67607 cri.go:89] found id: ""
	I0829 20:30:11.554787   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.554795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:11.554801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:11.554857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:11.588902   67607 cri.go:89] found id: ""
	I0829 20:30:11.588931   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.588942   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:11.588950   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:11.589011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:11.621346   67607 cri.go:89] found id: ""
	I0829 20:30:11.621368   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.621376   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:11.621383   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:11.621395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:11.659671   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:11.659703   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:11.711288   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:11.711315   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:11.725285   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:11.725310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:11.801713   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:11.801735   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:11.801750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:10.851075   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:13.349510   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:11.208203   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:13.706599   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:12.494548   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:14.993188   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:14.382313   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:14.395852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:14.395926   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:14.438735   67607 cri.go:89] found id: ""
	I0829 20:30:14.438762   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.438772   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:14.438778   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:14.438840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:14.477886   67607 cri.go:89] found id: ""
	I0829 20:30:14.477928   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.477937   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:14.477943   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:14.478000   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:14.517627   67607 cri.go:89] found id: ""
	I0829 20:30:14.517654   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.517664   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:14.517670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:14.517734   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:14.557247   67607 cri.go:89] found id: ""
	I0829 20:30:14.557272   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.557280   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:14.557286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:14.557345   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:14.591364   67607 cri.go:89] found id: ""
	I0829 20:30:14.591388   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.591398   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:14.591406   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:14.591468   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:14.627517   67607 cri.go:89] found id: ""
	I0829 20:30:14.627539   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.627546   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:14.627551   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:14.627604   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:14.662388   67607 cri.go:89] found id: ""
	I0829 20:30:14.662409   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.662419   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:14.662432   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:14.662488   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:14.695277   67607 cri.go:89] found id: ""
	I0829 20:30:14.695307   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.695316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:14.695324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:14.695335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:14.735824   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:14.735852   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:14.792607   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:14.792642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:14.808881   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:14.808910   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:14.879804   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:14.879824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:14.879837   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:17.459817   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:17.474813   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:17.474887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:17.509885   67607 cri.go:89] found id: ""
	I0829 20:30:17.509913   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.509923   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:17.509930   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:17.509987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:17.543931   67607 cri.go:89] found id: ""
	I0829 20:30:17.543959   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.543968   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:17.543973   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:17.544021   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:17.580944   67607 cri.go:89] found id: ""
	I0829 20:30:17.580972   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.580980   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:17.580986   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:17.581033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:17.620061   67607 cri.go:89] found id: ""
	I0829 20:30:17.620088   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.620097   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:17.620103   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:17.620148   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:17.658675   67607 cri.go:89] found id: ""
	I0829 20:30:17.658706   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.658717   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:17.658724   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:17.658788   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:17.694424   67607 cri.go:89] found id: ""
	I0829 20:30:17.694453   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.694462   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:17.694467   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:17.694571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:17.727425   67607 cri.go:89] found id: ""
	I0829 20:30:17.727450   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.727456   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:17.727462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:17.727510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:17.767915   67607 cri.go:89] found id: ""
	I0829 20:30:17.767946   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.767956   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:17.767965   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:17.767977   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:17.837556   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:17.837580   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:17.837593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:17.921601   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:17.921638   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:17.960999   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:17.961026   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:18.013654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:18.013691   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:15.351372   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:17.850896   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:16.206810   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:18.207702   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:16.993566   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:18.997786   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:21.493705   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:20.528244   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:20.542116   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:20.542190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:20.578905   67607 cri.go:89] found id: ""
	I0829 20:30:20.578936   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.578947   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:20.578954   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:20.579003   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:20.613543   67607 cri.go:89] found id: ""
	I0829 20:30:20.613567   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.613574   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:20.613579   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:20.613627   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:20.649322   67607 cri.go:89] found id: ""
	I0829 20:30:20.649344   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.649352   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:20.649366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:20.649429   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:20.684851   67607 cri.go:89] found id: ""
	I0829 20:30:20.684878   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.684886   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:20.684892   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:20.684950   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:20.722016   67607 cri.go:89] found id: ""
	I0829 20:30:20.722045   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.722054   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:20.722062   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:20.722125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:20.757594   67607 cri.go:89] found id: ""
	I0829 20:30:20.757626   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.757637   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:20.757644   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:20.757707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:20.793694   67607 cri.go:89] found id: ""
	I0829 20:30:20.793728   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.793738   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:20.793746   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:20.793812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:20.829709   67607 cri.go:89] found id: ""
	I0829 20:30:20.829736   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.829747   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:20.829758   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:20.829782   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:20.888838   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:20.888888   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:20.903530   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:20.903556   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:20.972460   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:20.972488   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:20.972503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:21.055556   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:21.055593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:23.597355   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:23.611091   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:23.611162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:23.649469   67607 cri.go:89] found id: ""
	I0829 20:30:23.649493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.649501   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:23.649510   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:23.649562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:23.684530   67607 cri.go:89] found id: ""
	I0829 20:30:23.684554   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.684561   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:23.684571   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:23.684625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:23.720466   67607 cri.go:89] found id: ""
	I0829 20:30:23.720493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.720503   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:23.720510   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:23.720563   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:23.755013   67607 cri.go:89] found id: ""
	I0829 20:30:23.755042   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.755053   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:23.755061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:23.755127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:23.795212   67607 cri.go:89] found id: ""
	I0829 20:30:23.795243   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.795254   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:23.795263   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:23.795320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:20.349781   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:22.350157   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:20.707723   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.206214   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.994457   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:26.493771   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.832912   67607 cri.go:89] found id: ""
	I0829 20:30:23.832941   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.832951   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:23.832959   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:23.833015   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:23.869896   67607 cri.go:89] found id: ""
	I0829 20:30:23.869930   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.869939   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:23.869947   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:23.870011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:23.908111   67607 cri.go:89] found id: ""
	I0829 20:30:23.908136   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.908145   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:23.908155   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:23.908170   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:23.988489   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:23.988510   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:23.988525   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:24.063246   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:24.063280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:24.102943   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:24.102974   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:24.157255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:24.157294   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:26.671966   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:26.684755   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:26.684830   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:26.721125   67607 cri.go:89] found id: ""
	I0829 20:30:26.721150   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.721158   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:26.721164   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:26.721219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:26.756328   67607 cri.go:89] found id: ""
	I0829 20:30:26.756349   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.756356   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:26.756362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:26.756420   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:26.791711   67607 cri.go:89] found id: ""
	I0829 20:30:26.791751   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.791763   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:26.791774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:26.791857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:26.827215   67607 cri.go:89] found id: ""
	I0829 20:30:26.827244   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.827254   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:26.827261   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:26.827321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:26.863461   67607 cri.go:89] found id: ""
	I0829 20:30:26.863486   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.863497   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:26.863505   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:26.863569   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:26.900037   67607 cri.go:89] found id: ""
	I0829 20:30:26.900065   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.900075   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:26.900083   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:26.900139   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:26.937236   67607 cri.go:89] found id: ""
	I0829 20:30:26.937263   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.937274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:26.937282   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:26.937340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:26.970281   67607 cri.go:89] found id: ""
	I0829 20:30:26.970312   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.970322   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:26.970332   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:26.970345   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:27.041485   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:27.041511   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:27.041526   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:27.120774   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:27.120807   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:27.159656   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:27.159685   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:27.213322   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:27.213356   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:24.350464   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:26.351419   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:28.850079   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:25.207838   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:27.708107   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:28.993552   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:31.494259   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:29.729066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:29.742044   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:29.742099   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:29.777426   67607 cri.go:89] found id: ""
	I0829 20:30:29.777454   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.777462   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:29.777468   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:29.777529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:29.814353   67607 cri.go:89] found id: ""
	I0829 20:30:29.814381   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.814392   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:29.814401   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:29.814462   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:29.853754   67607 cri.go:89] found id: ""
	I0829 20:30:29.853783   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.853793   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:29.853801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:29.853869   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:29.893966   67607 cri.go:89] found id: ""
	I0829 20:30:29.893991   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.893998   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:29.894003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:29.894057   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:29.929452   67607 cri.go:89] found id: ""
	I0829 20:30:29.929483   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.929492   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:29.929502   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:29.929561   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:29.965880   67607 cri.go:89] found id: ""
	I0829 20:30:29.965906   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.965916   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:29.965924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:29.965986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:30.002192   67607 cri.go:89] found id: ""
	I0829 20:30:30.002226   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.002237   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:30.002245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:30.002320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:30.037603   67607 cri.go:89] found id: ""
	I0829 20:30:30.037640   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.037651   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:30.037662   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:30.037677   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:30.094128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:30.094168   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:30.110667   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:30.110701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:30.188355   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:30.188375   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:30.188388   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:30.270750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:30.270785   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:32.809472   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:32.823099   67607 kubeadm.go:597] duration metric: took 4m3.15684598s to restartPrimaryControlPlane
	W0829 20:30:32.823188   67607 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:30:32.823224   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
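Here the restart loop gives up: restartPrimaryControlPlane has spent 4m3s without ever finding an apiserver, so minikube falls back to wiping the node and re-initializing it. A condensed sketch of that fallback (illustrative only, built from the two commands recorded above):

	# If no apiserver process ever appears, reset the node with kubeadm
	# using the versioned binaries and the CRI-O socket from the log.
	if ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	    kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	fi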
	I0829 20:30:33.322987   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:30:33.338134   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:30:33.348586   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:30:33.358672   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:30:33.358692   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:30:33.358748   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:30:33.367955   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:30:33.368000   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:30:33.377565   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:30:33.386317   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:30:33.386377   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:30:33.396356   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.406228   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:30:33.406281   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.418323   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:30:33.427595   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:30:33.427657   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
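
The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails (exit status 2 here simply means the file is already gone, so every rm is a no-op). A rough local equivalent of that loop, assuming direct filesystem access rather than the SSH-wrapped grep/rm seen in the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanupStaleConfigs removes kubeconfigs that do not point at the
    // expected control-plane endpoint, so kubeadm regenerates them.
    func cleanupStaleConfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: drop it.
                os.Remove(f)
                fmt.Println("removed stale config:", f)
            }
        }
    }

    func main() {
        cleanupStaleConfigs("https://control-plane.minikube.internal:8443")
    }
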
	I0829 20:30:33.437520   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:30:33.511159   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:30:33.511279   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:30:33.669988   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:30:33.670133   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:30:33.670267   67607 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:30:33.859908   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:30:30.850893   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:32.851574   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:30.207012   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:32.206405   66989 pod_ready.go:82] duration metric: took 4m0.005864609s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	E0829 20:30:32.206426   66989 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0829 20:30:32.206433   66989 pod_ready.go:39] duration metric: took 4m5.570928284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
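
The pod_ready lines running through this section come from a polling wait: the pod's Ready condition is re-checked every few seconds under a 4m0s deadline, and when metrics-server never turns Ready the wait ends with "context deadline exceeded" and startup continues anyway. A minimal client-go sketch of such a wait (a hypothetical helper, not minikube's actual pod_ready.go; kubeconfig path taken from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the named pod reports Ready or the 4m0s
    // deadline expires, roughly what the pod_ready lines are doing.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep polling on transient errors
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-6867b74b74-mx5jh"))
    }
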
	I0829 20:30:32.206448   66989 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:30:32.206482   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:32.206528   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:32.260213   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:32.260242   66989 cri.go:89] found id: ""
	I0829 20:30:32.260252   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:32.260314   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.265201   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:32.265276   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:32.307620   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:32.307648   66989 cri.go:89] found id: ""
	I0829 20:30:32.307656   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:32.307701   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.312372   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:32.312430   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:32.350059   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:32.350092   66989 cri.go:89] found id: ""
	I0829 20:30:32.350102   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:32.350158   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.354624   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:32.354681   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:32.393968   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:32.393988   66989 cri.go:89] found id: ""
	I0829 20:30:32.393995   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:32.394039   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.398674   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:32.398745   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:32.433038   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:32.433064   66989 cri.go:89] found id: ""
	I0829 20:30:32.433074   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:32.433118   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.436969   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:32.437028   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:32.472768   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:32.472786   66989 cri.go:89] found id: ""
	I0829 20:30:32.472793   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:32.472842   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.477466   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:32.477536   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:32.514464   66989 cri.go:89] found id: ""
	I0829 20:30:32.514492   66989 logs.go:276] 0 containers: []
	W0829 20:30:32.514502   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:32.514509   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:32.514591   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:32.551429   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:32.551452   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:32.551456   66989 cri.go:89] found id: ""
	I0829 20:30:32.551463   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:32.551508   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.555697   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.559864   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:32.559883   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:32.609776   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:32.609803   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:32.648419   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:32.648446   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:32.685938   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:32.685969   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:32.728665   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:32.728693   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:32.770030   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:32.770068   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:32.907821   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:32.907850   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:32.923119   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:32.923149   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:32.979819   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:32.979853   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:33.020472   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:33.020496   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:33.074802   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:33.074838   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:33.112043   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:33.112072   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:33.624274   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:33.624316   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
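
Each "Gathering logs for X" round above follows the same two-step pattern: list matching container IDs with crictl ps --quiet --name=X, then tail the last 400 lines of each container's logs (with a docker fallback for the overall "container status" snapshot). A sketch of that pattern, assuming crictl is on the target node's PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // tailComponentLogs mirrors the two-step pattern in the log: list
    // container IDs for a component by name, then tail each one's logs.
    func tailComponentLogs(name string) {
        ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            fmt.Println("list failed:", err)
            return
        }
        for _, id := range strings.Fields(string(ids)) {
            out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("=== %s [%s] ===\n%s\n", name, id, out)
        }
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            tailComponentLogs(c)
        }
    }
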
	I0829 20:30:33.861742   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:30:33.861849   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:30:33.861946   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:30:33.862075   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:30:33.862174   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:30:33.862276   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:30:33.862366   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:30:33.862467   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:30:33.862573   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:30:33.862794   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:30:33.863226   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:30:33.863323   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:30:33.863417   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:30:34.065914   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:30:34.235581   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:30:34.660452   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:30:34.724718   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:30:34.743897   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:30:34.746263   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:30:34.746369   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:30:34.893824   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:30:33.494825   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:35.994300   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:34.895805   67607 out.go:235]   - Booting up control plane ...
	I0829 20:30:34.895941   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:30:34.904294   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:30:34.915103   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:30:34.915744   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:30:34.917923   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:30:35.351975   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:37.352013   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:36.202184   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:36.218838   66989 api_server.go:72] duration metric: took 4m17.334186395s to wait for apiserver process to appear ...
	I0829 20:30:36.218870   66989 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:30:36.218910   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:36.218963   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:36.263205   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:36.263233   66989 cri.go:89] found id: ""
	I0829 20:30:36.263243   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:36.263292   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.267466   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:36.267522   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:36.303894   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:36.303930   66989 cri.go:89] found id: ""
	I0829 20:30:36.303938   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:36.303996   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.308089   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:36.308170   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:36.347320   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:36.347392   66989 cri.go:89] found id: ""
	I0829 20:30:36.347414   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:36.347485   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.352121   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:36.352174   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:36.389760   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:36.389784   66989 cri.go:89] found id: ""
	I0829 20:30:36.389793   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:36.389853   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.394860   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:36.394919   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:36.430562   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:36.430587   66989 cri.go:89] found id: ""
	I0829 20:30:36.430597   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:36.430655   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.435151   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:36.435226   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:36.470714   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:36.470742   66989 cri.go:89] found id: ""
	I0829 20:30:36.470750   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:36.470816   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.475382   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:36.475446   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:36.514853   66989 cri.go:89] found id: ""
	I0829 20:30:36.514888   66989 logs.go:276] 0 containers: []
	W0829 20:30:36.514898   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:36.514910   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:36.514971   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:36.548229   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:36.548252   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:36.548256   66989 cri.go:89] found id: ""
	I0829 20:30:36.548263   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:36.548314   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.552484   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.556661   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:36.556681   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:36.622985   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:36.623019   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:36.678770   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:36.678799   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:36.731822   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:36.731849   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:36.768451   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:36.768482   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:36.803818   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:36.803846   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:37.225805   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:37.225849   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:37.245421   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:37.245458   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:37.358238   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:37.358266   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:37.401876   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:37.401913   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:37.438189   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:37.438223   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:37.475404   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:37.475433   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:37.511876   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:37.511903   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:38.493604   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:40.494396   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:40.054097   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:30:40.058474   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0829 20:30:40.059830   66989 api_server.go:141] control plane version: v1.31.0
	I0829 20:30:40.059850   66989 api_server.go:131] duration metric: took 3.840972907s to wait for apiserver health ...
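
The healthz wait that just succeeded is a plain HTTPS GET against <node-ip>:8443/healthz that treats a 200 "ok" body as healthy. A minimal stand-in for that probe (note: InsecureSkipVerify is a shortcut for this sketch only; the real check trusts the cluster CA from the minikube profile):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz is a minimal stand-in for the api_server.go probe.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        fmt.Printf("%s returned 200: %s\n", url, body)
        return nil
    }

    func main() {
        if err := checkHealthz("https://192.168.61.202:8443/healthz"); err != nil {
            fmt.Println(err)
        }
    }
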
	I0829 20:30:40.059857   66989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:30:40.059877   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:40.059924   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:40.101978   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:40.102003   66989 cri.go:89] found id: ""
	I0829 20:30:40.102013   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:40.102073   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.107429   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:40.107496   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:40.145052   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:40.145078   66989 cri.go:89] found id: ""
	I0829 20:30:40.145086   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:40.145133   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.149329   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:40.149394   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:40.187740   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:40.187769   66989 cri.go:89] found id: ""
	I0829 20:30:40.187778   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:40.187838   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.192085   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:40.192156   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:40.231992   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:40.232010   66989 cri.go:89] found id: ""
	I0829 20:30:40.232017   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:40.232060   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.236275   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:40.236333   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:40.279637   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:40.279660   66989 cri.go:89] found id: ""
	I0829 20:30:40.279669   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:40.279727   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.288800   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:40.288876   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:40.341222   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:40.341248   66989 cri.go:89] found id: ""
	I0829 20:30:40.341258   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:40.341322   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.346013   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:40.346088   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:40.383801   66989 cri.go:89] found id: ""
	I0829 20:30:40.383828   66989 logs.go:276] 0 containers: []
	W0829 20:30:40.383836   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:40.383842   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:40.383896   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:40.421847   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:40.421874   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:40.421879   66989 cri.go:89] found id: ""
	I0829 20:30:40.421889   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:40.421950   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.426229   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.429902   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:40.429931   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:40.471015   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:40.471039   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:40.831575   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:40.831612   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:40.846195   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:40.846230   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:40.905469   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:40.905507   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:40.952303   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:40.952337   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:41.001278   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:41.001309   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:41.071045   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:41.071089   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:41.120024   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:41.120050   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:41.191412   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:41.191445   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:41.321848   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:41.321874   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:41.370807   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:41.370833   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:41.405913   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:41.405939   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:43.948957   66989 system_pods.go:59] 8 kube-system pods found
	I0829 20:30:43.948987   66989 system_pods.go:61] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running
	I0829 20:30:43.948992   66989 system_pods.go:61] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running
	I0829 20:30:43.948996   66989 system_pods.go:61] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running
	I0829 20:30:43.948999   66989 system_pods.go:61] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running
	I0829 20:30:43.949003   66989 system_pods.go:61] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running
	I0829 20:30:43.949006   66989 system_pods.go:61] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running
	I0829 20:30:43.949011   66989 system_pods.go:61] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:30:43.949015   66989 system_pods.go:61] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running
	I0829 20:30:43.949022   66989 system_pods.go:74] duration metric: took 3.889159839s to wait for pod list to return data ...
	I0829 20:30:43.949028   66989 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:30:43.951906   66989 default_sa.go:45] found service account: "default"
	I0829 20:30:43.951932   66989 default_sa.go:55] duration metric: took 2.897769ms for default service account to be created ...
	I0829 20:30:43.951943   66989 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:30:43.959246   66989 system_pods.go:86] 8 kube-system pods found
	I0829 20:30:43.959269   66989 system_pods.go:89] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running
	I0829 20:30:43.959275   66989 system_pods.go:89] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running
	I0829 20:30:43.959279   66989 system_pods.go:89] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running
	I0829 20:30:43.959283   66989 system_pods.go:89] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running
	I0829 20:30:43.959286   66989 system_pods.go:89] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running
	I0829 20:30:43.959290   66989 system_pods.go:89] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running
	I0829 20:30:43.959296   66989 system_pods.go:89] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:30:43.959302   66989 system_pods.go:89] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running
	I0829 20:30:43.959309   66989 system_pods.go:126] duration metric: took 7.361244ms to wait for k8s-apps to be running ...
	I0829 20:30:43.959318   66989 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:30:43.959356   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:30:43.976136   66989 system_svc.go:56] duration metric: took 16.811475ms WaitForService to wait for kubelet
	I0829 20:30:43.976167   66989 kubeadm.go:582] duration metric: took 4m25.091518378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:30:43.976193   66989 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:30:43.979345   66989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:30:43.979376   66989 node_conditions.go:123] node cpu capacity is 2
	I0829 20:30:43.979386   66989 node_conditions.go:105] duration metric: took 3.187489ms to run NodePressure ...
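
The NodePressure step reads the node's reported capacity (ephemeral storage and CPU here) straight from the Node object. A small client-go sketch that prints the same fields, assuming the kubeconfig path seen earlier in the log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // printNodeCapacity reads the capacity fields behind the
    // node_conditions lines above (ephemeral storage, cpu).
    func printNodeCapacity(kubeconfig string) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cap := n.Status.Capacity
            fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name,
                cap.StorageEphemeral().String(), cap.Cpu().String())
        }
        return nil
    }

    func main() {
        if err := printNodeCapacity("/var/lib/minikube/kubeconfig"); err != nil {
            fmt.Println(err)
        }
    }
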
	I0829 20:30:43.979396   66989 start.go:241] waiting for startup goroutines ...
	I0829 20:30:43.979402   66989 start.go:246] waiting for cluster config update ...
	I0829 20:30:43.979414   66989 start.go:255] writing updated cluster config ...
	I0829 20:30:43.979729   66989 ssh_runner.go:195] Run: rm -f paused
	I0829 20:30:44.028715   66989 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:30:44.030675   66989 out.go:177] * Done! kubectl is now configured to use "embed-certs-388383" cluster and "default" namespace by default
	I0829 20:30:39.850811   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:41.850941   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:42.993711   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:45.492729   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:44.351171   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:46.849842   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:48.851125   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:47.494031   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:49.993291   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:51.350926   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:53.850966   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:52.494604   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:54.994054   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:56.350237   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:58.856068   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:56.994483   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:59.494879   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:01.351293   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:03.850415   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:01.994470   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:04.493393   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:05.851663   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:08.350513   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:06.988349   68084 pod_ready.go:82] duration metric: took 4m0.000994859s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" ...
	E0829 20:31:06.988378   68084 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 20:31:06.988396   68084 pod_ready.go:39] duration metric: took 4m13.5587561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:06.988421   68084 kubeadm.go:597] duration metric: took 4m20.63419422s to restartPrimaryControlPlane
	W0829 20:31:06.988470   68084 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:31:06.988492   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:31:10.350782   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:12.851120   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:14.919490   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:31:14.920124   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:14.920395   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:15.350794   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:17.351675   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:19.920740   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:19.920993   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
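
kubeadm's [kubelet-check] is the same probe one layer down: an HTTP GET against the kubelet's healthz on 127.0.0.1:10248, retried until the wait-control-plane budget (up to 4m0s) runs out. A persistent "connection refused", as in this v1.20.0 start, means the kubelet process never came up at all. A small retry loop in the same spirit (the 5s interval is an assumption; kubeadm's own cadence varies):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitKubeletHealthy retries the kubelet healthz endpoint the way
    // kubeadm's [kubelet-check] does, reporting each failed round.
    func waitKubeletHealthy(deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := http.Get("http://localhost:10248/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            fmt.Println("[kubelet-check] not healthy yet:", err)
            time.Sleep(5 * time.Second)
        }
        return fmt.Errorf("kubelet not healthy after %s", deadline)
    }

    func main() {
        fmt.Println(waitKubeletHealthy(4 * time.Minute))
    }
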
	I0829 20:31:19.858714   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:22.351208   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:24.851679   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:27.351087   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.177614   68084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.189095849s)
	I0829 20:31:33.177712   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:31:33.202840   68084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:31:33.220648   68084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:31:33.239458   68084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:31:33.239479   68084 kubeadm.go:157] found existing configuration files:
	
	I0829 20:31:33.239519   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 20:31:33.257831   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:31:33.257900   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:31:33.272621   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 20:31:33.287906   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:31:33.287975   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:31:33.302931   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 20:31:33.312359   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:31:33.312411   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:31:33.322850   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 20:31:33.332224   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:31:33.332280   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:31:33.342072   68084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:31:33.388790   68084 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 20:31:33.388844   68084 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:31:33.506108   68084 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:31:33.506263   68084 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:31:33.506403   68084 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:31:33.515467   68084 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:31:29.921355   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:29.921591   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:29.351212   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:31.351683   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.850337   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.517487   68084 out.go:235]   - Generating certificates and keys ...
	I0829 20:31:33.517590   68084 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:31:33.517697   68084 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:31:33.517809   68084 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:31:33.517907   68084 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:31:33.518009   68084 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:31:33.518086   68084 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:31:33.518174   68084 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:31:33.518266   68084 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:31:33.518379   68084 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:31:33.518495   68084 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:31:33.518567   68084 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:31:33.518656   68084 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:31:33.888310   68084 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:31:34.000803   68084 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 20:31:34.103016   68084 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:31:34.461677   68084 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:31:34.617814   68084 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:31:34.618316   68084 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:31:34.622440   68084 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:31:34.624324   68084 out.go:235]   - Booting up control plane ...
	I0829 20:31:34.624428   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:31:34.624527   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:31:34.624882   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:31:34.647388   68084 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:31:34.653776   68084 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:31:34.653864   68084 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:31:34.795338   68084 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 20:31:34.795463   68084 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 20:31:35.797126   68084 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001854627s
	I0829 20:31:35.797253   68084 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 20:31:35.852495   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:37.344608   66841 pod_ready.go:82] duration metric: took 4m0.000461851s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" ...
	E0829 20:31:37.344637   66841 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 20:31:37.344661   66841 pod_ready.go:39] duration metric: took 4m13.033970527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:37.344693   66841 kubeadm.go:597] duration metric: took 4m20.095743839s to restartPrimaryControlPlane
	W0829 20:31:37.344752   66841 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:31:37.344780   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:31:40.799092   68084 kubeadm.go:310] [api-check] The API server is healthy after 5.002121632s
	I0829 20:31:40.813865   68084 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 20:31:40.829677   68084 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 20:31:40.870324   68084 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 20:31:40.870598   68084 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-145096 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 20:31:40.889024   68084 kubeadm.go:310] [bootstrap-token] Using token: gy9sl5.6oyya9sd2gbep67e
	I0829 20:31:40.890947   68084 out.go:235]   - Configuring RBAC rules ...
	I0829 20:31:40.891083   68084 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 20:31:40.898748   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 20:31:40.912914   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 20:31:40.916739   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 20:31:40.923995   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 20:31:40.930447   68084 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 20:31:41.206632   68084 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 20:31:41.679673   68084 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 20:31:42.206707   68084 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 20:31:42.206733   68084 kubeadm.go:310] 
	I0829 20:31:42.206819   68084 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 20:31:42.206830   68084 kubeadm.go:310] 
	I0829 20:31:42.206974   68084 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 20:31:42.206996   68084 kubeadm.go:310] 
	I0829 20:31:42.207018   68084 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 20:31:42.207073   68084 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 20:31:42.207120   68084 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 20:31:42.207127   68084 kubeadm.go:310] 
	I0829 20:31:42.207189   68084 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 20:31:42.207196   68084 kubeadm.go:310] 
	I0829 20:31:42.207234   68084 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 20:31:42.207238   68084 kubeadm.go:310] 
	I0829 20:31:42.207285   68084 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 20:31:42.207382   68084 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 20:31:42.207473   68084 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 20:31:42.207484   68084 kubeadm.go:310] 
	I0829 20:31:42.207611   68084 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 20:31:42.207727   68084 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 20:31:42.207736   68084 kubeadm.go:310] 
	I0829 20:31:42.207854   68084 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token gy9sl5.6oyya9sd2gbep67e \
	I0829 20:31:42.207962   68084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 20:31:42.207983   68084 kubeadm.go:310] 	--control-plane 
	I0829 20:31:42.207986   68084 kubeadm.go:310] 
	I0829 20:31:42.208087   68084 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 20:31:42.208106   68084 kubeadm.go:310] 
	I0829 20:31:42.208214   68084 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token gy9sl5.6oyya9sd2gbep67e \
	I0829 20:31:42.208342   68084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
	I0829 20:31:42.209248   68084 kubeadm.go:310] W0829 20:31:33.349141    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:31:42.209595   68084 kubeadm.go:310] W0829 20:31:33.349919    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:31:42.209769   68084 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
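The two deprecation warnings and the Service-Kubelet warning above come from kubeadm itself, and each names its own remediation. A minimal sketch of both fixes, using the binary and config paths that appear elsewhere in this run (the new-config filename is illustrative):

    # migrate the deprecated v1beta3 kubeadm config to the current API version
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm.new.yaml
    # enable the kubelet unit flagged by the Service-Kubelet warning
    sudo systemctl enable kubelet.service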
	I0829 20:31:42.209803   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:31:42.209817   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:31:42.211545   68084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:31:42.212889   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:31:42.223984   68084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
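The 496-byte conflist scp'd above is not printed in this log; a hypothetical bridge conflist of roughly that shape (content is illustrative only, the subnet and plugin options are assumptions, not the file minikube actually wrote) might be written like this:

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF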
	I0829 20:31:42.242703   68084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:31:42.242779   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-145096 minikube.k8s.io/updated_at=2024_08_29T20_31_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=default-k8s-diff-port-145096 minikube.k8s.io/primary=true
	I0829 20:31:42.242779   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:42.448824   68084 ops.go:34] apiserver oom_adj: -16
	I0829 20:31:42.453004   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:42.953891   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:43.453922   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:43.953465   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:44.453647   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:44.954035   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:45.453660   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:45.953536   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:46.046900   68084 kubeadm.go:1113] duration metric: took 3.804195127s to wait for elevateKubeSystemPrivileges
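The repeated "kubectl get sa default" lines above are a poll loop: minikube retries roughly every half second until the default service account exists, then creates the minikube-rbac clusterrolebinding. An equivalent shell sketch, assuming the same binary and kubeconfig paths as this run:

    # wait until the default service account exists before granting cluster-admin
    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done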
	I0829 20:31:46.046927   68084 kubeadm.go:394] duration metric: took 4m59.74590678s to StartCluster
	I0829 20:31:46.046947   68084 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:31:46.047046   68084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:31:46.048617   68084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:31:46.048876   68084 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:31:46.048979   68084 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:31:46.049063   68084 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049090   68084 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049090   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:31:46.049099   68084 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-145096"
	I0829 20:31:46.049136   68084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-145096"
	W0829 20:31:46.049143   68084 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:31:46.049174   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.049104   68084 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049264   68084 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-145096"
	W0829 20:31:46.049280   68084 addons.go:243] addon metrics-server should already be in state true
	I0829 20:31:46.049335   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.049569   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049574   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049595   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.049599   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.049698   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049722   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.050441   68084 out.go:177] * Verifying Kubernetes components...
	I0829 20:31:46.052039   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:31:46.065735   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39367
	I0829 20:31:46.065909   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32931
	I0829 20:31:46.066241   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.066344   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.066900   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.066918   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.067024   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.067045   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.067438   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.067481   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.067665   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.067902   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.067931   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.069157   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0829 20:31:46.070637   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.070757   68084 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-145096"
	W0829 20:31:46.070771   68084 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:31:46.070803   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.071118   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.071124   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.071132   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.071155   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.071510   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.072052   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.072095   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.085524   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39387
	I0829 20:31:46.085987   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.086553   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.086576   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.086966   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.087138   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.087202   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0829 20:31:46.087621   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.088358   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.088381   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.088708   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.088806   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.089193   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.089363   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.090878   68084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:31:46.091571   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I0829 20:31:46.092208   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.092291   68084 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:31:46.092316   68084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:31:46.092337   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.092660   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.092687   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.093044   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.093230   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.095184   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.096265   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.096792   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.096821   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.097088   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.097274   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.097433   68084 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:31:46.097448   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.097645   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.098681   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:31:46.098697   68084 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:31:46.098715   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.101604   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.101993   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.102014   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.102328   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.102529   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.102687   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.102847   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.108154   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I0829 20:31:46.108627   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.109111   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.109129   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.109446   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.109675   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.111174   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.111440   68084 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:31:46.111452   68084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:31:46.111469   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.114302   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.114805   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.114832   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.114921   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.115102   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.115256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.115400   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.277748   68084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:31:46.297001   68084 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-145096" to be "Ready" ...
	I0829 20:31:46.317473   68084 node_ready.go:49] node "default-k8s-diff-port-145096" has status "Ready":"True"
	I0829 20:31:46.317498   68084 node_ready.go:38] duration metric: took 20.469679ms for node "default-k8s-diff-port-145096" to be "Ready" ...
	I0829 20:31:46.317509   68084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:46.332180   68084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:46.393588   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:31:46.399404   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:31:46.399428   68084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:31:46.453014   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:31:46.460100   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:31:46.460126   68084 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:31:46.541980   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:31:46.542002   68084 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:31:46.607148   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
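After the four metrics-server manifests are applied, registration can be checked by hand. A hedged sketch, assuming the kubectl context matches the profile name (v1beta1.metrics.k8s.io is the standard APIService name for metrics-server; it is not printed in this log):

    # confirm the metrics-server APIService and Deployment exist
    kubectl --context default-k8s-diff-port-145096 get apiservice v1beta1.metrics.k8s.io
    kubectl --context default-k8s-diff-port-145096 -n kube-system get deploy metrics-server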
	I0829 20:31:47.296344   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296370   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.296445   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296471   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.296678   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.296722   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.296744   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296764   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.298376   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298379   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298404   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.298412   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298420   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298436   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.298453   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.298464   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.298700   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298726   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298729   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.318720   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.318745   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.319031   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.319053   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.319069   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.870171   68084 pod_ready.go:93] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:47.870198   68084 pod_ready.go:82] duration metric: took 1.537994965s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:47.870208   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.057308   68084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.450120563s)
	I0829 20:31:48.057358   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:48.057371   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:48.057667   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:48.057722   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:48.057734   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:48.057747   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:48.057759   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:48.057989   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:48.058005   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:48.058021   68084 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-145096"
	I0829 20:31:48.059886   68084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0829 20:31:48.061124   68084 addons.go:510] duration metric: took 2.012141801s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0829 20:31:48.875874   68084 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:48.875897   68084 pod_ready.go:82] duration metric: took 1.005682325s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.875912   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.879828   68084 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:48.879846   68084 pod_ready.go:82] duration metric: took 3.928263ms for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.879863   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:50.886764   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:49.922318   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:49.922554   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
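When the kubelet healthz probe fails as above (process 67607), the same probe and the unit state can be inspected directly on the node; the curl line is the exact call quoted in the log, the systemd/journalctl commands are standard diagnostics:

    # reproduce the failing probe and inspect the kubelet unit
    curl -sSL http://localhost:10248/healthz
    sudo systemctl status kubelet
    sudo journalctl -u kubelet --no-pager | tail -n 50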
	I0829 20:31:52.887708   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:55.387571   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:55.886194   68084 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:55.886217   68084 pod_ready.go:82] duration metric: took 7.006347256s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:55.886225   68084 pod_ready.go:39] duration metric: took 9.568704494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:55.886238   68084 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:31:55.886286   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:31:55.901604   68084 api_server.go:72] duration metric: took 9.852691692s to wait for apiserver process to appear ...
	I0829 20:31:55.901628   68084 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:31:55.901643   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:31:55.905564   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 200:
	ok
	I0829 20:31:55.906387   68084 api_server.go:141] control plane version: v1.31.0
	I0829 20:31:55.906406   68084 api_server.go:131] duration metric: took 4.772472ms to wait for apiserver health ...
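The healthz probe above can be reproduced by hand against the same endpoint; the IP, port, and "ok" body are all taken from this log:

    # the apiserver serves a cluster CA cert, hence -k for a quick manual check
    curl -k https://192.168.72.140:8444/healthz
    # expected body on success: ok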
	I0829 20:31:55.906413   68084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:31:55.911423   68084 system_pods.go:59] 9 kube-system pods found
	I0829 20:31:55.911444   68084 system_pods.go:61] "coredns-6f6b679f8f-l25kd" [86947930-0d47-407a-b876-b482596fbe8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.911451   68084 system_pods.go:61] "coredns-6f6b679f8f-lnm92" [a6caefe0-e883-4460-87de-25ee97191e1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.911458   68084 system_pods.go:61] "etcd-default-k8s-diff-port-145096" [caba3f17-6544-4fe0-8dd3-0dd95e8df8ce] Running
	I0829 20:31:55.911465   68084 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-145096" [9b1ca00a-613b-414f-81e9-601d53d43207] Running
	I0829 20:31:55.911470   68084 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-145096" [e7145779-85cf-458d-9870-6fda4853d29d] Running
	I0829 20:31:55.911479   68084 system_pods.go:61] "kube-proxy-ptswc" [96c01414-e8e8-4731-824b-11d636285fb3] Running
	I0829 20:31:55.911488   68084 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-145096" [0d2cc607-72ac-4417-8a7c-196bf3ec90d7] Running
	I0829 20:31:55.911495   68084 system_pods.go:61] "metrics-server-6867b74b74-6sdqg" [2c9efadb-89bb-4aa6-b0f0-ddcb3e931674] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:31:55.911503   68084 system_pods.go:61] "storage-provisioner" [81531989-d045-44fb-b1a1-0817af27c804] Running
	I0829 20:31:55.911512   68084 system_pods.go:74] duration metric: took 5.092824ms to wait for pod list to return data ...
	I0829 20:31:55.911523   68084 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:31:55.913794   68084 default_sa.go:45] found service account: "default"
	I0829 20:31:55.913820   68084 default_sa.go:55] duration metric: took 2.286925ms for default service account to be created ...
	I0829 20:31:55.913830   68084 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:31:55.919628   68084 system_pods.go:86] 9 kube-system pods found
	I0829 20:31:55.919666   68084 system_pods.go:89] "coredns-6f6b679f8f-l25kd" [86947930-0d47-407a-b876-b482596fbe8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.919677   68084 system_pods.go:89] "coredns-6f6b679f8f-lnm92" [a6caefe0-e883-4460-87de-25ee97191e1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.919686   68084 system_pods.go:89] "etcd-default-k8s-diff-port-145096" [caba3f17-6544-4fe0-8dd3-0dd95e8df8ce] Running
	I0829 20:31:55.919693   68084 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-145096" [9b1ca00a-613b-414f-81e9-601d53d43207] Running
	I0829 20:31:55.919699   68084 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-145096" [e7145779-85cf-458d-9870-6fda4853d29d] Running
	I0829 20:31:55.919704   68084 system_pods.go:89] "kube-proxy-ptswc" [96c01414-e8e8-4731-824b-11d636285fb3] Running
	I0829 20:31:55.919710   68084 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-145096" [0d2cc607-72ac-4417-8a7c-196bf3ec90d7] Running
	I0829 20:31:55.919718   68084 system_pods.go:89] "metrics-server-6867b74b74-6sdqg" [2c9efadb-89bb-4aa6-b0f0-ddcb3e931674] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:31:55.919725   68084 system_pods.go:89] "storage-provisioner" [81531989-d045-44fb-b1a1-0817af27c804] Running
	I0829 20:31:55.919734   68084 system_pods.go:126] duration metric: took 5.897752ms to wait for k8s-apps to be running ...
	I0829 20:31:55.919745   68084 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:31:55.919800   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:31:55.935429   68084 system_svc.go:56] duration metric: took 15.676316ms WaitForService to wait for kubelet
	I0829 20:31:55.935460   68084 kubeadm.go:582] duration metric: took 9.886551311s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:31:55.935483   68084 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:31:55.938444   68084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:31:55.938466   68084 node_conditions.go:123] node cpu capacity is 2
	I0829 20:31:55.938476   68084 node_conditions.go:105] duration metric: took 2.988434ms to run NodePressure ...
	I0829 20:31:55.938486   68084 start.go:241] waiting for startup goroutines ...
	I0829 20:31:55.938493   68084 start.go:246] waiting for cluster config update ...
	I0829 20:31:55.938503   68084 start.go:255] writing updated cluster config ...
	I0829 20:31:55.938834   68084 ssh_runner.go:195] Run: rm -f paused
	I0829 20:31:55.987879   68084 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:31:55.989766   68084 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-145096" cluster and "default" namespace by default
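Once a profile reports Done, the kubeconfig context is set to the profile name, so a quick smoke test might look like this (a sketch, assuming the context name matches the profile as the Done message states):

    kubectl config current-context   # default-k8s-diff-port-145096
    kubectl get nodes -o wide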
	I0829 20:32:03.506190   66841 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.161387814s)
	I0829 20:32:03.506268   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:03.530660   66841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:32:03.550784   66841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:32:03.565054   66841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:32:03.565085   66841 kubeadm.go:157] found existing configuration files:
	
	I0829 20:32:03.565131   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:32:03.586492   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:32:03.586577   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:32:03.605061   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:32:03.617990   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:32:03.618054   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:32:03.635587   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:32:03.645495   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:32:03.645559   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:32:03.655081   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:32:03.664640   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:32:03.664703   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
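The four grep-then-rm passes above all follow one pattern: if the expected control-plane URL is not found in a config file, the file is removed. A compact equivalent using the same files and marker string as the log:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done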
	I0829 20:32:03.674097   66841 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:32:03.721087   66841 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 20:32:03.721155   66841 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:32:03.839829   66841 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:32:03.839985   66841 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:32:03.840079   66841 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:32:03.849047   66841 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:32:03.850883   66841 out.go:235]   - Generating certificates and keys ...
	I0829 20:32:03.850970   66841 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:32:03.851045   66841 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:32:03.851129   66841 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:32:03.851222   66841 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:32:03.851292   66841 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:32:03.851340   66841 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:32:03.851399   66841 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:32:03.851450   66841 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:32:03.851515   66841 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:32:03.851620   66841 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:32:03.851687   66841 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:32:03.851755   66841 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:32:03.968189   66841 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:32:04.253016   66841 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 20:32:04.341190   66841 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:32:04.491607   66841 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:32:04.616753   66841 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:32:04.617354   66841 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:32:04.619961   66841 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:32:04.621690   66841 out.go:235]   - Booting up control plane ...
	I0829 20:32:04.621799   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:32:04.621910   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:32:04.622021   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:32:04.643758   66841 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:32:04.650541   66841 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:32:04.650612   66841 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:32:04.786596   66841 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 20:32:04.786755   66841 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 20:32:05.788381   66841 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001614523s
	I0829 20:32:05.788512   66841 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 20:32:10.789752   66841 kubeadm.go:310] [api-check] The API server is healthy after 5.001571241s
	I0829 20:32:10.803237   66841 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 20:32:10.822640   66841 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 20:32:10.845744   66841 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 20:32:10.846050   66841 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-397724 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 20:32:10.856315   66841 kubeadm.go:310] [bootstrap-token] Using token: 3k2s43.7gy6mzkt91kkied7
	I0829 20:32:10.857834   66841 out.go:235]   - Configuring RBAC rules ...
	I0829 20:32:10.857947   66841 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 20:32:10.867339   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 20:32:10.876522   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 20:32:10.879786   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 20:32:10.885043   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 20:32:10.892077   66841 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 20:32:11.196796   66841 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 20:32:11.630072   66841 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 20:32:12.200197   66841 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 20:32:12.200232   66841 kubeadm.go:310] 
	I0829 20:32:12.200314   66841 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 20:32:12.200326   66841 kubeadm.go:310] 
	I0829 20:32:12.200406   66841 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 20:32:12.200416   66841 kubeadm.go:310] 
	I0829 20:32:12.200450   66841 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 20:32:12.200536   66841 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 20:32:12.200606   66841 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 20:32:12.200616   66841 kubeadm.go:310] 
	I0829 20:32:12.200687   66841 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 20:32:12.200700   66841 kubeadm.go:310] 
	I0829 20:32:12.200744   66841 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 20:32:12.200750   66841 kubeadm.go:310] 
	I0829 20:32:12.200793   66841 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 20:32:12.200861   66841 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 20:32:12.200918   66841 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 20:32:12.200924   66841 kubeadm.go:310] 
	I0829 20:32:12.201048   66841 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 20:32:12.201144   66841 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 20:32:12.201152   66841 kubeadm.go:310] 
	I0829 20:32:12.201255   66841 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3k2s43.7gy6mzkt91kkied7 \
	I0829 20:32:12.201373   66841 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 20:32:12.201400   66841 kubeadm.go:310] 	--control-plane 
	I0829 20:32:12.201411   66841 kubeadm.go:310] 
	I0829 20:32:12.201487   66841 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 20:32:12.201495   66841 kubeadm.go:310] 
	I0829 20:32:12.201574   66841 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3k2s43.7gy6mzkt91kkied7 \
	I0829 20:32:12.201710   66841 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
	I0829 20:32:12.202900   66841 kubeadm.go:310] W0829 20:32:03.691334    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:32:12.203223   66841 kubeadm.go:310] W0829 20:32:03.692151    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:32:12.203339   66841 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:32:12.203366   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:32:12.203381   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:32:12.205733   66841 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:32:12.206905   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:32:12.218121   66841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 20:32:12.237885   66841 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:32:12.237989   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:12.238006   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-397724 minikube.k8s.io/updated_at=2024_08_29T20_32_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=no-preload-397724 minikube.k8s.io/primary=true
	I0829 20:32:12.282191   66841 ops.go:34] apiserver oom_adj: -16
	I0829 20:32:12.430006   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:12.930327   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:13.430210   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:13.930065   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:14.430163   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:14.930189   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:15.430677   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:15.930670   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:16.430943   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:16.549095   66841 kubeadm.go:1113] duration metric: took 4.311165714s to wait for elevateKubeSystemPrivileges
	I0829 20:32:16.549136   66841 kubeadm.go:394] duration metric: took 4m59.355577107s to StartCluster
	I0829 20:32:16.549156   66841 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:32:16.549229   66841 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:32:16.550926   66841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:32:16.551141   66841 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:32:16.551202   66841 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:32:16.551291   66841 addons.go:69] Setting storage-provisioner=true in profile "no-preload-397724"
	I0829 20:32:16.551315   66841 addons.go:69] Setting default-storageclass=true in profile "no-preload-397724"
	I0829 20:32:16.551329   66841 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:32:16.551340   66841 addons.go:69] Setting metrics-server=true in profile "no-preload-397724"
	I0829 20:32:16.551389   66841 addons.go:234] Setting addon metrics-server=true in "no-preload-397724"
	W0829 20:32:16.551404   66841 addons.go:243] addon metrics-server should already be in state true
	I0829 20:32:16.551442   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.551360   66841 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-397724"
	I0829 20:32:16.551324   66841 addons.go:234] Setting addon storage-provisioner=true in "no-preload-397724"
	W0829 20:32:16.551673   66841 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:32:16.551705   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.551872   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.551873   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.551908   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.551929   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.552036   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.552065   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.552634   66841 out.go:177] * Verifying Kubernetes components...
	I0829 20:32:16.553973   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:32:16.567797   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43335
	I0829 20:32:16.568321   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.568884   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.568910   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.569328   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.569941   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.569978   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.573055   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0829 20:32:16.573642   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0829 20:32:16.573770   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.574303   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.574321   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.574394   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.574913   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.574933   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.574935   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.575471   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.575511   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.575724   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.575950   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.579912   66841 addons.go:234] Setting addon default-storageclass=true in "no-preload-397724"
	W0829 20:32:16.579932   66841 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:32:16.579960   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.580281   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.580298   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.591264   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
	I0829 20:32:16.591442   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0829 20:32:16.591777   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.591827   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.592275   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.592289   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.592289   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.592307   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.592702   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.592726   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.592881   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.592882   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.594494   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.594956   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.596431   66841 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:32:16.596433   66841 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:32:16.597503   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:32:16.597524   66841 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:32:16.597547   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.597607   66841 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:32:16.597625   66841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:32:16.597641   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.598780   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32841
	I0829 20:32:16.599272   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.599915   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.599937   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.601210   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.601613   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.601965   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.602159   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.602190   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.602328   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.602867   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.602998   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.603188   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.603234   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.603287   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.603434   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.603487   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.603691   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.603708   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.603857   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.603977   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.619336   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0829 20:32:16.619806   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.620269   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.620286   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.620604   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.620818   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.622348   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.622563   66841 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:32:16.622580   66841 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:32:16.622597   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.625203   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.625542   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.625570   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.625746   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.625934   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.626094   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.626266   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.787525   66841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:32:16.817674   66841 node_ready.go:35] waiting up to 6m0s for node "no-preload-397724" to be "Ready" ...
	I0829 20:32:16.833992   66841 node_ready.go:49] node "no-preload-397724" has status "Ready":"True"
	I0829 20:32:16.834030   66841 node_ready.go:38] duration metric: took 16.322874ms for node "no-preload-397724" to be "Ready" ...
	I0829 20:32:16.834042   66841 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:32:16.843147   66841 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:16.902589   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:32:16.902613   66841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:32:16.902859   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:32:16.903193   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:32:16.922497   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:32:16.922518   66841 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:32:16.966207   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:32:16.966240   66841 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:32:17.004882   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:32:17.204576   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.204613   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.204968   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.204987   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.204995   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.204994   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.205002   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.205261   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.205278   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.211789   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.211811   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.212074   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.212089   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.212119   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.902866   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.902897   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.903218   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.903266   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.903278   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.903286   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.903296   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.903556   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.903572   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.344211   66841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.33928059s)
	I0829 20:32:18.344259   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:18.344274   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:18.344571   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:18.344589   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.344611   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:18.344626   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:18.344948   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:18.344980   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:18.345010   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.345025   66841 addons.go:475] Verifying addon metrics-server=true in "no-preload-397724"
	I0829 20:32:18.346919   66841 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 20:32:18.348704   66841 addons.go:510] duration metric: took 1.797503952s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
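The addon flow above copies each manifest to /etc/kubernetes/addons on the node and applies it with the cluster's bundled kubectl. A minimal sketch of checking the result by hand, assuming the context name no-preload-397724 from the log and a deployment named metrics-server (inferred from the pod name later in this log, not stated explicitly here):

	# Hypothetical manual verification of the metrics-server addon (sketch):
	kubectl --context no-preload-397724 -n kube-system get deploy metrics-server -o wide
	kubectl --context no-preload-397724 -n kube-system rollout status deploy/metrics-server --timeout=2m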
	I0829 20:32:18.850832   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:18.850853   66841 pod_ready.go:82] duration metric: took 2.007683093s for pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:18.850862   66841 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.357679   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.357702   66841 pod_ready.go:82] duration metric: took 1.506832539s for pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.357710   66841 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.361830   66841 pod_ready.go:93] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.361854   66841 pod_ready.go:82] duration metric: took 4.136801ms for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.361865   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.365719   66841 pod_ready.go:93] pod "kube-apiserver-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.365733   66841 pod_ready.go:82] duration metric: took 3.861894ms for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.365741   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.369596   66841 pod_ready.go:93] pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.369611   66841 pod_ready.go:82] duration metric: took 3.864669ms for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.369619   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f4x4j" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.447788   66841 pod_ready.go:93] pod "kube-proxy-f4x4j" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.447812   66841 pod_ready.go:82] duration metric: took 78.187574ms for pod "kube-proxy-f4x4j" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.447823   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:22.049084   66841 pod_ready.go:93] pod "kube-scheduler-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:22.049105   66841 pod_ready.go:82] duration metric: took 1.601276793s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:22.049113   66841 pod_ready.go:39] duration metric: took 5.215058301s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
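The per-pod waits above (pod_ready.go) poll each system-critical pod for the Ready condition. Roughly the same check can be expressed with kubectl's built-in waiting; a sketch assuming the same context and two of the label selectors quoted in the log:

	# Sketch of the readiness waits using kubectl wait (selectors from the log):
	kubectl --context no-preload-397724 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	kubectl --context no-preload-397724 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m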
	I0829 20:32:22.049125   66841 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:32:22.049172   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:32:22.066060   66841 api_server.go:72] duration metric: took 5.514888299s to wait for apiserver process to appear ...
	I0829 20:32:22.066086   66841 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:32:22.066109   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:32:22.072343   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I0829 20:32:22.073798   66841 api_server.go:141] control plane version: v1.31.0
	I0829 20:32:22.073821   66841 api_server.go:131] duration metric: took 7.728095ms to wait for apiserver health ...
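The healthz wait above is an HTTPS GET against the apiserver endpoint shown in the log. A rough host-side equivalent, assuming the profile's client certificates live under the standard .minikube layout (the profiles/... paths are an assumption, not taken from this log):

	# Sketch of the probe behind api_server.go:253 (cert paths assumed):
	MK=/home/jenkins/minikube-integration/19530-11185/.minikube
	curl --cacert "$MK/ca.crt" \
	     --cert "$MK/profiles/no-preload-397724/client.crt" \
	     --key  "$MK/profiles/no-preload-397724/client.key" \
	     https://192.168.50.214:8443/healthz    # expect: ok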
	I0829 20:32:22.073828   66841 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:32:22.252273   66841 system_pods.go:59] 9 kube-system pods found
	I0829 20:32:22.252302   66841 system_pods.go:61] "coredns-6f6b679f8f-crgtj" [c48571a8-18ae-4737-a05b-4a77736aee35] Running
	I0829 20:32:22.252309   66841 system_pods.go:61] "coredns-6f6b679f8f-dw2r7" [6edda799-e2d6-402b-b4cd-7e54b2b89ca5] Running
	I0829 20:32:22.252315   66841 system_pods.go:61] "etcd-no-preload-397724" [15473208-a76c-4bc5-810f-e78d59538493] Running
	I0829 20:32:22.252320   66841 system_pods.go:61] "kube-apiserver-no-preload-397724" [521c6041-888f-4145-aabb-54da7382953d] Running
	I0829 20:32:22.252325   66841 system_pods.go:61] "kube-controller-manager-no-preload-397724" [fd5afaf8-898d-4985-8efc-5628709a52cd] Running
	I0829 20:32:22.252329   66841 system_pods.go:61] "kube-proxy-f4x4j" [eb76dc5a-016a-416c-8880-f76fc2d2a9bb] Running
	I0829 20:32:22.252333   66841 system_pods.go:61] "kube-scheduler-no-preload-397724" [77d9e2de-ee8e-4cb2-a7f0-5d9b96bd9691] Running
	I0829 20:32:22.252342   66841 system_pods.go:61] "metrics-server-6867b74b74-nxdc5" [6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:32:22.252348   66841 system_pods.go:61] "storage-provisioner" [8b6c02d6-7a39-4fea-80b4-4ba02904232c] Running
	I0829 20:32:22.252358   66841 system_pods.go:74] duration metric: took 178.523887ms to wait for pod list to return data ...
	I0829 20:32:22.252370   66841 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:32:22.448475   66841 default_sa.go:45] found service account: "default"
	I0829 20:32:22.448499   66841 default_sa.go:55] duration metric: took 196.123693ms for default service account to be created ...
	I0829 20:32:22.448508   66841 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:32:22.650996   66841 system_pods.go:86] 9 kube-system pods found
	I0829 20:32:22.651023   66841 system_pods.go:89] "coredns-6f6b679f8f-crgtj" [c48571a8-18ae-4737-a05b-4a77736aee35] Running
	I0829 20:32:22.651029   66841 system_pods.go:89] "coredns-6f6b679f8f-dw2r7" [6edda799-e2d6-402b-b4cd-7e54b2b89ca5] Running
	I0829 20:32:22.651033   66841 system_pods.go:89] "etcd-no-preload-397724" [15473208-a76c-4bc5-810f-e78d59538493] Running
	I0829 20:32:22.651037   66841 system_pods.go:89] "kube-apiserver-no-preload-397724" [521c6041-888f-4145-aabb-54da7382953d] Running
	I0829 20:32:22.651042   66841 system_pods.go:89] "kube-controller-manager-no-preload-397724" [fd5afaf8-898d-4985-8efc-5628709a52cd] Running
	I0829 20:32:22.651045   66841 system_pods.go:89] "kube-proxy-f4x4j" [eb76dc5a-016a-416c-8880-f76fc2d2a9bb] Running
	I0829 20:32:22.651048   66841 system_pods.go:89] "kube-scheduler-no-preload-397724" [77d9e2de-ee8e-4cb2-a7f0-5d9b96bd9691] Running
	I0829 20:32:22.651054   66841 system_pods.go:89] "metrics-server-6867b74b74-nxdc5" [6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:32:22.651058   66841 system_pods.go:89] "storage-provisioner" [8b6c02d6-7a39-4fea-80b4-4ba02904232c] Running
	I0829 20:32:22.651065   66841 system_pods.go:126] duration metric: took 202.552304ms to wait for k8s-apps to be running ...
	I0829 20:32:22.651071   66841 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:32:22.651111   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:22.666831   66841 system_svc.go:56] duration metric: took 15.753046ms WaitForService to wait for kubelet
	I0829 20:32:22.666863   66841 kubeadm.go:582] duration metric: took 6.115692499s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:32:22.666888   66841 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:32:22.848742   66841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:32:22.848766   66841 node_conditions.go:123] node cpu capacity is 2
	I0829 20:32:22.848777   66841 node_conditions.go:105] duration metric: took 181.884368ms to run NodePressure ...
	I0829 20:32:22.848787   66841 start.go:241] waiting for startup goroutines ...
	I0829 20:32:22.848794   66841 start.go:246] waiting for cluster config update ...
	I0829 20:32:22.848803   66841 start.go:255] writing updated cluster config ...
	I0829 20:32:22.849030   66841 ssh_runner.go:195] Run: rm -f paused
	I0829 20:32:22.897503   66841 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:32:22.899404   66841 out.go:177] * Done! kubectl is now configured to use "no-preload-397724" cluster and "default" namespace by default
	I0829 20:32:29.924469   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:32:29.924707   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:32:29.924729   67607 kubeadm.go:310] 
	I0829 20:32:29.924801   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:32:29.924855   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:32:29.924865   67607 kubeadm.go:310] 
	I0829 20:32:29.924912   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:32:29.924960   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:32:29.925080   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:32:29.925090   67607 kubeadm.go:310] 
	I0829 20:32:29.925207   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:32:29.925256   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:32:29.925316   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:32:29.925342   67607 kubeadm.go:310] 
	I0829 20:32:29.925493   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:32:29.925616   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 20:32:29.925627   67607 kubeadm.go:310] 
	I0829 20:32:29.925776   67607 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:32:29.925909   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:32:29.926016   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:32:29.926134   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:32:29.926154   67607 kubeadm.go:310] 
	I0829 20:32:29.926605   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:32:29.926723   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:32:29.926812   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
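The kubelet-check lines above are kubeadm polling the kubelet's local healthz endpoint on port 10248 until the wait-control-plane phase times out. The probe, and the follow-up steps kubeadm recommends, can be run verbatim on the node (all commands quoted from the message itself):

	# Reproducing kubeadm's kubelet health probe on the node:
	curl -sSL http://localhost:10248/healthz    # returns "ok" once the kubelet is serving
	systemctl status kubelet                    # suggested when the probe fails
	journalctl -xeu kubelet                     # recent kubelet log entries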
	W0829 20:32:29.926935   67607 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0829 20:32:29.926979   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:32:30.389951   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:30.408455   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:32:30.418493   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:32:30.418513   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:32:30.418582   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:32:30.427909   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:32:30.427957   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:32:30.437122   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:32:30.446157   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:32:30.446203   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:32:30.455480   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.464781   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:32:30.464834   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.474607   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:32:30.484537   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:32:30.484601   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
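The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init is retried. A condensed sketch of the same sequence (file names and endpoint taken from the log):

	# Equivalent of the check-and-remove sequence above:
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"    # drop configs missing the expected endpoint
	done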
	I0829 20:32:30.494170   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:32:30.717349   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:34:26.784436   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:34:26.784518   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 20:34:26.786158   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:34:26.786196   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:34:26.786276   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:34:26.786353   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:34:26.786437   67607 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:34:26.786486   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:34:26.788271   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:34:26.788380   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:34:26.788453   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:34:26.788523   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:34:26.788593   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:34:26.788665   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:34:26.788714   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:34:26.788769   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:34:26.788826   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:34:26.788894   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:34:26.788961   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:34:26.788993   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:34:26.789044   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:34:26.789084   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:34:26.789143   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:34:26.789228   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:34:26.789312   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:34:26.789441   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:34:26.789577   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:34:26.789647   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:34:26.789717   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:34:26.791166   67607 out.go:235]   - Booting up control plane ...
	I0829 20:34:26.791239   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:34:26.791305   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:34:26.791382   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:34:26.791465   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:34:26.791597   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:34:26.791658   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:34:26.791736   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.791926   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792008   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792182   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792254   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792435   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792492   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792725   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792798   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.793026   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.793043   67607 kubeadm.go:310] 
	I0829 20:34:26.793091   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:34:26.793148   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:34:26.793159   67607 kubeadm.go:310] 
	I0829 20:34:26.793188   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:34:26.793219   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:34:26.793305   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:34:26.793314   67607 kubeadm.go:310] 
	I0829 20:34:26.793438   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:34:26.793483   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:34:26.793515   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:34:26.793522   67607 kubeadm.go:310] 
	I0829 20:34:26.793618   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:34:26.793735   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 20:34:26.793748   67607 kubeadm.go:310] 
	I0829 20:34:26.793895   67607 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:34:26.794020   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:34:26.794125   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:34:26.794227   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:34:26.794285   67607 kubeadm.go:310] 
	I0829 20:34:26.794300   67607 kubeadm.go:394] duration metric: took 7m57.183485424s to StartCluster
	I0829 20:34:26.794357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:34:26.794410   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:34:26.837033   67607 cri.go:89] found id: ""
	I0829 20:34:26.837072   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.837083   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:34:26.837091   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:34:26.837153   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:34:26.871177   67607 cri.go:89] found id: ""
	I0829 20:34:26.871203   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.871213   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:34:26.871220   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:34:26.871280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:34:26.905409   67607 cri.go:89] found id: ""
	I0829 20:34:26.905432   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.905442   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:34:26.905450   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:34:26.905509   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:34:26.940119   67607 cri.go:89] found id: ""
	I0829 20:34:26.940150   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.940161   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:34:26.940169   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:34:26.940217   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:34:26.974555   67607 cri.go:89] found id: ""
	I0829 20:34:26.974589   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.974601   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:34:26.974608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:34:26.974674   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:34:27.010586   67607 cri.go:89] found id: ""
	I0829 20:34:27.010616   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.010631   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:34:27.010639   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:34:27.010704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:34:27.044867   67607 cri.go:89] found id: ""
	I0829 20:34:27.044900   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.044913   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:34:27.044921   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:34:27.044979   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:34:27.079282   67607 cri.go:89] found id: ""
	I0829 20:34:27.079308   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.079316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:34:27.079323   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:34:27.079335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:34:27.093455   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:34:27.093485   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:34:27.179256   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:34:27.179280   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:34:27.179292   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:34:27.305873   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:34:27.305906   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:34:27.349676   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:34:27.349702   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
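With no control-plane containers found, minikube falls back to host-level diagnostics. The host-side commands from the "Gathering logs for" steps above (dmesg, CRI-O journal, container status, kubelet journal) can be bundled into a single pass; every command below is quoted from the Run: lines in this section:

	# One-shot diagnostics bundle mirroring the gathering steps above:
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a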
	W0829 20:34:27.399787   67607 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 20:34:27.399851   67607 out.go:270] * 
	W0829 20:34:27.399907   67607 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 20:34:27.399919   67607 out.go:270] * 
	W0829 20:34:27.400631   67607 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 20:34:27.403773   67607 out.go:201] 
	W0829 20:34:27.404902   67607 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 20:34:27.404953   67607 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 20:34:27.404981   67607 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 20:34:27.406310   67607 out.go:201] 
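
	The exit above is the kubelet-never-came-up failure path: kubeadm polled the kubelet's health endpoint on port 10248 until the 4m0s wait-control-plane timeout expired, and minikube's own suggestion points at a cgroup-driver mismatch between the kubelet and CRI-O. A minimal triage sequence assembled from the commands the log itself recommends (a sketch, not part of the captured output; the <profile> placeholder is illustrative, substitute the failing profile name):

	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd   # retry with the kubelet pinned to the systemd cgroup driver, per the suggestion above
	minikube ssh -p <profile>                                                  # then, inside the VM:
	sudo systemctl status kubelet                                              # is the unit running at all?
	sudo journalctl -xeu kubelet | tail -n 50                                  # why did it die?
	curl -sSL http://localhost:10248/healthz                                   # the probe kubeadm was polling; a healthy kubelet answers "ok"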
	
	
	==> CRI-O <==
	Aug 29 20:46:11 no-preload-397724 crio[709]: time="2024-08-29 20:46:11.958853537Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964371958830955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0c345ec-ac9c-424b-aed6-396e26e97817 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:11 no-preload-397724 crio[709]: time="2024-08-29 20:46:11.959427936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41749a71-655a-41da-a712-3dda95f73288 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:11 no-preload-397724 crio[709]: time="2024-08-29 20:46:11.959539863Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41749a71-655a-41da-a712-3dda95f73288 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:11 no-preload-397724 crio[709]: time="2024-08-29 20:46:11.959730154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c24672f1f33fab0a363ffe9b15191c033aad911504fe4e0cbbb0c54723fc61d,PodSandboxId:c0f16c27a76d494a7575865408ce9d80ab96b703ee14687cc914a0ad479ebdb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963538528655210,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6c02d6-7a39-4fea-80b4-4ba02904232c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2452b7e2499acfc2dda22691859915ef9b81d7ffa9020ad6044fe06095263,PodSandboxId:b6a71d1267b1556242a109dff2d1c47914b3e44249c44e2e79491bfc580ab454,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537998920202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dw2r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6edda799-e2d6-402b-b4cd-7e54b2b89ca5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17dc0c760cefbdf280368ae8c71df7387ad9525e6132df6f10fc3d87b5febc4,PodSandboxId:9db6339227b53906ea9bd813539fc15515997e0afaa8bffd10f0627a9c54b0d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537921508774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-crgtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4
8571a8-18ae-4737-a05b-4a77736aee35,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c37f561b080360811f0545e64ed61159766fde4836a414f54eb67de63ca057,PodSandboxId:bd7adf539fa384b63c54bfed8c530511359adc9cd0c7c18ae12e7f6227c93a6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724963537480521948,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4x4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb76dc5a-016a-416c-8880-f76fc2d2a9bb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881dd3e8ca191bf84b9cc769c8e846c152f9ad7f469e4427002254e2cb7b68df,PodSandboxId:53daf6f331dc11caec7e319b2e0e1d7d90feb2ea32ead44369bdba115bda9776,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963526065761939,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c22a6e7c8785486b291da7b93159617,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94f96d5c169a8e81bd28b96c8f06b5aad7264b27f5af8cbff1efd443c250d2e,PodSandboxId:46d7177c94efa372b28fab29aa3b74e62e75fa40225a7124f7f226a1ef213c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963526016815685,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bae575fecaf12208b623dd35e7c88b4f7b71a2552366e495103d134160e9a9,PodSandboxId:858897f5bdf7b4100e9cf511241850678dccf7b17f893ffe9ae203017fd7c2e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963526012860464,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ef3e1a8cc24cb25fbef1929ff100cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195ec5617f33e9a305c0aed8715efa8f6a9dff5ccf8e7048450a4deb2876fdc0,PodSandboxId:16813450eb8a8992ef9278b776f981a88ebcec8713d93b3fc5cf2a3a5d561cf6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963525948309522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ae8907a140762fbc0f45d1cffb624,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7a88c36402647c6d49b743bc7067f62fc094cebffedd5a342d34603edb45ae,PodSandboxId:11b530f514ff60ea17a5981f4f7734ea771a8480105aec8407c552dece3f6554,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724963239022892416,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41749a71-655a-41da-a712-3dda95f73288 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.000622038Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36383546-b50f-40e8-a549-5b3065d41fa0 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.000730867Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36383546-b50f-40e8-a549-5b3065d41fa0 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.002279537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cfceef7d-f5f7-4232-a336-e30655b927f9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.002715066Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964372002691805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cfceef7d-f5f7-4232-a336-e30655b927f9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.003540514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08d5724a-97fc-47fc-8b10-f251fb6d5761 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.003641413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08d5724a-97fc-47fc-8b10-f251fb6d5761 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.003910260Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c24672f1f33fab0a363ffe9b15191c033aad911504fe4e0cbbb0c54723fc61d,PodSandboxId:c0f16c27a76d494a7575865408ce9d80ab96b703ee14687cc914a0ad479ebdb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963538528655210,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6c02d6-7a39-4fea-80b4-4ba02904232c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2452b7e2499acfc2dda22691859915ef9b81d7ffa9020ad6044fe06095263,PodSandboxId:b6a71d1267b1556242a109dff2d1c47914b3e44249c44e2e79491bfc580ab454,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537998920202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dw2r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6edda799-e2d6-402b-b4cd-7e54b2b89ca5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17dc0c760cefbdf280368ae8c71df7387ad9525e6132df6f10fc3d87b5febc4,PodSandboxId:9db6339227b53906ea9bd813539fc15515997e0afaa8bffd10f0627a9c54b0d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537921508774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-crgtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4
8571a8-18ae-4737-a05b-4a77736aee35,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c37f561b080360811f0545e64ed61159766fde4836a414f54eb67de63ca057,PodSandboxId:bd7adf539fa384b63c54bfed8c530511359adc9cd0c7c18ae12e7f6227c93a6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724963537480521948,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4x4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb76dc5a-016a-416c-8880-f76fc2d2a9bb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881dd3e8ca191bf84b9cc769c8e846c152f9ad7f469e4427002254e2cb7b68df,PodSandboxId:53daf6f331dc11caec7e319b2e0e1d7d90feb2ea32ead44369bdba115bda9776,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963526065761939,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c22a6e7c8785486b291da7b93159617,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94f96d5c169a8e81bd28b96c8f06b5aad7264b27f5af8cbff1efd443c250d2e,PodSandboxId:46d7177c94efa372b28fab29aa3b74e62e75fa40225a7124f7f226a1ef213c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963526016815685,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bae575fecaf12208b623dd35e7c88b4f7b71a2552366e495103d134160e9a9,PodSandboxId:858897f5bdf7b4100e9cf511241850678dccf7b17f893ffe9ae203017fd7c2e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963526012860464,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ef3e1a8cc24cb25fbef1929ff100cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195ec5617f33e9a305c0aed8715efa8f6a9dff5ccf8e7048450a4deb2876fdc0,PodSandboxId:16813450eb8a8992ef9278b776f981a88ebcec8713d93b3fc5cf2a3a5d561cf6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963525948309522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ae8907a140762fbc0f45d1cffb624,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7a88c36402647c6d49b743bc7067f62fc094cebffedd5a342d34603edb45ae,PodSandboxId:11b530f514ff60ea17a5981f4f7734ea771a8480105aec8407c552dece3f6554,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724963239022892416,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08d5724a-97fc-47fc-8b10-f251fb6d5761 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.043649133Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3b8cc93-364a-4787-9222-52f9a93db7ec name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.043779646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3b8cc93-364a-4787-9222-52f9a93db7ec name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.044979382Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6b29e4e-cb6f-42f7-9247-b27b487997b5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.045336435Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964372045313929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6b29e4e-cb6f-42f7-9247-b27b487997b5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.046046721Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09cf39c0-5410-4e41-96db-96854dbe7ad6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.046115755Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09cf39c0-5410-4e41-96db-96854dbe7ad6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.046310429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c24672f1f33fab0a363ffe9b15191c033aad911504fe4e0cbbb0c54723fc61d,PodSandboxId:c0f16c27a76d494a7575865408ce9d80ab96b703ee14687cc914a0ad479ebdb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963538528655210,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6c02d6-7a39-4fea-80b4-4ba02904232c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2452b7e2499acfc2dda22691859915ef9b81d7ffa9020ad6044fe06095263,PodSandboxId:b6a71d1267b1556242a109dff2d1c47914b3e44249c44e2e79491bfc580ab454,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537998920202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dw2r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6edda799-e2d6-402b-b4cd-7e54b2b89ca5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17dc0c760cefbdf280368ae8c71df7387ad9525e6132df6f10fc3d87b5febc4,PodSandboxId:9db6339227b53906ea9bd813539fc15515997e0afaa8bffd10f0627a9c54b0d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537921508774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-crgtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4
8571a8-18ae-4737-a05b-4a77736aee35,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c37f561b080360811f0545e64ed61159766fde4836a414f54eb67de63ca057,PodSandboxId:bd7adf539fa384b63c54bfed8c530511359adc9cd0c7c18ae12e7f6227c93a6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724963537480521948,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4x4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb76dc5a-016a-416c-8880-f76fc2d2a9bb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881dd3e8ca191bf84b9cc769c8e846c152f9ad7f469e4427002254e2cb7b68df,PodSandboxId:53daf6f331dc11caec7e319b2e0e1d7d90feb2ea32ead44369bdba115bda9776,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963526065761939,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c22a6e7c8785486b291da7b93159617,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94f96d5c169a8e81bd28b96c8f06b5aad7264b27f5af8cbff1efd443c250d2e,PodSandboxId:46d7177c94efa372b28fab29aa3b74e62e75fa40225a7124f7f226a1ef213c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963526016815685,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bae575fecaf12208b623dd35e7c88b4f7b71a2552366e495103d134160e9a9,PodSandboxId:858897f5bdf7b4100e9cf511241850678dccf7b17f893ffe9ae203017fd7c2e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963526012860464,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ef3e1a8cc24cb25fbef1929ff100cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195ec5617f33e9a305c0aed8715efa8f6a9dff5ccf8e7048450a4deb2876fdc0,PodSandboxId:16813450eb8a8992ef9278b776f981a88ebcec8713d93b3fc5cf2a3a5d561cf6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963525948309522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ae8907a140762fbc0f45d1cffb624,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7a88c36402647c6d49b743bc7067f62fc094cebffedd5a342d34603edb45ae,PodSandboxId:11b530f514ff60ea17a5981f4f7734ea771a8480105aec8407c552dece3f6554,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724963239022892416,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09cf39c0-5410-4e41-96db-96854dbe7ad6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.084928576Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56eb4058-ee00-424a-898d-e10018d640ef name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.085076254Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56eb4058-ee00-424a-898d-e10018d640ef name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.086519114Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd8ab5bb-b756-4f51-bfe9-ebb9e23e6990 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.086855913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964372086835468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd8ab5bb-b756-4f51-bfe9-ebb9e23e6990 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.087378210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f301fd1-8a52-4755-834d-ad9eb0d71812 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.087440040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f301fd1-8a52-4755-834d-ad9eb0d71812 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:12 no-preload-397724 crio[709]: time="2024-08-29 20:46:12.087681336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c24672f1f33fab0a363ffe9b15191c033aad911504fe4e0cbbb0c54723fc61d,PodSandboxId:c0f16c27a76d494a7575865408ce9d80ab96b703ee14687cc914a0ad479ebdb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724963538528655210,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6c02d6-7a39-4fea-80b4-4ba02904232c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2452b7e2499acfc2dda22691859915ef9b81d7ffa9020ad6044fe06095263,PodSandboxId:b6a71d1267b1556242a109dff2d1c47914b3e44249c44e2e79491bfc580ab454,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537998920202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dw2r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6edda799-e2d6-402b-b4cd-7e54b2b89ca5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17dc0c760cefbdf280368ae8c71df7387ad9525e6132df6f10fc3d87b5febc4,PodSandboxId:9db6339227b53906ea9bd813539fc15515997e0afaa8bffd10f0627a9c54b0d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724963537921508774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-crgtj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4
8571a8-18ae-4737-a05b-4a77736aee35,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c37f561b080360811f0545e64ed61159766fde4836a414f54eb67de63ca057,PodSandboxId:bd7adf539fa384b63c54bfed8c530511359adc9cd0c7c18ae12e7f6227c93a6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724963537480521948,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4x4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb76dc5a-016a-416c-8880-f76fc2d2a9bb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881dd3e8ca191bf84b9cc769c8e846c152f9ad7f469e4427002254e2cb7b68df,PodSandboxId:53daf6f331dc11caec7e319b2e0e1d7d90feb2ea32ead44369bdba115bda9776,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724963526065761939,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c22a6e7c8785486b291da7b93159617,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e94f96d5c169a8e81bd28b96c8f06b5aad7264b27f5af8cbff1efd443c250d2e,PodSandboxId:46d7177c94efa372b28fab29aa3b74e62e75fa40225a7124f7f226a1ef213c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724963526016815685,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bae575fecaf12208b623dd35e7c88b4f7b71a2552366e495103d134160e9a9,PodSandboxId:858897f5bdf7b4100e9cf511241850678dccf7b17f893ffe9ae203017fd7c2e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724963526012860464,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ef3e1a8cc24cb25fbef1929ff100cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195ec5617f33e9a305c0aed8715efa8f6a9dff5ccf8e7048450a4deb2876fdc0,PodSandboxId:16813450eb8a8992ef9278b776f981a88ebcec8713d93b3fc5cf2a3a5d561cf6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724963525948309522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ae8907a140762fbc0f45d1cffb624,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7a88c36402647c6d49b743bc7067f62fc094cebffedd5a342d34603edb45ae,PodSandboxId:11b530f514ff60ea17a5981f4f7734ea771a8480105aec8407c552dece3f6554,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724963239022892416,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-397724,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efa7e4e455814b5713ed169940ea21d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f301fd1-8a52-4755-834d-ad9eb0d71812 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c24672f1f33f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   c0f16c27a76d4       storage-provisioner
	e6f2452b7e249       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   b6a71d1267b15       coredns-6f6b679f8f-dw2r7
	e17dc0c760cef       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   9db6339227b53       coredns-6f6b679f8f-crgtj
	d9c37f561b080       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   13 minutes ago      Running             kube-proxy                0                   bd7adf539fa38       kube-proxy-f4x4j
	881dd3e8ca191       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   53daf6f331dc1       etcd-no-preload-397724
	e94f96d5c169a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Running             kube-apiserver            2                   46d7177c94efa       kube-apiserver-no-preload-397724
	c8bae575fecaf       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   14 minutes ago      Running             kube-controller-manager   2                   858897f5bdf7b       kube-controller-manager-no-preload-397724
	195ec5617f33e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   14 minutes ago      Running             kube-scheduler            2                   16813450eb8a8       kube-scheduler-no-preload-397724
	3a7a88c364026       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   18 minutes ago      Exited              kube-apiserver            1                   11b530f514ff6       kube-apiserver-no-preload-397724
	
	
	==> coredns [e17dc0c760cefbdf280368ae8c71df7387ad9525e6132df6f10fc3d87b5febc4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e6f2452b7e2499acfc2dda22691859915ef9b81d7ffa9020ad6044fe06095263] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-397724
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-397724
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=no-preload-397724
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T20_32_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 20:32:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-397724
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 20:46:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 20:42:34 +0000   Thu, 29 Aug 2024 20:32:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 20:42:34 +0000   Thu, 29 Aug 2024 20:32:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 20:42:34 +0000   Thu, 29 Aug 2024 20:32:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 20:42:34 +0000   Thu, 29 Aug 2024 20:32:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.214
	  Hostname:    no-preload-397724
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d47525907d449e0bf8771dfb0d73935
	  System UUID:                2d475259-07d4-49e0-bf87-71dfb0d73935
	  Boot ID:                    f666bf1d-77bd-4dc3-9631-492300f9bc26
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-crgtj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-dw2r7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-397724                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-397724             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-397724    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-f4x4j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-397724             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-nxdc5              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-397724 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-397724 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-397724 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-397724 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-397724 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-397724 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-397724 event: Registered Node no-preload-397724 in Controller
	
	
	==> dmesg <==
	[  +0.054526] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042457] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.120725] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.707964] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.652959] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.838431] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.061804] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056790] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.168554] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.137221] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[Aug29 20:27] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[ +15.599465] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.059865] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.465393] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +4.713121] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.732141] kauditd_printk_skb: 86 callbacks suppressed
	[Aug29 20:32] systemd-fstab-generator[3083]: Ignoring "noauto" option for root device
	[  +0.064825] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.486867] systemd-fstab-generator[3401]: Ignoring "noauto" option for root device
	[  +0.094537] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.314793] systemd-fstab-generator[3533]: Ignoring "noauto" option for root device
	[  +0.110003] kauditd_printk_skb: 12 callbacks suppressed
	[Aug29 20:33] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [881dd3e8ca191bf84b9cc769c8e846c152f9ad7f469e4427002254e2cb7b68df] <==
	{"level":"info","ts":"2024-08-29T20:32:06.562013Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"3dc9612c0afb3334","initial-advertise-peer-urls":["https://192.168.50.214:2380"],"listen-peer-urls":["https://192.168.50.214:2380"],"advertise-client-urls":["https://192.168.50.214:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.214:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T20:32:06.562085Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T20:32:07.062106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-29T20:32:07.062266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-29T20:32:07.062303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 received MsgPreVoteResp from 3dc9612c0afb3334 at term 1"}
	{"level":"info","ts":"2024-08-29T20:32:07.062343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 became candidate at term 2"}
	{"level":"info","ts":"2024-08-29T20:32:07.062367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 received MsgVoteResp from 3dc9612c0afb3334 at term 2"}
	{"level":"info","ts":"2024-08-29T20:32:07.062442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 became leader at term 2"}
	{"level":"info","ts":"2024-08-29T20:32:07.062492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3dc9612c0afb3334 elected leader 3dc9612c0afb3334 at term 2"}
	{"level":"info","ts":"2024-08-29T20:32:07.069304Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3dc9612c0afb3334","local-member-attributes":"{Name:no-preload-397724 ClientURLs:[https://192.168.50.214:2379]}","request-path":"/0/members/3dc9612c0afb3334/attributes","cluster-id":"6c00e6cf347ec681","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T20:32:07.069589Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T20:32:07.070078Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T20:32:07.070851Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T20:32:07.073871Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T20:32:07.074021Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T20:32:07.077071Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T20:32:07.077595Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T20:32:07.071097Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:32:07.084642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.214:2379"}
	{"level":"info","ts":"2024-08-29T20:32:07.087055Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6c00e6cf347ec681","local-member-id":"3dc9612c0afb3334","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:32:07.087155Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:32:07.087207Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T20:42:07.219187Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":718}
	{"level":"info","ts":"2024-08-29T20:42:07.229145Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":718,"took":"9.141968ms","hash":1884830167,"current-db-size-bytes":2154496,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2154496,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-29T20:42:07.229279Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1884830167,"revision":718,"compact-revision":-1}
	
	
	==> kernel <==
	 20:46:12 up 19 min,  0 users,  load average: 0.42, 0.18, 0.12
	Linux no-preload-397724 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3a7a88c36402647c6d49b743bc7067f62fc094cebffedd5a342d34603edb45ae] <==
	W0829 20:31:58.995673       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.006575       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.023061       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.098656       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.133120       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.194039       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.212638       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.256050       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.281570       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.342157       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.345647       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.496642       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.521302       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.580310       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.587199       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.695403       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.696690       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:31:59.767406       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:32:00.011051       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:32:00.021672       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:32:00.283029       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:32:00.308398       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:32:02.763200       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:32:03.130892       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 20:32:03.398040       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e94f96d5c169a8e81bd28b96c8f06b5aad7264b27f5af8cbff1efd443c250d2e] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 20:42:09.881125       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:42:09.881185       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 20:42:09.882251       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:42:09.882809       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 20:43:09.883375       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:43:09.883483       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0829 20:43:09.883389       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:43:09.883564       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0829 20:43:09.884832       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:43:09.884860       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 20:45:09.885636       1 handler_proxy.go:99] no RequestInfo found in the context
	W0829 20:45:09.886084       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 20:45:09.886242       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0829 20:45:09.886365       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0829 20:45:09.887489       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 20:45:09.887582       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c8bae575fecaf12208b623dd35e7c88b4f7b71a2552366e495103d134160e9a9] <==
	E0829 20:40:45.831620       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:40:46.374867       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:41:15.838245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:41:16.385013       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:41:45.844825       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:41:46.394586       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:42:15.851146       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:42:16.407138       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 20:42:34.250025       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-397724"
	E0829 20:42:45.857797       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:42:46.416127       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:43:15.865098       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:43:16.425363       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 20:43:26.522399       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="202.914µs"
	I0829 20:43:37.523266       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="343.459µs"
	E0829 20:43:45.872583       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:43:46.434335       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:44:15.879494       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:44:16.443915       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:44:45.886811       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:44:46.452794       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:45:15.892932       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:45:16.460590       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 20:45:45.899647       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 20:45:46.469821       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d9c37f561b080360811f0545e64ed61159766fde4836a414f54eb67de63ca057] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 20:32:18.171233       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 20:32:18.311653       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.214"]
	E0829 20:32:18.311788       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 20:32:18.497065       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 20:32:18.497195       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 20:32:18.497244       1 server_linux.go:169] "Using iptables Proxier"
	I0829 20:32:18.511174       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 20:32:18.511395       1 server.go:483] "Version info" version="v1.31.0"
	I0829 20:32:18.511405       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 20:32:18.513244       1 config.go:197] "Starting service config controller"
	I0829 20:32:18.513274       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 20:32:18.513292       1 config.go:104] "Starting endpoint slice config controller"
	I0829 20:32:18.513296       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 20:32:18.527349       1 config.go:326] "Starting node config controller"
	I0829 20:32:18.527361       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 20:32:18.615385       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 20:32:18.615425       1 shared_informer.go:320] Caches are synced for service config
	I0829 20:32:18.629252       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [195ec5617f33e9a305c0aed8715efa8f6a9dff5ccf8e7048450a4deb2876fdc0] <==
	W0829 20:32:08.895194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 20:32:08.895584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:08.895208       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 20:32:08.895650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:08.895779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 20:32:08.895900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:08.896104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0829 20:32:08.896135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:08.896315       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 20:32:08.896344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:09.719664       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 20:32:09.719726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:09.808753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 20:32:09.808802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:09.957135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 20:32:09.957185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:09.974709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0829 20:32:09.974774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:10.001155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 20:32:10.001203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:10.005297       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 20:32:10.005384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 20:32:10.173290       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 20:32:10.173441       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 20:32:12.773291       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 20:45:11 no-preload-397724 kubelet[3408]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 20:45:11 no-preload-397724 kubelet[3408]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 20:45:11 no-preload-397724 kubelet[3408]: E0829 20:45:11.756318    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964311755768442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:11 no-preload-397724 kubelet[3408]: E0829 20:45:11.756363    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964311755768442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:21 no-preload-397724 kubelet[3408]: E0829 20:45:21.759988    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964321759197923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:21 no-preload-397724 kubelet[3408]: E0829 20:45:21.760044    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964321759197923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:22 no-preload-397724 kubelet[3408]: E0829 20:45:22.505892    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nxdc5" podUID="6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a"
	Aug 29 20:45:31 no-preload-397724 kubelet[3408]: E0829 20:45:31.762705    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964331762110928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:31 no-preload-397724 kubelet[3408]: E0829 20:45:31.762789    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964331762110928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:36 no-preload-397724 kubelet[3408]: E0829 20:45:36.505353    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nxdc5" podUID="6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a"
	Aug 29 20:45:41 no-preload-397724 kubelet[3408]: E0829 20:45:41.765453    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964341765017190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:41 no-preload-397724 kubelet[3408]: E0829 20:45:41.765482    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964341765017190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:47 no-preload-397724 kubelet[3408]: E0829 20:45:47.505105    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nxdc5" podUID="6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a"
	Aug 29 20:45:51 no-preload-397724 kubelet[3408]: E0829 20:45:51.767243    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964351766808172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:45:51 no-preload-397724 kubelet[3408]: E0829 20:45:51.767274    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964351766808172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:46:01 no-preload-397724 kubelet[3408]: E0829 20:46:01.505575    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nxdc5" podUID="6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a"
	Aug 29 20:46:01 no-preload-397724 kubelet[3408]: E0829 20:46:01.769674    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964361769195784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:46:01 no-preload-397724 kubelet[3408]: E0829 20:46:01.769888    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964361769195784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:46:11 no-preload-397724 kubelet[3408]: E0829 20:46:11.538510    3408 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 20:46:11 no-preload-397724 kubelet[3408]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 20:46:11 no-preload-397724 kubelet[3408]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 20:46:11 no-preload-397724 kubelet[3408]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 20:46:11 no-preload-397724 kubelet[3408]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 20:46:11 no-preload-397724 kubelet[3408]: E0829 20:46:11.772530    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964371772107924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 20:46:11 no-preload-397724 kubelet[3408]: E0829 20:46:11.772566    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964371772107924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [8c24672f1f33fab0a363ffe9b15191c033aad911504fe4e0cbbb0c54723fc61d] <==
	I0829 20:32:18.658603       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 20:32:18.689186       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 20:32:18.689581       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 20:32:18.717028       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 20:32:18.717478       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-397724_b7275b61-9d3a-4e91-9372-220d5ce9c8ee!
	I0829 20:32:18.717707       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"866499d7-741c-49ff-95ed-e7e5f962ef68", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-397724_b7275b61-9d3a-4e91-9372-220d5ce9c8ee became leader
	I0829 20:32:18.821054       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-397724_b7275b61-9d3a-4e91-9372-220d5ce9c8ee!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-397724 -n no-preload-397724
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-397724 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-nxdc5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-397724 describe pod metrics-server-6867b74b74-nxdc5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-397724 describe pod metrics-server-6867b74b74-nxdc5: exit status 1 (73.806999ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-nxdc5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-397724 describe pod metrics-server-6867b74b74-nxdc5: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (287.28s)
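The NotFound above shows the metrics-server pod was deleted between the non-running-pods listing and the describe call. When reproducing this by hand it is less racy to query by label than by a generated pod name; a sketch, assuming the metrics-server addon's usual k8s-app=metrics-server label (an assumption, not taken from this log):

	# list metrics-server pods by label instead of a generated pod name
	kubectl --context no-preload-397724 -n kube-system get pods -l k8s-app=metrics-server
	# check the rollout state of the deployment behind them
	kubectl --context no-preload-397724 -n kube-system rollout status deploy/metrics-server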

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (157.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.116:8443: connect: connection refused
[the helpers_test.go:329 WARNING above repeated 11 more times]
E0829 20:43:45.975015   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
[the helpers_test.go:329 WARNING above repeated a further 55 times]
E0829 20:44:41.016064   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
[the helpers_test.go:329 WARNING above repeated a further 88 times before the 9m0s deadline expired]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-032002 -n old-k8s-version-032002
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-032002 -n old-k8s-version-032002: exit status 2 (227.61682ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-032002" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-032002 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-032002 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.804µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-032002 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
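The 1.804µs duration on that describe call is the giveaway: the test threads one shared context through each step, and by this point its 9m deadline has already expired, so the command is aborted before kubectl even launches. A small sketch of that behavior, assuming os/exec's documented context handling (the expired deadline is simulated here with cancel()):

    // Sketch: running a command with an already-done context fails in
    // microseconds, matching the "(1.804µs)" above.
    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	ctx, cancel := context.WithCancel(context.Background())
    	cancel() // context is already done before the command starts

    	start := time.Now()
    	err := exec.CommandContext(ctx, "kubectl", "describe", "deploy/dashboard-metrics-scraper", "-n", "kubernetes-dashboard").Run()
    	fmt.Printf("err=%v after %s\n", err, time.Since(start))
    }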
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002: exit status 2 (215.670072ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-032002 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-032002 logs -n 25: (1.631728361s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-397724                                   | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-388383            | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC | 29 Aug 24 20:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:18 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-695305             | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-695305                  | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-695305 --memory=2200 --alsologtostderr   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:19 UTC | 29 Aug 24 20:20 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-695305 image list                           | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| delete  | -p newest-cni-695305                                   | newest-cni-695305            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:20 UTC |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC | 29 Aug 24 20:21 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-032002        | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-397724                  | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-397724                                   | no-preload-397724            | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-388383                 | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-388383                                  | embed-certs-388383           | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-145096  | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC | 29 Aug 24 20:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:21 UTC |                     |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-032002             | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC | 29 Aug 24 20:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-032002                              | old-k8s-version-032002       | jenkins | v1.33.1 | 29 Aug 24 20:22 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-145096       | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-145096 | jenkins | v1.33.1 | 29 Aug 24 20:24 UTC | 29 Aug 24 20:31 UTC |
	|         | default-k8s-diff-port-145096                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 20:24:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 20:24:16.618808   68084 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:24:16.619043   68084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:24:16.619051   68084 out.go:358] Setting ErrFile to fd 2...
	I0829 20:24:16.619055   68084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:24:16.619206   68084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:24:16.619741   68084 out.go:352] Setting JSON to false
	I0829 20:24:16.620649   68084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7604,"bootTime":1724955453,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 20:24:16.620702   68084 start.go:139] virtualization: kvm guest
	I0829 20:24:16.622891   68084 out.go:177] * [default-k8s-diff-port-145096] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 20:24:16.624228   68084 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 20:24:16.624256   68084 notify.go:220] Checking for updates...
	I0829 20:24:16.627123   68084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 20:24:16.628611   68084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:24:16.629858   68084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:24:16.631013   68084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 20:24:16.632116   68084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 20:24:16.633630   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:24:16.634042   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:24:16.634080   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:24:16.648879   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0829 20:24:16.649315   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:24:16.649875   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:24:16.649893   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:24:16.650274   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:24:16.650504   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:24:16.650776   68084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 20:24:16.651053   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:24:16.651111   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:24:16.665964   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0829 20:24:16.666402   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:24:16.666918   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:24:16.666937   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:24:16.667250   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:24:16.667435   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:24:16.698712   68084 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 20:24:16.700010   68084 start.go:297] selected driver: kvm2
	I0829 20:24:16.700023   68084 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:24:16.700131   68084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 20:24:16.700915   68084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:24:16.700998   68084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 20:24:16.715940   68084 install.go:137] /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0829 20:24:16.716321   68084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:24:16.716388   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:24:16.716405   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:24:16.716452   68084 start.go:340] cluster config:
	{Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:24:16.716563   68084 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 20:24:16.718175   68084 out.go:177] * Starting "default-k8s-diff-port-145096" primary control-plane node in "default-k8s-diff-port-145096" cluster
	I0829 20:24:16.258820   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:16.719204   68084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:24:16.719231   68084 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 20:24:16.719237   68084 cache.go:56] Caching tarball of preloaded images
	I0829 20:24:16.719296   68084 preload.go:172] Found /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 20:24:16.719305   68084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 20:24:16.719385   68084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/config.json ...
	I0829 20:24:16.719549   68084 start.go:360] acquireMachinesLock for default-k8s-diff-port-145096: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:24:22.338805   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:25.410778   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:31.490844   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:34.562885   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:40.642793   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:43.714939   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:49.794765   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:52.866858   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:24:58.946771   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:02.018832   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:08.098829   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:11.170833   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:17.250794   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:20.322926   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:26.402827   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:29.474844   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:35.554771   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
	I0829 20:25:38.626850   66841 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.214:22: connect: no route to host
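The long run of "no route to host" errors above is libmachine repeatedly probing the restarted VM's SSH port before the guest network is up. The probe reduces to a TCP dial with a timeout; a minimal sketch of that check (the address is taken from the log, the helper name is hypothetical):

    // Sketch of the TCP probe behind the "dial tcp ...:22: connect: no route
    // to host" lines: try the guest's SSH port and report the raw dial error.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func sshReachable(addr string, timeout time.Duration) error {
    	conn, err := net.DialTimeout("tcp", addr, timeout)
    	if err != nil {
    		return err // e.g. "dial tcp 192.168.50.214:22: connect: no route to host"
    	}
    	conn.Close()
    	return nil
    }

    func main() {
    	if err := sshReachable("192.168.50.214:22", 10*time.Second); err != nil {
    		fmt.Println("not reachable yet:", err)
    	}
    }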
	I0829 20:25:41.630257   66989 start.go:364] duration metric: took 4m26.950412835s to acquireMachinesLock for "embed-certs-388383"
	I0829 20:25:41.630308   66989 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:25:41.630316   66989 fix.go:54] fixHost starting: 
	I0829 20:25:41.630791   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:25:41.630828   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:25:41.646005   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32873
	I0829 20:25:41.646405   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:25:41.646932   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:25:41.646959   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:25:41.647308   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:25:41.647525   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:25:41.647686   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:25:41.649457   66989 fix.go:112] recreateIfNeeded on embed-certs-388383: state=Stopped err=<nil>
	I0829 20:25:41.649491   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	W0829 20:25:41.649639   66989 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:25:41.651109   66989 out.go:177] * Restarting existing kvm2 VM for "embed-certs-388383" ...
	I0829 20:25:41.627651   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:25:41.627705   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:25:41.628067   66841 buildroot.go:166] provisioning hostname "no-preload-397724"
	I0829 20:25:41.628089   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:25:41.628259   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:25:41.630106   66841 machine.go:96] duration metric: took 4m35.46951337s to provisionDockerMachine
	I0829 20:25:41.630148   66841 fix.go:56] duration metric: took 4m35.494271139s for fixHost
	I0829 20:25:41.630159   66841 start.go:83] releasing machines lock for "no-preload-397724", held for 4m35.494325078s
	W0829 20:25:41.630182   66841 start.go:714] error starting host: provision: host is not running
	W0829 20:25:41.630284   66841 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0829 20:25:41.630295   66841 start.go:729] Will try again in 5 seconds ...
	I0829 20:25:41.652159   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Start
	I0829 20:25:41.652318   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring networks are active...
	I0829 20:25:41.653011   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring network default is active
	I0829 20:25:41.653426   66989 main.go:141] libmachine: (embed-certs-388383) Ensuring network mk-embed-certs-388383 is active
	I0829 20:25:41.653824   66989 main.go:141] libmachine: (embed-certs-388383) Getting domain xml...
	I0829 20:25:41.654765   66989 main.go:141] libmachine: (embed-certs-388383) Creating domain...
	I0829 20:25:42.860512   66989 main.go:141] libmachine: (embed-certs-388383) Waiting to get IP...
	I0829 20:25:42.861297   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:42.861661   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:42.861739   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:42.861649   68412 retry.go:31] will retry after 207.172422ms: waiting for machine to come up
	I0829 20:25:43.070026   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.070414   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.070445   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.070368   68412 retry.go:31] will retry after 336.815982ms: waiting for machine to come up
	I0829 20:25:43.408817   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.409144   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.409182   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.409117   68412 retry.go:31] will retry after 330.159156ms: waiting for machine to come up
	I0829 20:25:43.740518   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:43.741039   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:43.741065   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:43.741002   68412 retry.go:31] will retry after 528.906592ms: waiting for machine to come up
	I0829 20:25:44.271695   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:44.272286   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:44.272344   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:44.272280   68412 retry.go:31] will retry after 616.92568ms: waiting for machine to come up
	I0829 20:25:46.631383   66841 start.go:360] acquireMachinesLock for no-preload-397724: {Name:mk7bbdf93fa2a8e913f7adca47ba2f24c1fa817a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 20:25:44.891133   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:44.891535   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:44.891566   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:44.891499   68412 retry.go:31] will retry after 907.330558ms: waiting for machine to come up
	I0829 20:25:45.800480   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:45.800858   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:45.800885   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:45.800840   68412 retry.go:31] will retry after 1.189775318s: waiting for machine to come up
	I0829 20:25:46.992687   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:46.993155   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:46.993189   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:46.993142   68412 retry.go:31] will retry after 1.467244635s: waiting for machine to come up
	I0829 20:25:48.462770   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:48.463201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:48.463226   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:48.463173   68412 retry.go:31] will retry after 1.602764839s: waiting for machine to come up
	I0829 20:25:50.067082   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:50.067608   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:50.067638   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:50.067543   68412 retry.go:31] will retry after 1.562244323s: waiting for machine to come up
	I0829 20:25:51.632201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:51.632705   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:51.632731   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:51.632650   68412 retry.go:31] will retry after 1.747220365s: waiting for machine to come up
	I0829 20:25:53.382010   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:53.382463   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:53.382527   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:53.382454   68412 retry.go:31] will retry after 3.446054845s: waiting for machine to come up
	I0829 20:25:56.830511   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:25:56.830954   66989 main.go:141] libmachine: (embed-certs-388383) DBG | unable to find current IP address of domain embed-certs-388383 in network mk-embed-certs-388383
	I0829 20:25:56.830988   66989 main.go:141] libmachine: (embed-certs-388383) DBG | I0829 20:25:56.830908   68412 retry.go:31] will retry after 4.53995219s: waiting for machine to come up
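The retry.go lines show the waiting-for-IP loop backing off between attempts (207ms, 336ms, 330ms, ... growing to several seconds) with some randomization. A minimal sketch of an exponential backoff with jitter in that spirit; the doubling factor and 5s cap are assumptions, not minikube's exact tuning:

    // Sketch of the backoff pattern visible in the retry lines above: each
    // failed attempt waits a bit longer (with jitter) before the next try.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	wait := base
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		jittered := wait + time.Duration(rand.Int63n(int64(wait)/2+1))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
    		time.Sleep(jittered)
    		if wait *= 2; wait > 5*time.Second {
    			wait = 5 * time.Second
    		}
    	}
    	return errors.New("machine never came up")
    }

    func main() {
    	_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
    		return errors.New("unable to find current IP address")
    	})
    }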
	I0829 20:26:02.603329   67607 start.go:364] duration metric: took 3m23.680319578s to acquireMachinesLock for "old-k8s-version-032002"
	I0829 20:26:02.603393   67607 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:02.603404   67607 fix.go:54] fixHost starting: 
	I0829 20:26:02.603837   67607 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:02.603884   67607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:02.621398   67607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35977
	I0829 20:26:02.621840   67607 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:02.622425   67607 main.go:141] libmachine: Using API Version  1
	I0829 20:26:02.622460   67607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:02.622810   67607 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:02.623040   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:02.623201   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetState
	I0829 20:26:02.624854   67607 fix.go:112] recreateIfNeeded on old-k8s-version-032002: state=Stopped err=<nil>
	I0829 20:26:02.624880   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	W0829 20:26:02.625020   67607 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:02.627161   67607 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-032002" ...
	I0829 20:26:02.628419   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .Start
	I0829 20:26:02.628578   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring networks are active...
	I0829 20:26:02.629339   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network default is active
	I0829 20:26:02.629732   67607 main.go:141] libmachine: (old-k8s-version-032002) Ensuring network mk-old-k8s-version-032002 is active
	I0829 20:26:02.630188   67607 main.go:141] libmachine: (old-k8s-version-032002) Getting domain xml...
	I0829 20:26:02.630924   67607 main.go:141] libmachine: (old-k8s-version-032002) Creating domain...
	I0829 20:26:01.375542   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.375928   66989 main.go:141] libmachine: (embed-certs-388383) Found IP for machine: 192.168.61.202
	I0829 20:26:01.375951   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has current primary IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.375974   66989 main.go:141] libmachine: (embed-certs-388383) Reserving static IP address...
	I0829 20:26:01.376364   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "embed-certs-388383", mac: "52:54:00:6c:5a:0c", ip: "192.168.61.202"} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.376398   66989 main.go:141] libmachine: (embed-certs-388383) DBG | skip adding static IP to network mk-embed-certs-388383 - found existing host DHCP lease matching {name: "embed-certs-388383", mac: "52:54:00:6c:5a:0c", ip: "192.168.61.202"}
	I0829 20:26:01.376411   66989 main.go:141] libmachine: (embed-certs-388383) Reserved static IP address: 192.168.61.202
	I0829 20:26:01.376428   66989 main.go:141] libmachine: (embed-certs-388383) Waiting for SSH to be available...
	I0829 20:26:01.376445   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Getting to WaitForSSH function...
	I0829 20:26:01.378600   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.378899   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.378937   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.379065   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Using SSH client type: external
	I0829 20:26:01.379088   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa (-rw-------)
	I0829 20:26:01.379118   66989 main.go:141] libmachine: (embed-certs-388383) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:01.379132   66989 main.go:141] libmachine: (embed-certs-388383) DBG | About to run SSH command:
	I0829 20:26:01.379141   66989 main.go:141] libmachine: (embed-certs-388383) DBG | exit 0
	I0829 20:26:01.498736   66989 main.go:141] libmachine: (embed-certs-388383) DBG | SSH cmd err, output: <nil>: 
	I0829 20:26:01.499103   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetConfigRaw
	I0829 20:26:01.499700   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:01.502022   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.502332   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.502362   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.502586   66989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/config.json ...
	I0829 20:26:01.502778   66989 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:01.502795   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:01.502980   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.505156   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.505452   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.505473   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.505590   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.505739   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.505902   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.506038   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.506183   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.506366   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.506376   66989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:01.602691   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:01.602721   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.603002   66989 buildroot.go:166] provisioning hostname "embed-certs-388383"
	I0829 20:26:01.603033   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.603232   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.605841   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.606170   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.606201   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.606333   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.606505   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.606672   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.606786   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.606950   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.607121   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.607144   66989 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-388383 && echo "embed-certs-388383" | sudo tee /etc/hostname
	I0829 20:26:01.717669   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-388383
	
	I0829 20:26:01.717709   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.720400   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.720705   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.720733   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.720863   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.721097   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.721280   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.721446   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.721585   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:01.721811   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:01.721842   66989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-388383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-388383/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-388383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:01.827800   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
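The shell fragment above is the idempotent /etc/hosts fixup run over SSH after setting the hostname: rewrite an existing 127.0.1.1 entry if one is present, append one otherwise, and do nothing if the name is already mapped. A hedged sketch of templating that script; the shell text mirrors the log, but the Go wrapper and function name are illustrative, not minikube's actual source:

    // Sketch: build the idempotent /etc/hosts fixup for a given hostname.
    package main

    import "fmt"

    func setHostnameCmd(name string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, name, name, name)
    }

    func main() {
    	fmt.Println(setHostnameCmd("embed-certs-388383"))
    }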
	I0829 20:26:01.827835   66989 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:01.827869   66989 buildroot.go:174] setting up certificates
	I0829 20:26:01.827882   66989 provision.go:84] configureAuth start
	I0829 20:26:01.827894   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetMachineName
	I0829 20:26:01.828214   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:01.830619   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.831150   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.831184   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.831339   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.833642   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.833961   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.833987   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.834161   66989 provision.go:143] copyHostCerts
	I0829 20:26:01.834217   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:01.834241   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:01.834322   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:01.834445   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:01.834457   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:01.834491   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:01.834608   66989 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:01.834621   66989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:01.834660   66989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:01.834726   66989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.embed-certs-388383 san=[127.0.0.1 192.168.61.202 embed-certs-388383 localhost minikube]
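
The san=[...] list above is every IP and DNS name the generated server certificate must cover. A sketch of issuing such a SAN-bearing server cert with Go's crypto/x509, assuming an already-loaded CA; this is illustrative, not minikube's provision code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate with the given CA, carrying the
// same style of SAN list the provision step logs (IPs plus DNS names).
// Sketch only: error handling is minimal and this is not minikube's code.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, ips []net.IP, dns []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. 127.0.0.1, 192.168.61.202
		DNSNames:     dns, // e.g. embed-certs-388383, localhost, minikube
	}
	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
}

func main() {
	// Hypothetical self-signed CA, only to make the sketch runnable.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now(), NotAfter: time.Now().Add(24 * time.Hour),
		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	der, err := issueServerCert(caCert, caKey, "jenkins.embed-certs-388383",
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.202")},
		[]string{"embed-certs-388383", "localhost", "minikube"})
	fmt.Println(len(der), err)
}
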
	I0829 20:26:01.992735   66989 provision.go:177] copyRemoteCerts
	I0829 20:26:01.992794   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:01.992819   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:01.995463   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.995835   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:01.995862   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:01.996006   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:01.996179   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:01.996333   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:01.996460   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.077017   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:02.105498   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 20:26:02.133974   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 20:26:02.161330   66989 provision.go:87] duration metric: took 333.435119ms to configureAuth
	I0829 20:26:02.161362   66989 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:02.161579   66989 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:02.161707   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.164373   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.164696   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.164724   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.164909   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.165111   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.165276   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.165402   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.165535   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:02.165697   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:02.165711   66989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:02.377994   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:02.378022   66989 machine.go:96] duration metric: took 875.231112ms to provisionDockerMachine
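
The sysconfig snippet just written marks the cluster's service CIDR (10.96.0.0/12) as an insecure-registry range, so in-cluster registry services can be pulled from without TLS, then restarts CRI-O to pick the option up. A sketch of composing that command (the helper name is hypothetical):

package main

import "fmt"

// crioOptionsCmd rebuilds the shell pipeline from the log above: persist
// CRIO_MINIKUBE_OPTIONS with the service CIDR as an insecure registry range
// and restart CRI-O. Sketch only.
func crioOptionsCmd(serviceCIDR string) string {
	return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", serviceCIDR)
}

func main() {
	fmt.Println(crioOptionsCmd("10.96.0.0/12"))
}
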
	I0829 20:26:02.378037   66989 start.go:293] postStartSetup for "embed-certs-388383" (driver="kvm2")
	I0829 20:26:02.378053   66989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:02.378078   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.378404   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:02.378432   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.380920   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.381329   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.381358   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.381564   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.381797   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.381975   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.382124   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.461053   66989 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:02.465391   66989 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:02.465417   66989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:02.465479   66989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:02.465550   66989 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:02.465635   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:02.474909   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:02.500025   66989 start.go:296] duration metric: took 121.973853ms for postStartSetup
	I0829 20:26:02.500064   66989 fix.go:56] duration metric: took 20.86974885s for fixHost
	I0829 20:26:02.500082   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.502976   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.503380   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.503411   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.503599   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.503808   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.503976   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.504126   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.504283   66989 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:02.504459   66989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0829 20:26:02.504469   66989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:02.603161   66989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963162.568310162
	
	I0829 20:26:02.603181   66989 fix.go:216] guest clock: 1724963162.568310162
	I0829 20:26:02.603187   66989 fix.go:229] Guest: 2024-08-29 20:26:02.568310162 +0000 UTC Remote: 2024-08-29 20:26:02.500067292 +0000 UTC m=+288.185978445 (delta=68.24287ms)
	I0829 20:26:02.603210   66989 fix.go:200] guest clock delta is within tolerance: 68.24287ms
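
The guest-clock check runs date +%s.%N in the VM, compares the result with the host clock at the moment the command completed, and skips any resync when the delta is small; here the 68ms skew passes. A sketch of that tolerance test, with the 2s threshold assumed purely for illustration (the log only shows 68ms being accepted):

package main

import (
	"fmt"
	"math"
	"time"
)

// withinTolerance reports whether the guest/host clock delta is small enough
// to skip a resync. The 2s threshold is an assumption for illustration; the
// log above only shows that a 68ms delta was accepted.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	return math.Abs(float64(guest.Sub(host))) <= float64(tol)
}

func main() {
	guest := time.Unix(1724963162, 568310162) // parsed from `date +%s.%N`
	host := guest.Add(-68240870 * time.Nanosecond)
	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true
}
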
	I0829 20:26:02.603216   66989 start.go:83] releasing machines lock for "embed-certs-388383", held for 20.972921408s
	I0829 20:26:02.603248   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.603532   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:02.606426   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.606804   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.606834   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.607021   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607527   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607694   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:02.607770   66989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:02.607809   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.607878   66989 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:02.607896   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:02.610239   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610264   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610657   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.610685   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610723   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:02.610742   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:02.610844   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.611014   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.611014   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:02.611145   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:02.611208   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.611268   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:02.611341   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.611399   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:02.712435   66989 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:02.718614   66989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:02.865138   66989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:02.871510   66989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:02.871593   66989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:02.887316   66989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:02.887340   66989 start.go:495] detecting cgroup driver to use...
	I0829 20:26:02.887394   66989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:02.905024   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:02.918922   66989 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:02.918986   66989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:02.932660   66989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:02.946679   66989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:03.056273   66989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:03.216885   66989 docker.go:233] disabling docker service ...
	I0829 20:26:03.216959   66989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:03.231363   66989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:03.245609   66989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:03.368087   66989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:03.493947   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:03.508803   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:03.527542   66989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:26:03.527607   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.538301   66989 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:03.538370   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.549672   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.562203   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.573572   66989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:03.585031   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.596778   66989 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:03.619405   66989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
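
The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged port 0 via default_sysctls. A sketch reproducing those edits as data (an illustrative helper, not minikube's code):

package main

import "fmt"

// crioConfigEdits reproduces the sed edits from the log: point CRI-O at the
// desired pause image, switch the cgroup manager, and pin conmon to the pod
// cgroup. Sketch only; the real edits run over SSH.
func crioConfigEdits(pauseImage, cgroupMgr string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupMgr, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
}

func main() {
	for _, c := range crioConfigEdits("registry.k8s.io/pause:3.10", "cgroupfs") {
		fmt.Println(c)
	}
}
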
	I0829 20:26:03.630337   66989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:03.640492   66989 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:03.640568   66989 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:03.657931   66989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
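
The exit status 255 a few lines up is the expected failure mode when br_netfilter is not loaded: the sysctl key simply does not exist yet, so the fallback is modprobe followed by enabling IPv4 forwarding. A local-exec sketch of that check-then-load pattern (minikube runs the same commands over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBrNetfilter mirrors the pattern above: probe the sysctl key and, only
// when that fails, load the module and turn on IPv4 forwarding. Sketch only.
func ensureBrNetfilter() error {
	if exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run() == nil {
		return nil // key exists, so br_netfilter is already loaded
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %w", err)
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() { fmt.Println(ensureBrNetfilter()) }
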
	I0829 20:26:03.673756   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:03.792856   66989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:03.880493   66989 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:03.880551   66989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:03.885793   66989 start.go:563] Will wait 60s for crictl version
	I0829 20:26:03.885850   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:26:03.889835   66989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:03.928633   66989 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:03.928702   66989 ssh_runner.go:195] Run: crio --version
	I0829 20:26:03.958861   66989 ssh_runner.go:195] Run: crio --version
	I0829 20:26:03.987724   66989 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 20:26:03.989009   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetIP
	I0829 20:26:03.991889   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:03.992308   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:03.992334   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:03.992567   66989 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:03.996945   66989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:04.009353   66989 kubeadm.go:883] updating cluster {Name:embed-certs-388383 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0829 20:26:04.009462   66989 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:26:04.009501   66989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:04.051583   66989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:26:04.051643   66989 ssh_runner.go:195] Run: which lz4
	I0829 20:26:04.055929   66989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:04.060214   66989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:04.060240   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 20:26:03.867691   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting to get IP...
	I0829 20:26:03.868798   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:03.869246   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:03.869318   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:03.869235   68552 retry.go:31] will retry after 220.928648ms: waiting for machine to come up
	I0829 20:26:04.091675   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.092057   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.092084   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.092020   68552 retry.go:31] will retry after 352.781755ms: waiting for machine to come up
	I0829 20:26:04.446766   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.447277   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.447301   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.447224   68552 retry.go:31] will retry after 480.96031ms: waiting for machine to come up
	I0829 20:26:04.929561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:04.930149   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:04.930181   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:04.930051   68552 retry.go:31] will retry after 415.057247ms: waiting for machine to come up
	I0829 20:26:05.346757   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.347224   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.347258   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.347196   68552 retry.go:31] will retry after 609.958508ms: waiting for machine to come up
	I0829 20:26:05.959227   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:05.959774   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:05.959825   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:05.959702   68552 retry.go:31] will retry after 680.801337ms: waiting for machine to come up
	I0829 20:26:06.642811   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:06.643312   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:06.643343   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:06.643269   68552 retry.go:31] will retry after 995.561322ms: waiting for machine to come up
	I0829 20:26:07.640147   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:07.640617   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:07.640652   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:07.640588   68552 retry.go:31] will retry after 1.22436043s: waiting for machine to come up
	I0829 20:26:05.472272   66989 crio.go:462] duration metric: took 1.416373513s to copy over tarball
	I0829 20:26:05.472355   66989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:07.583560   66989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.111164398s)
	I0829 20:26:07.583595   66989 crio.go:469] duration metric: took 2.111297179s to extract the tarball
	I0829 20:26:07.583605   66989 ssh_runner.go:146] rm: /preloaded.tar.lz4
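
Because crictl reported no preloaded images, the ~389 MB preload tarball was copied in and unpacked into /var; the --xattrs flags preserve security.capability so extracted binaries keep their file capabilities. A sketch of that extraction command (built locally here rather than streamed over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the tar invocation in the log: lz4-decompress the
// preload tarball into /var while preserving security xattrs. Sketch only.
func extractPreload(tarball string) *exec.Cmd {
	return exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4").String())
}
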
	I0829 20:26:07.622447   66989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:07.671704   66989 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:26:07.671732   66989 cache_images.go:84] Images are preloaded, skipping loading
	I0829 20:26:07.671742   66989 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.31.0 crio true true} ...
	I0829 20:26:07.671869   66989 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-388383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:07.671958   66989 ssh_runner.go:195] Run: crio config
	I0829 20:26:07.717217   66989 cni.go:84] Creating CNI manager for ""
	I0829 20:26:07.717242   66989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:07.717263   66989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:07.717290   66989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-388383 NodeName:embed-certs-388383 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:26:07.717465   66989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-388383"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:07.717549   66989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:26:07.727174   66989 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:07.727258   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:07.736512   66989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0829 20:26:07.752727   66989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:07.772430   66989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
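
At this point three rendered artifacts sit on the node: the kubelet drop-in (10-kubeadm.conf), the kubelet unit, and the kubeadm config staged as kubeadm.yaml.new. A sketch of rendering the kubelet drop-in shown earlier with text/template; the template and its field names are assumptions for illustration, not minikube's real template:

package main

import (
	"os"
	"text/template"
)

// A hypothetical template for the kubelet drop-in logged earlier; the field
// names here are assumptions, not minikube's actual bootstrapper template.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	if err := t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.0",
		"NodeName":          "embed-certs-388383",
		"NodeIP":            "192.168.61.202",
	}); err != nil {
		panic(err)
	}
}
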
	I0829 20:26:07.793343   66989 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:07.798214   66989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:07.811285   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:07.927025   66989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:07.943741   66989 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383 for IP: 192.168.61.202
	I0829 20:26:07.943765   66989 certs.go:194] generating shared ca certs ...
	I0829 20:26:07.943784   66989 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:07.943984   66989 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:07.944047   66989 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:07.944061   66989 certs.go:256] generating profile certs ...
	I0829 20:26:07.944177   66989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/client.key
	I0829 20:26:07.944254   66989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.key.03b29390
	I0829 20:26:07.944317   66989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.key
	I0829 20:26:07.944494   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:07.944538   66989 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:07.944551   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:07.944581   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:07.944605   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:07.944628   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:07.944670   66989 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:07.945252   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:07.971277   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:08.012892   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:08.042038   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:08.067708   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0829 20:26:08.095930   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:26:08.127171   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:08.151287   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/embed-certs-388383/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:26:08.175525   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:08.199076   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:08.222783   66989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:08.245783   66989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:08.261839   66989 ssh_runner.go:195] Run: openssl version
	I0829 20:26:08.267545   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:08.278347   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.284232   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.284283   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:08.292024   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:08.306831   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:08.320607   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.325027   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.325070   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:08.330808   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:08.341457   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:08.352323   66989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.356822   66989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.356891   66989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:08.362617   66989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:08.373755   66989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:08.378153   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:08.384225   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:08.390136   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:08.396002   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:08.401713   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:08.407437   66989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
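
Each openssl x509 -checkend 86400 call above exits non-zero when the certificate expires within the next 86400 seconds (24h), which is what would trigger regeneration. A rough Go equivalent of that check (a hypothetical helper, sketch only):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin approximates `openssl x509 -checkend`: report whether the
// PEM certificate at path expires within d. Sketch only.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	ok, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
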
	I0829 20:26:08.413033   66989 kubeadm.go:392] StartCluster: {Name:embed-certs-388383 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-388383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:08.413119   66989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:08.413173   66989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:08.450685   66989 cri.go:89] found id: ""
	I0829 20:26:08.450757   66989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:08.460787   66989 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:08.460809   66989 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:08.460853   66989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:08.470179   66989 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:08.471673   66989 kubeconfig.go:125] found "embed-certs-388383" server: "https://192.168.61.202:8443"
	I0829 20:26:08.474839   66989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:08.483951   66989 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0829 20:26:08.483992   66989 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:08.484007   66989 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:08.484085   66989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:08.525947   66989 cri.go:89] found id: ""
	I0829 20:26:08.526013   66989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:08.541862   66989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:08.551179   66989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:08.551200   66989 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:08.551249   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:26:08.559897   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:08.559970   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:08.569317   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:26:08.577858   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:08.577905   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:08.587113   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:26:08.595645   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:08.595705   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:08.604803   66989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:26:08.613070   66989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:08.613125   66989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:26:08.622037   66989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
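
The grep-then-rm sequence above keeps a kubeconfig only when it already references https://control-plane.minikube.internal:8443; anything missing or stale is deleted, the staged kubeadm.yaml.new is promoted, and `kubeadm init phase kubeconfig` regenerates the files. A sketch of that loop (the run callback stands in for minikube's SSH runner):

package main

import "fmt"

// cleanStaleKubeconfigs sketches the loop visible in the log: keep each
// kubeconfig only if it already points at the expected control-plane
// endpoint, otherwise delete it so kubeadm can regenerate it. Sketch only.
func cleanStaleKubeconfigs(run func(cmd string) error, endpoint string) {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
			run("sudo rm -f " + path) // missing or stale: drop and regenerate
		}
	}
}

func main() {
	run := func(cmd string) error { fmt.Println("would run:", cmd); return nil }
	cleanStaleKubeconfigs(run, "https://control-plane.minikube.internal:8443")
}
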
	I0829 20:26:08.631330   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:08.742682   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:08.866518   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:08.866954   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:08.866985   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:08.866896   68552 retry.go:31] will retry after 1.707701085s: waiting for machine to come up
	I0829 20:26:10.576676   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:10.577094   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:10.577124   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:10.577047   68552 retry.go:31] will retry after 1.496799212s: waiting for machine to come up
	I0829 20:26:12.075964   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:12.076412   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:12.076451   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:12.076377   68552 retry.go:31] will retry after 2.246779697s: waiting for machine to come up
	I0829 20:26:09.809078   66989 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.066360218s)
	I0829 20:26:09.809118   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.027517   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.095959   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:10.199656   66989 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:10.199745   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:10.700569   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:11.200798   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:11.700664   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.200052   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.700839   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:12.715319   66989 api_server.go:72] duration metric: took 2.515661322s to wait for apiserver process to appear ...
	I0829 20:26:12.715351   66989 api_server.go:88] waiting for apiserver healthz status ...
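
The 403s that follow are expected right after a restart: the probe hits /healthz anonymously before the RBAC bootstrap roles exist, then gets 500 while poststarthooks finish, and polling continues until the endpoint returns 200. A sketch of such a poll loop, with TLS verification skipped purely for illustration (minikube trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz keeps hitting the apiserver's /healthz until it returns 200 or
// the deadline passes. 403 (anonymous user, RBAC not bootstrapped yet) and
// 500 (poststarthooks still failing) count as "not ready yet", matching the
// log above. Sketch only; minikube's real check also inspects the body.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert isn't trusted by this ad-hoc client; skip
		// verification for the sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(pollHealthz("https://192.168.61.202:8443/healthz", 4*time.Minute))
}
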
	I0829 20:26:12.715374   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.687527   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:15.687558   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:15.687572   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.716339   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:15.716365   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:15.716378   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:15.750700   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:15.750732   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
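
The healthz bodies above are Kubernetes' standard check listing: each [+]/[-] entry is a named health check or post-start hook, and "reason withheld" means failure details are hidden from the unauthenticated caller (the earlier 403s show the anonymous user is rejected outright until the RBAC bootstrap roles exist). minikube simply keeps polling until the endpoint returns a plain 200 "ok". A minimal sketch of such an anonymous probe, assuming a self-signed apiserver certificate (hence the skipped TLS verification):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.61.202:8443/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // 200 with body "ok" means healthy; 403/500 bodies like the ones
        // above mean the apiserver is up but still completing startup hooks.
        fmt.Printf("status %d:\n%s\n", resp.StatusCode, body)
    }
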
	I0829 20:26:16.216255   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:16.224376   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:16.224401   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:16.715457   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:16.723983   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:16.724004   66989 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:17.215562   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:26:17.219605   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0829 20:26:17.225473   66989 api_server.go:141] control plane version: v1.31.0
	I0829 20:26:17.225496   66989 api_server.go:131] duration metric: took 4.510137186s to wait for apiserver health ...
	I0829 20:26:17.225504   66989 cni.go:84] Creating CNI manager for ""
	I0829 20:26:17.225509   66989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:17.227379   66989 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:26:14.324452   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:14.324770   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:14.324808   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:14.324748   68552 retry.go:31] will retry after 3.172592587s: waiting for machine to come up
	I0829 20:26:17.500203   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:17.500540   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | unable to find current IP address of domain old-k8s-version-032002 in network mk-old-k8s-version-032002
	I0829 20:26:17.500573   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | I0829 20:26:17.500485   68552 retry.go:31] will retry after 2.81386002s: waiting for machine to come up
	I0829 20:26:17.228505   66989 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:26:17.238762   66989 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
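
The 496-byte file copied above is a CNI conflist for the standard bridge plugin, which is what the "recommending bridge" line selected for the kvm2 + crio combination. A sketch of writing a conflist of that shape; the JSON field values are representative of the bridge plugin schema, not a byte-for-byte copy of minikube's generated file:

    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
        // CRI-O loads the lexically first file in /etc/cni/net.d, hence the
        // "1-" prefix. Writing here requires root on the target machine.
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
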
	I0829 20:26:17.264380   66989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:26:17.274981   66989 system_pods.go:59] 8 kube-system pods found
	I0829 20:26:17.275009   66989 system_pods.go:61] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:26:17.275016   66989 system_pods.go:61] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:26:17.275023   66989 system_pods.go:61] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:26:17.275028   66989 system_pods.go:61] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:26:17.275033   66989 system_pods.go:61] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:26:17.275038   66989 system_pods.go:61] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:26:17.275043   66989 system_pods.go:61] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:26:17.275048   66989 system_pods.go:61] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:26:17.275056   66989 system_pods.go:74] duration metric: took 10.656426ms to wait for pod list to return data ...
	I0829 20:26:17.275074   66989 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:26:17.279480   66989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:26:17.279504   66989 node_conditions.go:123] node cpu capacity is 2
	I0829 20:26:17.279519   66989 node_conditions.go:105] duration metric: took 4.439469ms to run NodePressure ...
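
The system_pods and node_conditions checks are plain list calls against the freshly restarted apiserver. A client-go sketch of the capacity check above (client-go is an assumed dependency; the kubeconfig path is the one this run writes):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19530-11185/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Matches the two figures logged above: ephemeral storage and cpu.
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name,
                n.Status.Capacity.Cpu().String(),
                n.Status.Capacity.StorageEphemeral().String())
        }
    }
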
	I0829 20:26:17.279537   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:17.561282   66989 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:26:17.565287   66989 kubeadm.go:739] kubelet initialised
	I0829 20:26:17.565307   66989 kubeadm.go:740] duration metric: took 4.002605ms waiting for restarted kubelet to initialise ...
	I0829 20:26:17.565314   66989 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:17.570104   66989 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.576425   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.576454   66989 pod_ready.go:82] duration metric: took 6.324083ms for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.576464   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.576474   66989 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.582501   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "etcd-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.582523   66989 pod_ready.go:82] duration metric: took 6.040325ms for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.582547   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "etcd-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.582556   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.588534   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.588554   66989 pod_ready.go:82] duration metric: took 5.988678ms for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.588562   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.588568   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:17.668334   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.668365   66989 pod_ready.go:82] duration metric: took 79.787211ms for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:17.668378   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:17.668386   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.068248   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-proxy-fcxs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.068286   66989 pod_ready.go:82] duration metric: took 399.880238ms for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.068299   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-proxy-fcxs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.068308   66989 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.468096   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.468126   66989 pod_ready.go:82] duration metric: took 399.810823ms for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.468134   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.468141   66989 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:18.868444   66989 pod_ready.go:98] node "embed-certs-388383" hosting pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.868478   66989 pod_ready.go:82] duration metric: took 400.329102ms for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	E0829 20:26:18.868490   66989 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-388383" hosting pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:18.868499   66989 pod_ready.go:39] duration metric: took 1.303176044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
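
Every pod wait above is skipped with the same reason: a pod cannot be Ready while its node still reports Ready:"False", so the loop gates on the node condition before even considering the pod condition. A small sketch of both checks using the Kubernetes API types:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func nodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        node := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
            {Type: corev1.NodeReady, Status: corev1.ConditionFalse},
        }}}
        pod := &corev1.Pod{}
        // Mirrors the log: while the node is NotReady, the pod wait is skipped.
        fmt.Println("node ready:", nodeReady(node), "pod ready:", podReady(pod))
    }
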
	I0829 20:26:18.868519   66989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:26:18.880892   66989 ops.go:34] apiserver oom_adj: -16
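
/proc/<pid>/oom_adj is the legacy OOM-killer knob (range -17..15, where -17 disables OOM killing entirely); the -16 read above means the kernel will strongly avoid killing the apiserver under memory pressure. A sketch of the same read in Go:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // The log resolves the apiserver's pid with pgrep; we read our own
        // here just to keep the sketch self-contained. Modern kernels also
        // expose the newer oom_score_adj (range -1000..1000).
        pid := os.Getpid()
        data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            panic(err)
        }
        fmt.Println("oom_adj:", strings.TrimSpace(string(data)))
    }
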
	I0829 20:26:18.880916   66989 kubeadm.go:597] duration metric: took 10.42010114s to restartPrimaryControlPlane
	I0829 20:26:18.880925   66989 kubeadm.go:394] duration metric: took 10.467899141s to StartCluster
	I0829 20:26:18.880946   66989 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:18.881032   66989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:26:18.884130   66989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:18.884619   66989 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:26:18.884674   66989 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:26:18.884749   66989 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-388383"
	I0829 20:26:18.884765   66989 addons.go:69] Setting default-storageclass=true in profile "embed-certs-388383"
	I0829 20:26:18.884783   66989 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-388383"
	W0829 20:26:18.884792   66989 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:26:18.884804   66989 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-388383"
	I0829 20:26:18.884816   66989 addons.go:69] Setting metrics-server=true in profile "embed-certs-388383"
	I0829 20:26:18.884828   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.884856   66989 addons.go:234] Setting addon metrics-server=true in "embed-certs-388383"
	W0829 20:26:18.884877   66989 addons.go:243] addon metrics-server should already be in state true
	I0829 20:26:18.884884   66989 config.go:182] Loaded profile config "embed-certs-388383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:18.884912   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.885134   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885176   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.885216   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885249   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.885291   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.885338   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.886484   66989 out.go:177] * Verifying Kubernetes components...
	I0829 20:26:18.887938   66989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:18.900910   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I0829 20:26:18.901377   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.901917   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.901938   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.902300   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.903062   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.903110   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.903810   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0829 20:26:18.903824   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0829 20:26:18.904282   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.904303   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.904673   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.904691   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.904829   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.904845   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.905017   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.905428   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.905462   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.905664   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.905860   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.909388   66989 addons.go:234] Setting addon default-storageclass=true in "embed-certs-388383"
	W0829 20:26:18.909408   66989 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:26:18.909437   66989 host.go:66] Checking if "embed-certs-388383" exists ...
	I0829 20:26:18.909793   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.909839   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.921180   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
	I0829 20:26:18.921597   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.922074   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.922087   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.922470   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.922697   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.922725   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I0829 20:26:18.923052   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.923592   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.923610   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.923919   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.924057   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.924063   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45681
	I0829 20:26:18.924461   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.924519   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.924984   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.925002   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.925632   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.925682   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.926152   66989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:18.926194   66989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:18.926494   66989 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:18.927266   66989 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:26:18.928130   66989 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:26:18.928141   66989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:26:18.928155   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.928843   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:26:18.928863   66989 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:26:18.928888   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.931716   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932273   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.932296   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932424   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.932456   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.932644   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.932810   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.932869   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.932891   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.933050   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.933100   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:18.933271   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.933426   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.933598   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
	I0829 20:26:18.942718   66989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38109
	I0829 20:26:18.943150   66989 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:18.943532   66989 main.go:141] libmachine: Using API Version  1
	I0829 20:26:18.943553   66989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:18.943908   66989 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:18.944027   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetState
	I0829 20:26:18.945304   66989 main.go:141] libmachine: (embed-certs-388383) Calling .DriverName
	I0829 20:26:18.945498   66989 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:26:18.945510   66989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:26:18.945522   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHHostname
	I0829 20:26:18.948108   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.948469   66989 main.go:141] libmachine: (embed-certs-388383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:5a:0c", ip: ""} in network mk-embed-certs-388383: {Iface:virbr4 ExpiryTime:2024-08-29 21:25:52 +0000 UTC Type:0 Mac:52:54:00:6c:5a:0c Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-388383 Clientid:01:52:54:00:6c:5a:0c}
	I0829 20:26:18.948494   66989 main.go:141] libmachine: (embed-certs-388383) DBG | domain embed-certs-388383 has defined IP address 192.168.61.202 and MAC address 52:54:00:6c:5a:0c in network mk-embed-certs-388383
	I0829 20:26:18.948730   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHPort
	I0829 20:26:18.948889   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHKeyPath
	I0829 20:26:18.949085   66989 main.go:141] libmachine: (embed-certs-388383) Calling .GetSSHUsername
	I0829 20:26:18.949222   66989 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/embed-certs-388383/id_rsa Username:docker}
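
The repeated "Launching plugin server ... Plugin server listening at 127.0.0.1:PORT ... Calling .GetVersion / .SetConfigRaw / .GetMachineName" sequences are libmachine's driver-plugin handshake: each kvm2 driver instance runs as a separate process serving Go net/rpc on a random loopback port, and minikube calls methods on it remotely. A minimal sketch of that shape, reduced to a single method (the real protocol has many more):

    package main

    import (
        "fmt"
        "net"
        "net/rpc"
    )

    type Driver struct{}

    // GetVersion mirrors the first handshake call in the log.
    func (d *Driver) GetVersion(_ struct{}, version *int) error {
        *version = 1 // "Using API Version  1"
        return nil
    }

    func main() {
        server := rpc.NewServer()
        if err := server.Register(&Driver{}); err != nil {
            panic(err)
        }
        l, err := net.Listen("tcp", "127.0.0.1:0") // random loopback port, as in the log
        if err != nil {
            panic(err)
        }
        fmt.Println("Plugin server listening at address", l.Addr())
        go server.Accept(l)

        client, err := rpc.Dial("tcp", l.Addr().String())
        if err != nil {
            panic(err)
        }
        var version int
        if err := client.Call("Driver.GetVersion", struct{}{}, &version); err != nil {
            panic(err)
        }
        fmt.Println("Using API Version", version)
    }
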
	I0829 20:26:19.111953   66989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:19.131195   66989 node_ready.go:35] waiting up to 6m0s for node "embed-certs-388383" to be "Ready" ...
	I0829 20:26:19.246857   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:26:19.269511   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:26:19.269670   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:26:19.269691   66989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:26:19.346200   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:26:19.346234   66989 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:26:19.374530   66989 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:26:19.374566   66989 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:26:19.418474   66989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:26:20.495022   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.225476769s)
	I0829 20:26:20.495077   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495090   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495185   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.248286753s)
	I0829 20:26:20.495232   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495249   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495572   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.495600   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.495611   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495619   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.495634   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.495663   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.495664   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.495678   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.495688   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.496014   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.496029   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.496061   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.496097   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.496111   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.504149   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.504182   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.504419   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.504436   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.519341   66989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.100829284s)
	I0829 20:26:20.519396   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.519422   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.519670   66989 main.go:141] libmachine: (embed-certs-388383) DBG | Closing plugin on server side
	I0829 20:26:20.519716   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.519734   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.519746   66989 main.go:141] libmachine: Making call to close driver server
	I0829 20:26:20.519755   66989 main.go:141] libmachine: (embed-certs-388383) Calling .Close
	I0829 20:26:20.520040   66989 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:26:20.520055   66989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:26:20.520072   66989 addons.go:475] Verifying addon metrics-server=true in "embed-certs-388383"
	I0829 20:26:20.523102   66989 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0829 20:26:21.515365   68084 start.go:364] duration metric: took 2m4.795762476s to acquireMachinesLock for "default-k8s-diff-port-145096"
	I0829 20:26:21.515428   68084 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:21.515439   68084 fix.go:54] fixHost starting: 
	I0829 20:26:21.515864   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:21.515904   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:21.535441   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0829 20:26:21.535886   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:21.536390   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:26:21.536414   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:21.536819   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:21.537035   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:21.537203   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:26:21.538735   68084 fix.go:112] recreateIfNeeded on default-k8s-diff-port-145096: state=Stopped err=<nil>
	I0829 20:26:21.538762   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	W0829 20:26:21.538901   68084 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:21.540852   68084 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-145096" ...
	I0829 20:26:21.542258   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Start
	I0829 20:26:21.542429   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring networks are active...
	I0829 20:26:21.543181   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring network default is active
	I0829 20:26:21.543522   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Ensuring network mk-default-k8s-diff-port-145096 is active
	I0829 20:26:21.543872   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Getting domain xml...
	I0829 20:26:21.544627   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Creating domain...
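
The Start sequence for the stopped default-k8s-diff-port-145096 VM (ensure the networks are active, fetch the domain XML, create the domain) maps directly onto libvirt calls. A hedged sketch using the libvirt Go bindings (libvirt.org/go/libvirt, an assumed cgo dependency, not necessarily the calls minikube's kvm2 driver makes):

    package main

    import (
        "fmt"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // "Ensuring network default is active"
        net, err := conn.LookupNetworkByName("default")
        if err != nil {
            panic(err)
        }
        if active, _ := net.IsActive(); !active {
            if err := net.Create(); err != nil { // starts a defined network
                panic(err)
            }
        }

        // "Getting domain xml..." then "Creating domain..."
        dom, err := conn.LookupDomainByName("default-k8s-diff-port-145096")
        if err != nil {
            panic(err)
        }
        xml, err := dom.GetXMLDesc(0)
        if err != nil {
            panic(err)
        }
        fmt.Println("domain xml bytes:", len(xml))
        if err := dom.Create(); err != nil { // boots the stopped domain
            panic(err)
        }
    }
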
	I0829 20:26:20.317138   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317672   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has current primary IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.317700   67607 main.go:141] libmachine: (old-k8s-version-032002) Found IP for machine: 192.168.39.116
	I0829 20:26:20.317716   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserving static IP address...
	I0829 20:26:20.318143   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.318169   67607 main.go:141] libmachine: (old-k8s-version-032002) Reserved static IP address: 192.168.39.116
	I0829 20:26:20.318189   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | skip adding static IP to network mk-old-k8s-version-032002 - found existing host DHCP lease matching {name: "old-k8s-version-032002", mac: "52:54:00:a8:ca:96", ip: "192.168.39.116"}
	I0829 20:26:20.318208   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Getting to WaitForSSH function...
	I0829 20:26:20.318217   67607 main.go:141] libmachine: (old-k8s-version-032002) Waiting for SSH to be available...
	I0829 20:26:20.320598   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.320961   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.320989   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.321082   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH client type: external
	I0829 20:26:20.321121   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa (-rw-------)
	I0829 20:26:20.321156   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:20.321171   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | About to run SSH command:
	I0829 20:26:20.321185   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | exit 0
	I0829 20:26:20.446805   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | SSH cmd err, output: <nil>: 
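
With "SSH client type: external", libmachine shells out to /usr/bin/ssh using exactly the option list shown in the DBG line, and probes availability by running "exit 0" until it succeeds. A sketch of that path (host, key path, and user copied from the log; running it obviously requires that VM to exist):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa",
            "-p", "22",
            "docker@192.168.39.116",
            "exit 0", // the availability probe from the log
        }
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }
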
	I0829 20:26:20.447204   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetConfigRaw
	I0829 20:26:20.447944   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.450726   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.451160   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.451464   67607 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/config.json ...
	I0829 20:26:20.451670   67607 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:20.451690   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:20.451886   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.454120   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454496   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.454566   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.454648   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.454808   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.454975   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.455123   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.455282   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.455520   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.455533   67607 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:20.555074   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:20.555100   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555331   67607 buildroot.go:166] provisioning hostname "old-k8s-version-032002"
	I0829 20:26:20.555353   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.555540   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.558576   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559058   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.559086   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.559273   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.559490   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559661   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.559834   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.560026   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.560189   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.560201   67607 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-032002 && echo "old-k8s-version-032002" | sudo tee /etc/hostname
	I0829 20:26:20.675352   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-032002
	
	I0829 20:26:20.675400   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.678472   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.678908   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.678944   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.679139   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.679341   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679533   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.679710   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.679884   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:20.680090   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:20.680108   67607 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-032002' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-032002/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-032002' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:20.789673   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
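
Note: the shell block above is minikube's idempotent /etc/hosts fixup after renaming the guest: if a line already ends in the hostname nothing happens, an existing 127.0.1.1 entry gets rewritten, and otherwise a fresh entry is appended. A minimal Go sketch of the same decision logic (the function is illustrative; minikube runs this through SSH and sed, not in-process):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the shell logic above: leave the file alone if an
    // entry for the hostname exists, rewrite an existing 127.0.1.1 line, or
    // append a new one. (Illustrative only.)
    func ensureHostsEntry(hosts, name string) string {
        hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`)
        if hasName.MatchString(hosts) {
            return hosts // already present
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "old-k8s-version-032002"))
    }
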
	I0829 20:26:20.789713   67607 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:20.789744   67607 buildroot.go:174] setting up certificates
	I0829 20:26:20.789753   67607 provision.go:84] configureAuth start
	I0829 20:26:20.789761   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetMachineName
	I0829 20:26:20.790067   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:20.792822   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793152   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.793173   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.793338   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.795624   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.795948   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.795974   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.796080   67607 provision.go:143] copyHostCerts
	I0829 20:26:20.796148   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:20.796168   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:20.796236   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:20.796344   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:20.796355   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:20.796387   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:20.796467   67607 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:20.796476   67607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:20.796503   67607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:20.796573   67607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-032002 san=[127.0.0.1 192.168.39.116 localhost minikube old-k8s-version-032002]
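
Note: the provision.go:117 line above generates the machine's server certificate with SANs covering loopback, the machine IP, and the standard hostnames, so both local and remote clients can verify it. A minimal sketch of issuing a cert with that SAN set; unlike minikube, which signs with its CA key pair, this sketch self-signs with a throwaway key for brevity:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-032002"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            // SAN set matching the san=[...] list in the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.116")},
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-032002"},
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued server cert, %d DER bytes, SANs=%v %v\n", len(der), tmpl.DNSNames, tmpl.IPAddresses)
    }
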
	I0829 20:26:20.906382   67607 provision.go:177] copyRemoteCerts
	I0829 20:26:20.906436   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:20.906466   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:20.909180   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909488   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:20.909519   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:20.909666   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:20.909831   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:20.909963   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:20.910062   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:20.989017   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:21.018571   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 20:26:21.043015   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:21.067288   67607 provision.go:87] duration metric: took 277.522292ms to configureAuth
	I0829 20:26:21.067322   67607 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:21.067527   67607 config.go:182] Loaded profile config "old-k8s-version-032002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 20:26:21.067607   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.070264   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070642   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.070679   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.070881   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.071088   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071288   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.071465   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.071661   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.071886   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.071923   67607 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:21.290979   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:21.291003   67607 machine.go:96] duration metric: took 839.319831ms to provisionDockerMachine
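
Note: the sysconfig step just above writes CRIO_MINIKUBE_OPTIONS so CRI-O treats the in-cluster service CIDR (10.96.0.0/12) as an insecure registry, then restarts the runtime. A sketch of assembling that SSH command from the CIDR (the helper name is illustrative, not minikube's actual code):

    package main

    import "fmt"

    // buildCrioEnvCmd reproduces the command shown in the log above.
    func buildCrioEnvCmd(serviceCIDR string) string {
        env := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
        return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, env)
    }

    func main() {
        fmt.Println(buildCrioEnvCmd("10.96.0.0/12"))
    }
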
	I0829 20:26:21.291014   67607 start.go:293] postStartSetup for "old-k8s-version-032002" (driver="kvm2")
	I0829 20:26:21.291026   67607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:21.291046   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.291342   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:21.291366   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.293946   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294245   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.294273   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.294464   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.294686   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.294840   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.294964   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.373592   67607 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:21.377797   67607 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:21.377826   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:21.377892   67607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:21.377966   67607 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:21.378054   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:21.387886   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:21.413456   67607 start.go:296] duration metric: took 122.429334ms for postStartSetup
	I0829 20:26:21.413497   67607 fix.go:56] duration metric: took 18.810093949s for fixHost
	I0829 20:26:21.413522   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.416095   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416391   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.416418   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.416594   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.416803   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.416970   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.417115   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.417272   67607 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:21.417474   67607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0829 20:26:21.417489   67607 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:21.515167   67607 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963181.486447470
	
	I0829 20:26:21.515190   67607 fix.go:216] guest clock: 1724963181.486447470
	I0829 20:26:21.515200   67607 fix.go:229] Guest: 2024-08-29 20:26:21.48644747 +0000 UTC Remote: 2024-08-29 20:26:21.413502498 +0000 UTC m=+222.629982255 (delta=72.944972ms)
	I0829 20:26:21.515225   67607 fix.go:200] guest clock delta is within tolerance: 72.944972ms
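
Note: fix.go compares the guest's `date +%s.%N` output against the host clock and only resyncs the guest when the skew exceeds a tolerance. A sketch of the delta check, fed the 72.944972ms delta from the log; the 2s tolerance here is an assumption, not a value read from minikube:

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK returns the absolute guest/host skew and whether it is
    // within tolerance. (Sketch of the check logged above.)
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        guest := time.Unix(1724963181, 486447470)
        host := guest.Add(-72944972 * time.Nanosecond) // delta from the log: 72.944972ms
        d, ok := clockDeltaOK(guest, host, 2*time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
    }
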
	I0829 20:26:21.515232   67607 start.go:83] releasing machines lock for "old-k8s-version-032002", held for 18.911866017s
	I0829 20:26:21.515278   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.515596   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:21.518247   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518682   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.518710   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.518835   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519589   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .DriverName
	I0829 20:26:21.519680   67607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:21.519736   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.519843   67607 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:21.519869   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHHostname
	I0829 20:26:21.522261   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522561   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522614   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.522643   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.522763   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.522919   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523044   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:21.523071   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523073   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:21.523241   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHPort
	I0829 20:26:21.523240   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.523413   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHKeyPath
	I0829 20:26:21.523560   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetSSHUsername
	I0829 20:26:21.523712   67607 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/old-k8s-version-032002/id_rsa Username:docker}
	I0829 20:26:21.599524   67607 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:21.629122   67607 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:21.778437   67607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:21.784642   67607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:21.784714   67607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:21.802019   67607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:21.802043   67607 start.go:495] detecting cgroup driver to use...
	I0829 20:26:21.802100   67607 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:21.817407   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:21.831514   67607 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:21.831578   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:21.845224   67607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:21.858522   67607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:21.972769   67607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:22.115154   67607 docker.go:233] disabling docker service ...
	I0829 20:26:22.115240   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:22.130015   67607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:22.143186   67607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:22.294113   67607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:26:22.432373   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:22.446427   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:22.465151   67607 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 20:26:22.465218   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.476104   67607 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:22.476177   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.486627   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.497782   67607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:22.509869   67607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:22.521347   67607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:22.531406   67607 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:22.531455   67607 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:22.544949   67607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
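
Note: the three commands above form a fallback chain: probe the bridge-netfilter sysctl, load br_netfilter when the key is missing (the status 255 above), then force IPv4 forwarding on. A sketch of the same sequence (must run as root on the guest):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureBridgeNetfilter mirrors the log's command sequence.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // Key absent: the module is not loaded yet, so load it.
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %w", err)
            }
        }
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
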
	I0829 20:26:22.554918   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:22.687909   67607 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:22.808522   67607 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:22.808595   67607 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:22.814348   67607 start.go:563] Will wait 60s for crictl version
	I0829 20:26:22.814411   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:22.818348   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:22.863797   67607 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:22.863883   67607 ssh_runner.go:195] Run: crio --version
	I0829 20:26:22.893173   67607 ssh_runner.go:195] Run: crio --version
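
Note: start.go above waits up to 60s for the CRI-O socket to appear and then up to 60s more for `crictl version` to answer. A sketch of that readiness poll, collapsed into a single budget for brevity (the log uses two separate 60s waits):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForCRISocket polls until the socket exists and the runtime answers.
    func waitForCRISocket(sock string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("stat", sock).Run() == nil {
                if exec.Command("sudo", "/usr/bin/crictl", "version").Run() == nil {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("CRI runtime at %s not ready after %v", sock, timeout)
    }

    func main() {
        fmt.Println(waitForCRISocket("/var/run/crio/crio.sock", 60*time.Second))
    }
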
	I0829 20:26:22.923146   67607 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 20:26:22.924299   67607 main.go:141] libmachine: (old-k8s-version-032002) Calling .GetIP
	I0829 20:26:22.927222   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927564   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:96", ip: ""} in network mk-old-k8s-version-032002: {Iface:virbr1 ExpiryTime:2024-08-29 21:26:14 +0000 UTC Type:0 Mac:52:54:00:a8:ca:96 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:old-k8s-version-032002 Clientid:01:52:54:00:a8:ca:96}
	I0829 20:26:22.927589   67607 main.go:141] libmachine: (old-k8s-version-032002) DBG | domain old-k8s-version-032002 has defined IP address 192.168.39.116 and MAC address 52:54:00:a8:ca:96 in network mk-old-k8s-version-032002
	I0829 20:26:22.927772   67607 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:22.932100   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:22.945139   67607 kubeadm.go:883] updating cluster {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:22.945274   67607 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 20:26:22.945334   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:22.990592   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:22.990668   67607 ssh_runner.go:195] Run: which lz4
	I0829 20:26:22.995104   67607 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:22.999667   67607 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:22.999703   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
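
Note: because no preloaded images were found, minikube checks for /preloaded.tar.lz4 on the guest (absent, hence the stat failure above), copies the ~473MB cached tarball over, and then extracts it into /var with xattrs preserved so image layers keep their file capabilities. A sketch of the extraction step as it appears further down in the log (assumes lz4 and the tarball are present on the guest):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload runs the tar invocation from the log.
    func extractPreload(tarball string) error {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
        }
        return nil
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4"); err != nil {
            fmt.Println(err)
        }
    }
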
	I0829 20:26:20.524280   66989 addons.go:510] duration metric: took 1.639608208s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0829 20:26:21.135090   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:23.136839   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:22.825998   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting to get IP...
	I0829 20:26:22.827278   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:22.827766   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:22.827883   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:22.827750   68757 retry.go:31] will retry after 212.207753ms: waiting for machine to come up
	I0829 20:26:23.041113   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.041553   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.041588   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.041508   68757 retry.go:31] will retry after 291.9464ms: waiting for machine to come up
	I0829 20:26:23.335081   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.336072   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.336121   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.336041   68757 retry.go:31] will retry after 478.578755ms: waiting for machine to come up
	I0829 20:26:23.816669   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.817178   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:23.817233   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:23.817087   68757 retry.go:31] will retry after 501.093836ms: waiting for machine to come up
	I0829 20:26:24.319836   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.320392   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.320418   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:24.320343   68757 retry.go:31] will retry after 524.430407ms: waiting for machine to come up
	I0829 20:26:24.846908   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.847388   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:24.847418   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:24.847361   68757 retry.go:31] will retry after 701.573237ms: waiting for machine to come up
	I0829 20:26:25.550328   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:25.550786   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:25.550811   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:25.550727   68757 retry.go:31] will retry after 916.084079ms: waiting for machine to come up
	I0829 20:26:26.468529   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:26.468981   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:26.469012   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:26.468921   68757 retry.go:31] will retry after 1.216322833s: waiting for machine to come up
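
Note: the retry.go lines above show the wait-for-IP loop: each failed DHCP-lease lookup schedules another attempt after a growing, jittered delay (212ms, 291ms, 478ms, ... 1.2s). A sketch of that pattern; lookupIP stands in for the libvirt lease query, and the growth factor is an assumption chosen to roughly match the logged intervals:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls for the machine's IP with a growing, jittered delay.
    func waitForIP(lookupIP func() (string, bool), budget time.Duration) (string, error) {
        delay := 200 * time.Millisecond
        deadline := time.Now().Add(budget)
        for time.Now().Before(deadline) {
            if ip, ok := lookupIP(); ok {
                return ip, nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay)/2))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
            time.Sleep(jittered)
            delay = delay * 3 / 2 // grow roughly like the intervals in the log
        }
        return "", errors.New("machine never reported an IP")
    }

    func main() {
        calls := 0
        ip, err := waitForIP(func() (string, bool) {
            calls++
            return "192.168.39.116", calls > 3 // pretend the lease shows up on the 4th poll
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
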
	I0829 20:26:24.727216   67607 crio.go:462] duration metric: took 1.732148589s to copy over tarball
	I0829 20:26:24.727294   67607 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:27.715640   67607 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.988318238s)
	I0829 20:26:27.715664   67607 crio.go:469] duration metric: took 2.988419957s to extract the tarball
	I0829 20:26:27.715672   67607 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 20:26:27.764192   67607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:27.797388   67607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 20:26:27.797422   67607 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:26:27.797501   67607 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.797536   67607 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.797549   67607 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.797557   67607 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 20:26:27.797511   67607 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:27.797629   67607 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.797637   67607 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.797519   67607 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799128   67607 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:27.799208   67607 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:27.799251   67607 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 20:26:27.799361   67607 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.799386   67607 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.799463   67607 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:27.799697   67607 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.799830   67607 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:27.978022   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:27.978296   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:27.981616   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:27.998987   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.001078   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.004185   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.004672   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 20:26:28.103885   67607 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 20:26:28.103953   67607 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.104013   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.122203   67607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:26:28.129983   67607 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 20:26:28.130028   67607 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.130076   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.165427   67607 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 20:26:28.165470   67607 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.165521   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.199971   67607 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 20:26:28.199990   67607 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 20:26:28.200015   67607 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.200021   67607 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200105   67607 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 20:26:28.200155   67607 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.200199   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200204   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.200062   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.200113   67607 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 20:26:28.200325   67607 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 20:26:28.200356   67607 ssh_runner.go:195] Run: which crictl
	I0829 20:26:28.329091   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.329139   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.329187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.329260   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.329362   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.329316   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.484805   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.484857   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.484888   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.484943   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.484963   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.485009   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.487351   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 20:26:28.615121   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 20:26:28.615187   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 20:26:28.645371   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 20:26:28.645433   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 20:26:28.645524   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 20:26:28.645573   67607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 20:26:28.645638   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 20:26:28.729141   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 20:26:28.762530   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 20:26:28.762592   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 20:26:28.782117   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 20:26:28.782155   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 20:26:28.782195   67607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 20:26:28.782229   67607 cache_images.go:92] duration metric: took 984.791099ms to LoadCachedImages
	W0829 20:26:28.782293   67607 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
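
Note: the sequence above is the cached-image fallback: `podman image inspect --format {{.Id}}` decides whether each pinned image is present under the expected ID, mismatched or missing images are removed via crictl and reloaded from the local cache directory, and here the load fails because the cache files themselves are absent. A sketch of the needs-transfer decision; inspectID stands in for the podman inspect call:

    package main

    import "fmt"

    // needsTransfer returns true when the runtime either doesn't have the
    // image or has it under a different ID than the pinned digest.
    func needsTransfer(image, wantID string, inspectID func(string) (string, error)) bool {
        gotID, err := inspectID(image)
        return err != nil || gotID != wantID
    }

    func main() {
        inspect := func(string) (string, error) { return "", fmt.Errorf("no such image") }
        img := "registry.k8s.io/kube-apiserver:v1.20.0"
        fmt.Printf("%q needs transfer: %v\n", img,
            needsTransfer(img, "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99", inspect))
    }
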
	I0829 20:26:28.782310   67607 kubeadm.go:934] updating node { 192.168.39.116 8443 v1.20.0 crio true true} ...
	I0829 20:26:28.782452   67607 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-032002 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:26:28.782518   67607 ssh_runner.go:195] Run: crio config
	I0829 20:26:25.635616   66989 node_ready.go:53] node "embed-certs-388383" has status "Ready":"False"
	I0829 20:26:26.635463   66989 node_ready.go:49] node "embed-certs-388383" has status "Ready":"True"
	I0829 20:26:26.635488   66989 node_ready.go:38] duration metric: took 7.504259002s for node "embed-certs-388383" to be "Ready" ...
	I0829 20:26:26.635497   66989 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:26.641316   66989 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:26.649602   66989 pod_ready.go:93] pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:26.649634   66989 pod_ready.go:82] duration metric: took 8.284428ms for pod "coredns-6f6b679f8f-dg6t6" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:26.649656   66989 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:28.658281   66989 pod_ready.go:103] pod "etcd-embed-certs-388383" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:27.686642   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:27.687071   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:27.687097   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:27.687030   68757 retry.go:31] will retry after 1.410599528s: waiting for machine to come up
	I0829 20:26:29.099622   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:29.100175   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:29.100207   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:29.100083   68757 retry.go:31] will retry after 1.929618787s: waiting for machine to come up
	I0829 20:26:31.031864   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:31.032434   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:31.032467   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:31.032367   68757 retry.go:31] will retry after 1.926271655s: waiting for machine to come up
	I0829 20:26:28.832785   67607 cni.go:84] Creating CNI manager for ""
	I0829 20:26:28.832807   67607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:28.832824   67607 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:28.832843   67607 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-032002 NodeName:old-k8s-version-032002 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 20:26:28.832982   67607 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-032002"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:28.833059   67607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 20:26:28.843483   67607 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:28.843566   67607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:28.853276   67607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 20:26:28.870579   67607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:28.888053   67607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
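
Note: the kubelet drop-in and the kubeadm.yaml written above are rendered from templates with the node's IP, name, and CRI socket substituted in. A minimal text/template sketch of rendering the InitConfiguration stanza; the field names here are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
        tmpl := template.Must(template.New("init").Parse(initCfg))
        if err := tmpl.Execute(os.Stdout, map[string]any{
            "NodeIP":    "192.168.39.116",
            "Port":      8443,
            "CRISocket": "/var/run/crio/crio.sock",
            "NodeName":  "old-k8s-version-032002",
        }); err != nil {
            panic(err)
        }
    }
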
	I0829 20:26:28.905988   67607 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:28.910048   67607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:28.924996   67607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:29.075015   67607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:29.095381   67607 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002 for IP: 192.168.39.116
	I0829 20:26:29.095411   67607 certs.go:194] generating shared ca certs ...
	I0829 20:26:29.095430   67607 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.095605   67607 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:29.095686   67607 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:29.095706   67607 certs.go:256] generating profile certs ...
	I0829 20:26:29.095847   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.key
	I0829 20:26:29.095928   67607 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key.a1a2aebb
	I0829 20:26:29.095984   67607 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key
	I0829 20:26:29.096135   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:29.096184   67607 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:29.096198   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:29.096227   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:29.096259   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:29.096299   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:29.096378   67607 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:29.097276   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:29.144259   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:29.171420   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:29.198554   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:29.230750   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 20:26:29.269978   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:26:29.299839   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:29.333742   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:26:29.358352   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:29.382648   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:29.406773   67607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:29.434106   67607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:29.451913   67607 ssh_runner.go:195] Run: openssl version
	I0829 20:26:29.457722   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:29.469147   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474048   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.474094   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:29.480082   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:26:29.491083   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:29.501994   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508594   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.508643   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:29.516331   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:29.531067   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:29.543998   67607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548781   67607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.548845   67607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:29.555052   67607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
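The openssl x509 -hash step above computes the certificate's subject-name hash, which is how OpenSSL locates trusted CAs under /etc/ssl/certs: each CA must be reachable through a <hash>.0 symlink (b5213941.0 for minikubeCA.pem here). A minimal sketch of the same install step for one certificate:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"   # e.g. /etc/ssl/certs/b5213941.0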
	I0829 20:26:29.567902   67607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:29.572879   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:29.579506   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:29.585887   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:29.592262   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:29.598566   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:29.604672   67607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
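Each -checkend 86400 call asks openssl to exit non-zero if the certificate expires within the next 86400 seconds (24 hours), so a failing check triggers regeneration now instead of a TLS outage mid-test. The same check standalone (sketch):

	if ! openssl x509 -noout -checkend 86400 \
	    -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "cert expires within 24h; regenerate" >&2
	fi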
	I0829 20:26:29.610830   67607 kubeadm.go:392] StartCluster: {Name:old-k8s-version-032002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-032002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:29.612915   67607 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:29.613015   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.655224   67607 cri.go:89] found id: ""
	I0829 20:26:29.655314   67607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:29.666216   67607 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:29.666241   67607 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:29.666292   67607 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:29.676908   67607 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:29.678276   67607 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-032002" does not appear in /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:26:29.679313   67607 kubeconfig.go:62] /home/jenkins/minikube-integration/19530-11185/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-032002" cluster setting kubeconfig missing "old-k8s-version-032002" context setting]
	I0829 20:26:29.680756   67607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:29.764872   67607 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:29.776873   67607 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.116
	I0829 20:26:29.776914   67607 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:29.776926   67607 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:29.776987   67607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:29.819268   67607 cri.go:89] found id: ""
	I0829 20:26:29.819347   67607 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:29.840386   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:29.851624   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:29.851650   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:29.851710   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:26:29.861439   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:29.861504   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:29.871594   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:26:29.881126   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:29.881199   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:29.890984   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.900838   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:29.900913   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:29.910677   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:26:29.920008   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:29.920073   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
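The four grep/rm pairs above apply one idempotent rule: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is treated as stale and removed so kubeadm can regenerate it. In this run the files do not exist at all (grep exits 2), and rm -f tolerates that. A condensed sketch of the same loop:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' \
	    "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	done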
	I0829 20:26:29.929631   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:29.939864   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.096029   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:30.816696   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.043310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:31.139291   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
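Rather than a full kubeadm init, the restart path replays only the phases needed to bring a stopped control plane back: certs, kubeconfig, kubelet-start, control-plane, and etcd. The same sequence by hand — a sketch using the pinned binary path and config file from this run:

	K8S=/var/lib/minikube/binaries/v1.20.0
	CFG=/var/tmp/minikube/kubeadm.yaml
	for phase in "certs all" "kubeconfig all" "kubelet-start" \
	             "control-plane all" "etcd local"; do
	  # $phase intentionally unquoted so "certs all" splits into subcommand + arg
	  sudo env PATH="$K8S:$PATH" kubeadm init phase $phase --config "$CFG"
	done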
	I0829 20:26:31.248095   67607 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:31.248190   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:31.749101   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.248718   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:32.748783   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.248254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:33.748557   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
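The repeated pgrep lines are a poll loop: minikube re-checks for a kube-apiserver process roughly every 500ms (visible in the timestamps) until one appears or the wait deadline lapses. A bare-bones equivalent (sketch):

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5   # matches the ~500ms cadence in the log above
	done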
	I0829 20:26:30.180025   66989 pod_ready.go:93] pod "etcd-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:30.180056   66989 pod_ready.go:82] duration metric: took 3.530390258s for pod "etcd-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:30.180069   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.187272   66989 pod_ready.go:93] pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.187300   66989 pod_ready.go:82] duration metric: took 2.007222016s for pod "kube-apiserver-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.187313   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.192038   66989 pod_ready.go:93] pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.192062   66989 pod_ready.go:82] duration metric: took 4.740656ms for pod "kube-controller-manager-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.192075   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.196712   66989 pod_ready.go:93] pod "kube-proxy-fcxs4" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.196736   66989 pod_ready.go:82] duration metric: took 4.653538ms for pod "kube-proxy-fcxs4" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.196748   66989 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.200491   66989 pod_ready.go:93] pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:32.200517   66989 pod_ready.go:82] duration metric: took 3.758002ms for pod "kube-scheduler-embed-certs-388383" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:32.200528   66989 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:34.207857   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
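These pod_ready waits poll each control-plane pod's Ready condition; metrics-server stays "Ready":"False" here, which is what the metrics-server test failures in this report reflect. Roughly the same check with stock kubectl — a sketch, using the context name from this run:

	kubectl --context embed-certs-388383 -n kube-system \
	  wait pod/kube-apiserver-embed-certs-388383 \
	  --for=condition=Ready --timeout=6m0s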
	I0829 20:26:32.960872   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:32.961256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:32.961284   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:32.961208   68757 retry.go:31] will retry after 2.304628323s: waiting for machine to come up
	I0829 20:26:35.267593   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:35.268009   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | unable to find current IP address of domain default-k8s-diff-port-145096 in network mk-default-k8s-diff-port-145096
	I0829 20:26:35.268041   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | I0829 20:26:35.267970   68757 retry.go:31] will retry after 3.753063387s: waiting for machine to come up
	I0829 20:26:34.249231   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:34.748279   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.249171   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:35.748943   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.249181   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.748307   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.248484   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:37.748261   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.248332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:38.748423   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:36.705814   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:38.708205   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:40.175557   66841 start.go:364] duration metric: took 53.54411059s to acquireMachinesLock for "no-preload-397724"
	I0829 20:26:40.175617   66841 start.go:96] Skipping create...Using existing machine configuration
	I0829 20:26:40.175626   66841 fix.go:54] fixHost starting: 
	I0829 20:26:40.176060   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:26:40.176098   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:26:40.193828   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45897
	I0829 20:26:40.194231   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:26:40.194840   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:26:40.194867   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:26:40.195175   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:26:40.195364   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:40.195528   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:26:40.197109   66841 fix.go:112] recreateIfNeeded on no-preload-397724: state=Stopped err=<nil>
	I0829 20:26:40.197128   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	W0829 20:26:40.197278   66841 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 20:26:40.199263   66841 out.go:177] * Restarting existing kvm2 VM for "no-preload-397724" ...
	I0829 20:26:39.023902   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.024374   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Found IP for machine: 192.168.72.140
	I0829 20:26:39.024399   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has current primary IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.024413   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Reserving static IP address...
	I0829 20:26:39.024832   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Reserved static IP address: 192.168.72.140
	I0829 20:26:39.024856   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Waiting for SSH to be available...
	I0829 20:26:39.024894   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-145096", mac: "52:54:00:36:fe:e0", ip: "192.168.72.140"} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.024925   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | skip adding static IP to network mk-default-k8s-diff-port-145096 - found existing host DHCP lease matching {name: "default-k8s-diff-port-145096", mac: "52:54:00:36:fe:e0", ip: "192.168.72.140"}
	I0829 20:26:39.024947   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Getting to WaitForSSH function...
	I0829 20:26:39.026796   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.027100   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.027129   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.027265   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Using SSH client type: external
	I0829 20:26:39.027288   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa (-rw-------)
	I0829 20:26:39.027318   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:39.027333   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | About to run SSH command:
	I0829 20:26:39.027346   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | exit 0
	I0829 20:26:39.146830   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | SSH cmd err, output: <nil>: 
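WaitForSSH simply retries a no-op command over SSH until the guest answers; once `exit 0` succeeds (the empty "SSH cmd err, output" above), the VM is considered reachable. A stripped-down equivalent using the same key and options as the log — a sketch, with the key path abbreviated:

	until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o ConnectTimeout=10 \
	    -i ~/.minikube/machines/default-k8s-diff-port-145096/id_rsa \
	    docker@192.168.72.140 exit 0 2>/dev/null; do
	  sleep 1
	done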
	I0829 20:26:39.147242   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetConfigRaw
	I0829 20:26:39.147931   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:39.150652   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.151055   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.151084   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.151395   68084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/config.json ...
	I0829 20:26:39.151581   68084 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:39.151601   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:39.151814   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.153861   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.154189   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.154222   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.154351   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.154575   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.154746   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.154875   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.155010   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.155219   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.155235   68084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:39.258973   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:39.259006   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.259261   68084 buildroot.go:166] provisioning hostname "default-k8s-diff-port-145096"
	I0829 20:26:39.259292   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.259467   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.262018   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.262472   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.262501   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.262707   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.262886   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.263034   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.263185   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.263344   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.263530   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.263547   68084 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-145096 && echo "default-k8s-diff-port-145096" | sudo tee /etc/hostname
	I0829 20:26:39.379437   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-145096
	
	I0829 20:26:39.379479   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.382263   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.382682   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.382704   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.382913   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.383128   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.383280   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.383389   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.383520   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.383675   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.383692   68084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-145096' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-145096/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-145096' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:39.491756   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:39.491790   68084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:39.491855   68084 buildroot.go:174] setting up certificates
	I0829 20:26:39.491869   68084 provision.go:84] configureAuth start
	I0829 20:26:39.491883   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetMachineName
	I0829 20:26:39.492150   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:39.494882   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.495241   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.495269   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.495452   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.497708   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.497980   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.498013   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.498097   68084 provision.go:143] copyHostCerts
	I0829 20:26:39.498157   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:39.498179   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:39.498249   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:39.498347   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:39.498356   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:39.498377   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:39.498430   68084 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:39.498437   68084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:39.498455   68084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:39.498507   68084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-145096 san=[127.0.0.1 192.168.72.140 default-k8s-diff-port-145096 localhost minikube]
	I0829 20:26:39.584313   68084 provision.go:177] copyRemoteCerts
	I0829 20:26:39.584372   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:39.584398   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.587054   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.587377   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.587400   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.587630   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.587823   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.587952   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.588087   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:39.664394   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:39.688852   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0829 20:26:39.714653   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:39.737662   68084 provision.go:87] duration metric: took 245.781265ms to configureAuth
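The generated server.pem must carry every name and address a client may dial — 127.0.0.1, the VM IP, the machine name, localhost, and minikube, per the san=[...] list above. To verify the SANs actually present in the copied certificate (sketch):

	openssl x509 -noout -text -in /etc/docker/server.pem \
	  | grep -A1 'Subject Alternative Name'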
	I0829 20:26:39.737687   68084 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:39.737844   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:39.737911   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.740391   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.740659   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.740688   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.740911   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.741107   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.741256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.741434   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.741612   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:39.741777   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:39.741794   68084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:39.954811   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:39.954846   68084 machine.go:96] duration metric: took 803.251945ms to provisionDockerMachine
	I0829 20:26:39.954862   68084 start.go:293] postStartSetup for "default-k8s-diff-port-145096" (driver="kvm2")
	I0829 20:26:39.954877   68084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:39.954898   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:39.955237   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:39.955267   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:39.958071   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.958575   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:39.958605   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:39.958772   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:39.958969   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:39.959126   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:39.959287   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.037153   68084 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:40.041150   68084 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:40.041176   68084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:40.041235   68084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:40.041325   68084 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:40.041415   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:40.050654   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:40.073789   68084 start.go:296] duration metric: took 118.907407ms for postStartSetup
	I0829 20:26:40.073826   68084 fix.go:56] duration metric: took 18.558388385s for fixHost
	I0829 20:26:40.073846   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.076397   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.076749   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.076789   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.076999   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.077200   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.077374   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.077480   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.077598   68084 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:40.077754   68084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.140 22 <nil> <nil>}
	I0829 20:26:40.077765   68084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:40.175410   68084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963200.123461148
	
	I0829 20:26:40.175431   68084 fix.go:216] guest clock: 1724963200.123461148
	I0829 20:26:40.175437   68084 fix.go:229] Guest: 2024-08-29 20:26:40.123461148 +0000 UTC Remote: 2024-08-29 20:26:40.073830105 +0000 UTC m=+143.488576066 (delta=49.631043ms)
	I0829 20:26:40.175456   68084 fix.go:200] guest clock delta is within tolerance: 49.631043ms
	I0829 20:26:40.175463   68084 start.go:83] releasing machines lock for "default-k8s-diff-port-145096", held for 18.660059953s
	I0829 20:26:40.175497   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.175781   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:40.179031   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.179457   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.179495   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.179695   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180444   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:26:40.180528   68084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:40.180581   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.180706   68084 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:40.180729   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:26:40.183580   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.183819   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.183963   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.183989   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.184172   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:40.184174   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.184213   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:40.184345   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:26:40.184416   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.184511   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:26:40.184624   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.184626   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:26:40.184794   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.184896   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:26:40.259854   68084 ssh_runner.go:195] Run: systemctl --version
	I0829 20:26:40.290102   68084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:26:40.439112   68084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:26:40.449465   68084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:26:40.449546   68084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:26:40.471182   68084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:26:40.471209   68084 start.go:495] detecting cgroup driver to use...
	I0829 20:26:40.471276   68084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:26:40.492605   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:26:40.508500   68084 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:26:40.508561   68084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:26:40.527534   68084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:26:40.542013   68084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:26:40.663843   68084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:26:40.837228   68084 docker.go:233] disabling docker service ...
	I0829 20:26:40.837293   68084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:26:40.854285   68084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:26:40.870148   68084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:26:41.017156   68084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
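Because this profile runs CRI-O, both cri-docker and docker are stopped, their sockets disabled, and the services masked so socket activation cannot resurrect them behind CRI-O's back. The same teardown, condensed (sketch; `|| true` tolerates units that are absent on the image):

	for u in cri-docker.socket cri-docker.service docker.socket docker.service; do
	  sudo systemctl stop -f "$u" 2>/dev/null || true
	done
	sudo systemctl disable cri-docker.socket docker.socket || true
	sudo systemctl mask cri-docker.service docker.service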
	I0829 20:26:41.150436   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 20:26:41.165239   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:26:41.184783   68084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:26:41.184847   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.197358   68084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:26:41.197417   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.211222   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.225297   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.237205   68084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:26:41.249875   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.261928   68084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:26:41.286145   68084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
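
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: it points pause_image at registry.k8s.io/pause:3.10, sets cgroup_manager to cgroupfs, re-creates conmon_cgroup = "pod" under it, and seeds a default_sysctls list with net.ipv4.ip_unprivileged_port_start=0. A sketch of the whole-line substitution those sed expressions perform, done in Go on an in-memory config (illustrative only; minikube runs sed over SSH):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setCrioOption rewrites a `key = value` line in crio.conf content, the
    // same s|^.*key = .*$|key = "value"| substitution the log's sed runs do.
    func setCrioOption(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
    }

    func main() {
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
        conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10")
        conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
        fmt.Print(conf)
    }
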
	I0829 20:26:41.299119   68084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:26:41.313001   68084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:26:41.313062   68084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:26:41.335390   68084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
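
The sysctl probe above fails with status 255 because /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so minikube loads the module and then enables IPv4 forwarding. A small Go sketch of that recover-then-configure flow (requires root; mirrors the shell commands in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureBridgeNetfilter loads br_netfilter when the bridge-nf sysctl is
    // missing, then writes 1 to ip_forward, as the log does via echo.
    func ensureBridgeNetfilter() error {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %v (%s)", err, out)
            }
        }
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Println(err)
        }
    }
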
	I0829 20:26:41.348803   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:41.464387   68084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:26:41.564675   68084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:26:41.564746   68084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:26:41.569620   68084 start.go:563] Will wait 60s for crictl version
	I0829 20:26:41.569680   68084 ssh_runner.go:195] Run: which crictl
	I0829 20:26:41.573519   68084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:26:41.615105   68084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:26:41.615190   68084 ssh_runner.go:195] Run: crio --version
	I0829 20:26:41.644597   68084 ssh_runner.go:195] Run: crio --version
	I0829 20:26:41.678211   68084 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
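
After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear and another 60s for crictl to answer a version query before declaring the runtime ready. A sketch of that stat-until-deadline loop (minikube stats the path over SSH; the poll interval here is an assumption):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for a socket path until it exists or the deadline
    // passes, the "Will wait 60s for socket path" behaviour in the log.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
        fmt.Println("crio socket is ready")
    }
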
	I0829 20:26:39.248306   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:39.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.248975   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.748948   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.249144   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:41.749013   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.248363   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:42.748624   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.248833   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:43.748535   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:40.200748   66841 main.go:141] libmachine: (no-preload-397724) Calling .Start
	I0829 20:26:40.200955   66841 main.go:141] libmachine: (no-preload-397724) Ensuring networks are active...
	I0829 20:26:40.201793   66841 main.go:141] libmachine: (no-preload-397724) Ensuring network default is active
	I0829 20:26:40.202128   66841 main.go:141] libmachine: (no-preload-397724) Ensuring network mk-no-preload-397724 is active
	I0829 20:26:40.202729   66841 main.go:141] libmachine: (no-preload-397724) Getting domain xml...
	I0829 20:26:40.203538   66841 main.go:141] libmachine: (no-preload-397724) Creating domain...
	I0829 20:26:41.516739   66841 main.go:141] libmachine: (no-preload-397724) Waiting to get IP...
	I0829 20:26:41.517840   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:41.518273   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:41.518353   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:41.518262   68926 retry.go:31] will retry after 295.070588ms: waiting for machine to come up
	I0829 20:26:41.814782   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:41.815346   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:41.815369   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:41.815291   68926 retry.go:31] will retry after 239.48527ms: waiting for machine to come up
	I0829 20:26:42.056957   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:42.057459   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:42.057509   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:42.057436   68926 retry.go:31] will retry after 452.012872ms: waiting for machine to come up
	I0829 20:26:42.511068   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:42.511551   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:42.511590   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:42.511520   68926 retry.go:31] will retry after 552.227159ms: waiting for machine to come up
	I0829 20:26:43.066096   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:43.066642   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:43.066673   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:43.066605   68926 retry.go:31] will retry after 666.699647ms: waiting for machine to come up
	I0829 20:26:43.734695   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:43.735402   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:43.735430   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:43.735309   68926 retry.go:31] will retry after 770.756485ms: waiting for machine to come up
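
The retry.go:31 lines above show libmachine polling libvirt's DHCP leases for the new domain's IP, sleeping a randomized, growing interval between lookups (295ms, 239ms, 452ms, climbing toward seconds). A self-contained sketch of that retry-with-jittered-backoff shape; the exact growth curve and cap here are assumptions, not minikube's actual constants:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryBackoff calls fn until it succeeds or attempts run out, doubling a
    // jittered delay up to a cap, like the waiting-for-machine loop logged above.
    func retryBackoff(fn func() error, attempts int, base time.Duration) error {
        delay := base
        for i := 0; i < attempts; i++ {
            if err := fn(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return errors.New("machine did not come up in time")
    }

    func main() {
        lookups := 0
        err := retryBackoff(func() error {
            lookups++
            if lookups < 4 {
                return errors.New("no IP yet") // simulated missing DHCP lease
            }
            return nil
        }, 10, 300*time.Millisecond)
        fmt.Println("result:", err, "after", lookups, "lookups")
    }
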
	I0829 20:26:40.709553   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:42.712799   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:41.679441   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetIP
	I0829 20:26:41.682807   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:41.683205   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:26:41.683236   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:26:41.683489   68084 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 20:26:41.688766   68084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
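
The bash one-liner above makes the hosts entry idempotent: grep -v strips any existing host.minikube.internal line, echo appends the fresh mapping, and the result is written to a temp file and copied over /etc/hosts with sudo. The same upsert in Go against a scratch file (the path and demo content are illustrative):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost rewrites an /etc/hosts-style file so exactly one line maps
    // host to ip, mirroring the grep -v + echo + cp pattern in the log.
    func upsertHost(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line == "" || strings.HasSuffix(line, "\t"+host) {
                continue // drop blanks and any stale mapping
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        path := "/tmp/hosts.demo"
        _ = os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0644)
        if err := upsertHost(path, "192.168.72.1", "host.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }
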
	I0829 20:26:41.705764   68084 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:26:41.705918   68084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:26:41.705977   68084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:41.752884   68084 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:26:41.752955   68084 ssh_runner.go:195] Run: which lz4
	I0829 20:26:41.757600   68084 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 20:26:41.762158   68084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 20:26:41.762188   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 20:26:43.201094   68084 crio.go:462] duration metric: took 1.443534343s to copy over tarball
	I0829 20:26:43.201176   68084 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 20:26:45.400911   68084 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199703125s)
	I0829 20:26:45.400942   68084 crio.go:469] duration metric: took 2.199820098s to extract the tarball
	I0829 20:26:45.400948   68084 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 20:26:45.439120   68084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:26:45.482658   68084 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 20:26:45.482679   68084 cache_images.go:84] Images are preloaded, skipping loading
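
The sequence above is the preload fast path: crictl images is asked for its JSON inventory, the expected kube-apiserver:v1.31.0 tag is missing, so the 389 MB preloaded-images tarball is scp'd to /preloaded.tar.lz4, unpacked with tar -I lz4 into /var, and the image list is checked again, after which loading individual images is skipped. A sketch of the "is this image already present" probe (assumes crictl is on PATH and runnable via sudo):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // hasImage runs `crictl images --output json` and scans repo tags, the
    // check the log uses to decide whether the preload tarball is needed.
    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var resp struct {
            Images []struct {
                RepoTags []string `json:"repoTags"`
            } `json:"images"`
        }
        if err := json.Unmarshal(out, &resp); err != nil {
            return false, err
        }
        for _, img := range resp.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0")
        fmt.Println("preloaded:", ok, "err:", err)
    }
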
	I0829 20:26:45.482687   68084 kubeadm.go:934] updating node { 192.168.72.140 8444 v1.31.0 crio true true} ...
	I0829 20:26:45.482801   68084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-145096 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
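
The kubelet unit fragment above is the 10-kubeadm.conf drop-in that lands in /etc/systemd/system/kubelet.service.d/ a few lines later (328 bytes). The empty ExecStart= line is deliberate: systemd requires clearing an inherited ExecStart before a drop-in may replace it. A sketch that assembles the same drop-in, with the values taken straight from the log:

    package main

    import (
        "fmt"
    )

    func main() {
        version, node, ip := "v1.31.0", "default-k8s-diff-port-145096", "192.168.72.140"
        dropIn := "[Unit]\n" +
            "Wants=crio.service\n\n" +
            "[Service]\n" +
            "ExecStart=\n" + // clears the ExecStart inherited from kubelet.service
            fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet "+
                "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
                "--config=/var/lib/kubelet/config.yaml "+
                "--hostname-override=%s "+
                "--kubeconfig=/etc/kubernetes/kubelet.conf "+
                "--node-ip=%s\n\n", version, node, ip) +
            "[Install]\n"
        fmt.Print(dropIn)
    }
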
	I0829 20:26:45.482873   68084 ssh_runner.go:195] Run: crio config
	I0829 20:26:45.532108   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:26:45.532132   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:45.532146   68084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:26:45.532169   68084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.140 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-145096 NodeName:default-k8s-diff-port-145096 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:26:45.532310   68084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.140
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-145096"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:26:45.532367   68084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:26:45.542670   68084 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:26:45.542744   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:26:45.552622   68084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0829 20:26:45.569765   68084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:26:45.590972   68084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0829 20:26:45.611421   68084 ssh_runner.go:195] Run: grep 192.168.72.140	control-plane.minikube.internal$ /etc/hosts
	I0829 20:26:45.615585   68084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:26:45.627911   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:26:45.757504   68084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:26:45.776103   68084 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096 for IP: 192.168.72.140
	I0829 20:26:45.776128   68084 certs.go:194] generating shared ca certs ...
	I0829 20:26:45.776159   68084 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:26:45.776337   68084 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:26:45.776388   68084 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:26:45.776400   68084 certs.go:256] generating profile certs ...
	I0829 20:26:45.776511   68084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/client.key
	I0829 20:26:45.776600   68084 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.key.5a49b6b2
	I0829 20:26:45.776650   68084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.key
	I0829 20:26:45.776788   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:26:45.776827   68084 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:26:45.776840   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:26:45.776869   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:26:45.776940   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:26:45.776977   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:26:45.777035   68084 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:45.777916   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:26:45.823419   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:26:45.868291   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:26:45.905178   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:26:45.934956   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 20:26:45.967570   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 20:26:45.994332   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:26:46.019268   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/default-k8s-diff-port-145096/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 20:26:46.044075   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:26:46.067906   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:26:46.092513   68084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:26:46.117686   68084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:26:46.137048   68084 ssh_runner.go:195] Run: openssl version
	I0829 20:26:46.143203   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:26:46.156407   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.161397   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.161461   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:26:46.167587   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:26:46.179034   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:26:46.190204   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.194953   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.195010   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:26:46.203121   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:26:46.218606   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:26:46.233586   68084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.240100   68084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.240155   68084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:26:46.247473   68084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
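
The openssl/ln pairs above are how the CA certificates become discoverable: openssl x509 -hash prints the subject-name hash OpenSSL uses to look up CAs, and each PEM is linked into /etc/ssl/certs as <hash>.0 (b5213941.0 for minikubeCA in this run). A sketch of the hash-and-symlink step (assumes the openssl binary is available; paths as in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // trustCert links a CA certificate into certsDir under its OpenSSL
    // subject hash (<hash>.0), the same openssl x509 -hash plus ln -fs
    // pairing the log performs for minikubeCA and the two extra certs.
    func trustCert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
        _ = os.Remove(link) // replace any stale link, like ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        fmt.Println("err:", err)
    }
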
	I0829 20:26:46.259417   68084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:26:46.264875   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:26:46.270914   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:26:46.277211   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:26:46.283138   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:26:46.289137   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:26:46.295044   68084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
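
Each openssl x509 -checkend 86400 above asks one question: does this control-plane certificate expire within the next 24 hours (86400 seconds)? Exit status 0 means it is still good, so no regeneration is needed. The same check done natively with crypto/x509 (the path is one of the certs from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether a PEM certificate's NotAfter falls inside
    // the next d, the native equivalent of `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        fmt.Println("expires within 24h:", soon, "err:", err)
    }
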
	I0829 20:26:46.301027   68084 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-145096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-145096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:26:46.301120   68084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:26:46.301177   68084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:46.342913   68084 cri.go:89] found id: ""
	I0829 20:26:46.342988   68084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:26:46.354198   68084 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:26:46.354221   68084 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:26:46.354269   68084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:26:46.364173   68084 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:26:46.365182   68084 kubeconfig.go:125] found "default-k8s-diff-port-145096" server: "https://192.168.72.140:8444"
	I0829 20:26:46.367560   68084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:26:46.377550   68084 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.140
	I0829 20:26:46.377584   68084 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:26:46.377596   68084 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:26:46.377647   68084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:26:46.419141   68084 cri.go:89] found id: ""
	I0829 20:26:46.419215   68084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:26:46.438037   68084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:26:46.449021   68084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:26:46.449041   68084 kubeadm.go:157] found existing configuration files:
	
	I0829 20:26:46.449093   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 20:26:46.459396   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:26:46.459445   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:26:46.469964   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 20:26:46.479604   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:26:46.479655   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:26:46.492672   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 20:26:46.504656   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:26:46.504714   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:26:46.520206   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 20:26:46.532067   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:26:46.532137   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:26:46.541931   68084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:26:46.551973   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:44.248615   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:44.748528   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:45.748453   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.248927   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:46.748628   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.248556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:47.748332   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.248373   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.749111   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:44.507808   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:44.508340   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:44.508375   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:44.508288   68926 retry.go:31] will retry after 754.614285ms: waiting for machine to come up
	I0829 20:26:45.264587   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:45.265039   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:45.265065   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:45.265003   68926 retry.go:31] will retry after 1.3758308s: waiting for machine to come up
	I0829 20:26:46.642139   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:46.642666   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:46.642690   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:46.642612   68926 retry.go:31] will retry after 1.255043608s: waiting for machine to come up
	I0829 20:26:47.899849   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:47.900330   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:47.900360   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:47.900291   68926 retry.go:31] will retry after 1.517293529s: waiting for machine to come up
	I0829 20:26:45.208067   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:48.177040   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:46.668397   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.497182   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.725573   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:47.785427   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
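
Because the kubeconfig check found no usable state, the restart path rebuilds the control plane piecewise with kubeadm init phases rather than a full kubeadm init: certs, kubeconfigs, kubelet start, static control-plane pods, then local etcd, all against the same kubeadm.yaml. A sketch that replays that sequence (assumes kubeadm is on PATH; minikube actually prefixes PATH with its own binaries directory and runs this over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runKubeadmPhases replays the restart sequence from the log: each
    // `kubeadm init phase` regenerates one slice of control-plane state.
    func runKubeadmPhases(config string) error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", config)
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("phase %v failed: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := runKubeadmPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
            fmt.Println(err)
        }
    }
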
	I0829 20:26:47.850878   68084 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:26:47.850972   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.351404   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:48.852023   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.351402   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.367249   68084 api_server.go:72] duration metric: took 1.516370766s to wait for apiserver process to appear ...
	I0829 20:26:49.367283   68084 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:26:49.367312   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.595653   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:51.595683   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:51.595698   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.609883   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:26:51.609989   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:26:51.867454   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:51.872297   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:51.872328   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:52.367462   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:52.375300   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:26:52.375333   68084 api_server.go:103] status: https://192.168.72.140:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:26:52.867827   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:26:52.872814   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 200:
	ok
	I0829 20:26:52.881061   68084 api_server.go:141] control plane version: v1.31.0
	I0829 20:26:52.881092   68084 api_server.go:131] duration metric: took 3.513801329s to wait for apiserver health ...
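
The healthz probes above trace a normal apiserver warm-up: 403 while the RBAC bootstrap roles that let unauthenticated clients read /healthz do not exist yet, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks are still pending, and finally 200 once every hook reports ok. A poll loop in that spirit (skipping TLS verification is a simplification for this sketch, not what minikube does):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls /healthz until it returns 200, tolerating the 403 and
    // 500 warm-up responses seen in the log.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz %d: %.60s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.72.140:8444/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
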
	I0829 20:26:52.881102   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:26:52.881111   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:26:52.882993   68084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:26:49.248291   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.748360   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.248427   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:50.749087   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.248381   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:51.748488   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.249250   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:52.748715   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:53.748915   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:49.419781   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:49.420286   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:49.420314   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:49.420244   68926 retry.go:31] will retry after 2.638145598s: waiting for machine to come up
	I0829 20:26:52.059935   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:52.060367   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:52.060411   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:52.060341   68926 retry.go:31] will retry after 2.696474949s: waiting for machine to come up
	I0829 20:26:50.207945   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:52.709407   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:52.884310   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:26:52.901134   68084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
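
With the apiserver healthy, the bridge CNI recommended earlier for the kvm2 + crio combination is installed by writing a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The JSON below is an illustrative reconstruction of what a minimal bridge conflist of that shape looks like, not a byte-for-byte copy of the file minikube ships:

    package main

    import (
        "fmt"
        "os"
    )

    // conflist is a minimal CNI bridge configuration: a bridge plugin with
    // host-local IPAM over the 10.244.0.0/16 pod CIDR, plus portmap.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    `

    func main() {
        err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644)
        fmt.Println("err:", err)
    }
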
	I0829 20:26:52.931390   68084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:26:52.952109   68084 system_pods.go:59] 8 kube-system pods found
	I0829 20:26:52.952154   68084 system_pods.go:61] "coredns-6f6b679f8f-5mkxp" [1d3c3a01-1fa6-4d1d-8750-deef4475ba96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:26:52.952166   68084 system_pods.go:61] "etcd-default-k8s-diff-port-145096" [03096d69-48af-4372-9fa0-5a45dcb9603c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:26:52.952177   68084 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-145096" [4be8793a-7934-4c89-a840-49e769673f5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:26:52.952188   68084 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-145096" [a3bec7f8-8163-4afa-af53-282ad755b788] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:26:52.952202   68084 system_pods.go:61] "kube-proxy-b4ffx" [d97e74d5-21d4-4c96-9d94-77767fc4e609] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:26:52.952210   68084 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-145096" [c416b52b-ebf4-4714-bed6-3d25bfaa373c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:26:52.952217   68084 system_pods.go:61] "metrics-server-6867b74b74-5kk6q" [e74224b1-8242-4f7f-b8d6-7d9d4839be53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:26:52.952224   68084 system_pods.go:61] "storage-provisioner" [4e97da7c-af4b-40b3-83fb-82b6c2a2adef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:26:52.952236   68084 system_pods.go:74] duration metric: took 20.81979ms to wait for pod list to return data ...
	I0829 20:26:52.952245   68084 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:26:52.961169   68084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:26:52.961202   68084 node_conditions.go:123] node cpu capacity is 2
	I0829 20:26:52.961214   68084 node_conditions.go:105] duration metric: took 8.963546ms to run NodePressure ...
	I0829 20:26:52.961234   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:26:53.425201   68084 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:26:53.429605   68084 kubeadm.go:739] kubelet initialised
	I0829 20:26:53.429625   68084 kubeadm.go:740] duration metric: took 4.401784ms waiting for restarted kubelet to initialise ...
	I0829 20:26:53.429632   68084 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:26:53.434501   68084 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:55.442290   68084 pod_ready.go:103] pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:54.248998   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:54.748438   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.249066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:55.749293   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.248457   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:56.748509   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.248949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:57.748228   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.248717   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:58.748412   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
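The block of repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines is a liveness poll: the same command is re-run over SSH roughly every 500ms until the apiserver process appears or a deadline expires. A rough stand-alone sketch of that loop, assuming passwordless SSH to the node (host string and interval are placeholders, not minikube internals):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer re-runs pgrep on the remote host until the
// kube-apiserver process shows up or the deadline expires.
func waitForAPIServer(host string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same pattern as the log: exact full-command-line match (-x -f),
		// newest matching PID (-n).
		cmd := exec.Command("ssh", host, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer("docker@192.168.50.214", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```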
	I0829 20:26:54.760175   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:54.760689   66841 main.go:141] libmachine: (no-preload-397724) DBG | unable to find current IP address of domain no-preload-397724 in network mk-no-preload-397724
	I0829 20:26:54.760736   66841 main.go:141] libmachine: (no-preload-397724) DBG | I0829 20:26:54.760667   68926 retry.go:31] will retry after 3.651969786s: waiting for machine to come up
	I0829 20:26:58.415601   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.416019   66841 main.go:141] libmachine: (no-preload-397724) Found IP for machine: 192.168.50.214
	I0829 20:26:58.416045   66841 main.go:141] libmachine: (no-preload-397724) Reserving static IP address...
	I0829 20:26:58.416063   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has current primary IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.416507   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "no-preload-397724", mac: "52:54:00:e9:bf:ac", ip: "192.168.50.214"} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.416533   66841 main.go:141] libmachine: (no-preload-397724) DBG | skip adding static IP to network mk-no-preload-397724 - found existing host DHCP lease matching {name: "no-preload-397724", mac: "52:54:00:e9:bf:ac", ip: "192.168.50.214"}
	I0829 20:26:58.416543   66841 main.go:141] libmachine: (no-preload-397724) Reserved static IP address: 192.168.50.214
	I0829 20:26:58.416552   66841 main.go:141] libmachine: (no-preload-397724) Waiting for SSH to be available...
	I0829 20:26:58.416562   66841 main.go:141] libmachine: (no-preload-397724) DBG | Getting to WaitForSSH function...
	I0829 20:26:58.418849   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.419170   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.419199   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.419312   66841 main.go:141] libmachine: (no-preload-397724) DBG | Using SSH client type: external
	I0829 20:26:58.419351   66841 main.go:141] libmachine: (no-preload-397724) DBG | Using SSH private key: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa (-rw-------)
	I0829 20:26:58.419397   66841 main.go:141] libmachine: (no-preload-397724) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 20:26:58.419414   66841 main.go:141] libmachine: (no-preload-397724) DBG | About to run SSH command:
	I0829 20:26:58.419444   66841 main.go:141] libmachine: (no-preload-397724) DBG | exit 0
	I0829 20:26:58.542594   66841 main.go:141] libmachine: (no-preload-397724) DBG | SSH cmd err, output: <nil>: 
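The WaitForSSH step above boils down to retrying a connection to port 22 and then running `exit 0` through the session as a sanity check. A minimal sketch of the reachability half, using a plain TCP dial (the address and timeout are illustrative):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH mirrors the "Getting to WaitForSSH function" step: retry a TCP
// connect to the guest's port 22 until it answers. The real flow then runs
// `exit 0` over an SSH session, as seen in the log.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("SSH on %s not reachable within %s", addr, timeout)
}

func main() {
	fmt.Println(waitForSSH("192.168.50.214:22", time.Minute))
}
```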
	I0829 20:26:58.542925   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetConfigRaw
	I0829 20:26:58.543582   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:58.546057   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.546384   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.546422   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.546691   66841 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/config.json ...
	I0829 20:26:58.546871   66841 machine.go:93] provisionDockerMachine start ...
	I0829 20:26:58.546890   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:58.547113   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.549493   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.549816   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.549854   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.549972   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.550140   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.550260   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.550388   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.550581   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.550805   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.550822   66841 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 20:26:58.658784   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 20:26:58.658827   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.659063   66841 buildroot.go:166] provisioning hostname "no-preload-397724"
	I0829 20:26:58.659083   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.659220   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.661932   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.662294   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.662320   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.662485   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.662695   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.662880   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.663011   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.663168   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.663343   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.663356   66841 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-397724 && echo "no-preload-397724" | sudo tee /etc/hostname
	I0829 20:26:58.790591   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-397724
	
	I0829 20:26:58.790618   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.793294   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.793612   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.793639   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.793849   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:58.794035   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.794192   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:58.794289   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:58.794430   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:58.794656   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:58.794678   66841 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-397724' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-397724/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-397724' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 20:26:58.915925   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 20:26:58.915958   66841 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19530-11185/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-11185/.minikube}
	I0829 20:26:58.915981   66841 buildroot.go:174] setting up certificates
	I0829 20:26:58.915991   66841 provision.go:84] configureAuth start
	I0829 20:26:58.916000   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetMachineName
	I0829 20:26:58.916279   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:58.919034   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.919385   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.919415   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.919523   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:58.921483   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.921805   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:58.921831   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:58.922015   66841 provision.go:143] copyHostCerts
	I0829 20:26:58.922062   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem, removing ...
	I0829 20:26:58.922079   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem
	I0829 20:26:58.922135   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/ca.pem (1078 bytes)
	I0829 20:26:58.922242   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem, removing ...
	I0829 20:26:58.922256   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem
	I0829 20:26:58.922288   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/cert.pem (1123 bytes)
	I0829 20:26:58.922365   66841 exec_runner.go:144] found /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem, removing ...
	I0829 20:26:58.922375   66841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem
	I0829 20:26:58.922400   66841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-11185/.minikube/key.pem (1675 bytes)
	I0829 20:26:58.922491   66841 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem org=jenkins.no-preload-397724 san=[127.0.0.1 192.168.50.214 localhost minikube no-preload-397724]
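The provision.go line above generates a server certificate whose SAN list covers every name the node may be reached by. A minimal Go sketch of issuing such a cert with the SANs from the log; note it is self-signed here for brevity, whereas minikube signs with its CA key, and the expiry matches the profile's CertExpiration:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-397724"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s from the profile
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-397724"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.214")},
	}
	// Self-signed sketch: template doubles as the issuer.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```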
	I0829 20:26:55.206462   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:57.207175   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.207454   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.264390   66841 provision.go:177] copyRemoteCerts
	I0829 20:26:59.264446   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 20:26:59.264467   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.267259   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.267603   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.267626   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.267794   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.268014   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.268190   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.268367   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.353746   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 20:26:59.378289   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0829 20:26:59.402330   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 20:26:59.425412   66841 provision.go:87] duration metric: took 509.408381ms to configureAuth
	I0829 20:26:59.425442   66841 buildroot.go:189] setting minikube options for container-runtime
	I0829 20:26:59.425616   66841 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:26:59.425679   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.428148   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.428503   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.428545   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.428698   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.428906   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.429077   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.429227   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.429365   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:59.429511   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:59.429524   66841 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 20:26:59.666382   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 20:26:59.666408   66841 machine.go:96] duration metric: took 1.11952301s to provisionDockerMachine
	I0829 20:26:59.666422   66841 start.go:293] postStartSetup for "no-preload-397724" (driver="kvm2")
	I0829 20:26:59.666436   66841 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 20:26:59.666458   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.666833   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 20:26:59.666881   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.669407   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.669725   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.669751   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.669888   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.670073   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.670214   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.670316   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.753440   66841 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 20:26:59.758408   66841 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 20:26:59.758431   66841 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/addons for local assets ...
	I0829 20:26:59.758509   66841 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-11185/.minikube/files for local assets ...
	I0829 20:26:59.758632   66841 filesync.go:149] local asset: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem -> 183612.pem in /etc/ssl/certs
	I0829 20:26:59.758753   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 20:26:59.768355   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:26:59.792742   66841 start.go:296] duration metric: took 126.308201ms for postStartSetup
	I0829 20:26:59.792782   66841 fix.go:56] duration metric: took 19.617155195s for fixHost
	I0829 20:26:59.792806   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.795380   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.795744   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.795781   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.795917   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.796124   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.796237   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.796376   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.796488   66841 main.go:141] libmachine: Using SSH client type: native
	I0829 20:26:59.796668   66841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0829 20:26:59.796680   66841 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 20:26:59.903539   66841 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724963219.868600963
	
	I0829 20:26:59.903564   66841 fix.go:216] guest clock: 1724963219.868600963
	I0829 20:26:59.903574   66841 fix.go:229] Guest: 2024-08-29 20:26:59.868600963 +0000 UTC Remote: 2024-08-29 20:26:59.792787483 +0000 UTC m=+355.719318860 (delta=75.81348ms)
	I0829 20:26:59.903623   66841 fix.go:200] guest clock delta is within tolerance: 75.81348ms
	I0829 20:26:59.903632   66841 start.go:83] releasing machines lock for "no-preload-397724", held for 19.728042303s
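The fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) against the host clock and accept the 75.81348ms delta as within tolerance. A small sketch of that comparison; the host string is a placeholder and the 2s tolerance is illustrative (the log only shows a ~76ms delta passing):

```go
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// clockDelta reads the guest clock over SSH and returns local-minus-guest.
// float64 parsing loses sub-microsecond precision, which is fine at this scale.
func clockDelta(host string) (time.Duration, error) {
	out, err := exec.Command("ssh", host, "date", "+%s.%N").Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	d, err := clockDelta("docker@192.168.50.214")
	if err != nil {
		fmt.Println(err)
		return
	}
	within := d < 2*time.Second && d > -2*time.Second // illustrative tolerance
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", d, within)
}
```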
	I0829 20:26:59.903676   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.903967   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:26:59.906798   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.907183   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.907212   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.907378   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.907804   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.907970   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:26:59.908038   66841 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 20:26:59.908072   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.908324   66841 ssh_runner.go:195] Run: cat /version.json
	I0829 20:26:59.908346   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:26:59.910843   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911025   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911187   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.911215   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911325   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.911415   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:26:59.911437   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:26:59.911485   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.911640   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:26:59.911649   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.911847   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:26:59.911848   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:26:59.911978   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:26:59.912119   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:27:00.023116   66841 ssh_runner.go:195] Run: systemctl --version
	I0829 20:27:00.029346   66841 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 20:27:00.169122   66841 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 20:27:00.176823   66841 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 20:27:00.176913   66841 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 20:27:00.194795   66841 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 20:27:00.194836   66841 start.go:495] detecting cgroup driver to use...
	I0829 20:27:00.194906   66841 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 20:27:00.212145   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 20:27:00.226584   66841 docker.go:217] disabling cri-docker service (if available) ...
	I0829 20:27:00.226656   66841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 20:27:00.240525   66841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 20:27:00.256847   66841 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 20:27:00.371938   66841 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 20:27:00.516891   66841 docker.go:233] disabling docker service ...
	I0829 20:27:00.516964   66841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 20:27:00.531127   66841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 20:27:00.543483   66841 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 20:27:00.672033   66841 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 20:27:00.794828   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
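The docker.go sequence above stops, disables, and masks cri-docker and docker so that CRI-O is the only runtime left on the guest. A condensed sketch of that sequence; the host string is illustrative, and minikube actually issues these through its own ssh_runner rather than the system ssh binary:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	host := "docker@192.168.50.214" // placeholder
	steps := [][]string{
		{"sudo", "systemctl", "stop", "-f", "cri-docker.socket"},
		{"sudo", "systemctl", "stop", "-f", "cri-docker.service"},
		{"sudo", "systemctl", "disable", "cri-docker.socket"},
		{"sudo", "systemctl", "mask", "cri-docker.service"},
		{"sudo", "systemctl", "stop", "-f", "docker.socket"},
		{"sudo", "systemctl", "stop", "-f", "docker.service"},
		{"sudo", "systemctl", "disable", "docker.socket"},
		{"sudo", "systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		args := append([]string{host}, s...)
		if out, err := exec.Command("ssh", args...).CombinedOutput(); err != nil {
			// Best-effort: some units may not exist on a given ISO.
			fmt.Printf("%v: %v: %s\n", s, err, out)
		}
	}
}
```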
	I0829 20:27:00.809204   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 20:27:00.828484   66841 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 20:27:00.828547   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.839273   66841 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 20:27:00.839344   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.850336   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.860980   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.871661   66841 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 20:27:00.884343   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.895190   66841 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 20:27:00.912700   66841 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
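The crio.go steps above edit CRI-O's drop-in config in place with sed: pin the pause image, switch the cgroup manager to cgroupfs, and seed default_sysctls. A local-file sketch of the first two rewrites in Go; the path and values come from the log, but run this against a copy rather than a live node:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Equivalent to: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Equivalent to: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		fmt.Println(err)
	}
}
```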
	I0829 20:27:00.923383   66841 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 20:27:00.934168   66841 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 20:27:00.934231   66841 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 20:27:00.948181   66841 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
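The status-255 sysctl above is the expected fallback path: when /proc/sys/net/bridge/bridge-nf-call-iptables is missing, the br_netfilter module is loaded, then IPv4 forwarding is enabled. A sketch of that fallback, to be run as root on a disposable VM since it changes live kernel state:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge-netfilter sysctl is absent, load the module that provides it.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v: %s\n", err, out)
			return
		}
	}
	// Equivalent to: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Println(err)
	}
}
```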
	I0829 20:27:00.959121   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:27:01.072055   66841 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 20:27:01.163024   66841 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 20:27:01.163104   66841 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 20:27:01.167949   66841 start.go:563] Will wait 60s for crictl version
	I0829 20:27:01.168011   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.171707   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 20:27:01.212950   66841 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 20:27:01.213031   66841 ssh_runner.go:195] Run: crio --version
	I0829 20:27:01.242181   66841 ssh_runner.go:195] Run: crio --version
	I0829 20:27:01.276389   66841 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
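After restarting CRI-O, start.go waits up to 60s for the runtime socket to appear before probing crictl. A minimal local sketch of that socket wait (path and timeout taken from the log):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls stat until the CRI-O socket file exists, mirroring
// "Will wait 60s for socket path /var/run/crio/crio.sock".
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}
```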
	I0829 20:26:57.441729   68084 pod_ready.go:93] pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace has status "Ready":"True"
	I0829 20:26:57.441753   68084 pod_ready.go:82] duration metric: took 4.007206558s for pod "coredns-6f6b679f8f-5mkxp" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:57.441762   68084 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:26:59.448210   68084 pod_ready.go:103] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:26:59.248692   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:26:59.748815   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.248257   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:00.748264   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.249241   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.748894   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.249045   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:02.748765   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.248902   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:03.748333   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:01.277829   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetIP
	I0829 20:27:01.280762   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:27:01.281144   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:27:01.281171   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:27:01.281367   66841 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 20:27:01.285714   66841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:27:01.297903   66841 kubeadm.go:883] updating cluster {Name:no-preload-397724 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 20:27:01.298010   66841 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 20:27:01.298041   66841 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 20:27:01.331474   66841 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 20:27:01.331498   66841 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 20:27:01.331566   66841 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.331572   66841 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.331609   66841 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.331632   66841 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.331643   66841 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.331615   66841 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0829 20:27:01.331737   66841 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.331758   66841 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.333182   66841 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.333233   66841 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.333206   66841 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.333195   66841 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.333191   66841 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.333278   66841 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.333191   66841 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.333333   66841 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
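Because this is a no-preload profile, every image falls through the local daemon lookup above and is checked on the guest instead: an image "needs transfer" when `podman image inspect` either fails (absent) or returns a different content hash than the one pinned for this Kubernetes version. A sketch of that decision, using the etcd hash shown later in the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer mirrors the cache_images.go decision: transfer when the
// remote runtime lacks the image or stores it under a different ID.
func needsTransfer(host, image, wantID string) bool {
	out, err := exec.Command("ssh", host, "sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // inspect fails when the image is absent
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	fmt.Println(needsTransfer("docker@192.168.50.214",
		"registry.k8s.io/etcd:3.5.15-0",
		"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4"))
}
```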
	I0829 20:27:01.507028   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.514096   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.526653   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.530292   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.531828   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.534432   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.550465   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0829 20:27:01.613161   66841 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0829 20:27:01.613209   66841 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.613287   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.631193   66841 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0829 20:27:01.631236   66841 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.631285   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.687868   66841 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0829 20:27:01.687911   66841 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.687967   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.700369   66841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.713036   66841 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0829 20:27:01.713102   66841 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.713159   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.722934   66841 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0829 20:27:01.722991   66841 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.723042   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.722941   66841 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0829 20:27:01.723130   66841 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.723159   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.785242   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.785246   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.785342   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.785391   66841 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0829 20:27:01.785438   66841 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:01.785450   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.785474   66841 ssh_runner.go:195] Run: which crictl
	I0829 20:27:01.785479   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.785534   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.925322   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:01.925371   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:01.925374   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:01.925474   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:01.925518   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:01.925569   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:01.925593   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.072628   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 20:27:02.072690   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 20:27:02.072744   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 20:27:02.072822   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 20:27:02.072867   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.176999   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0829 20:27:02.177031   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 20:27:02.177503   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 20:27:02.177507   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.177572   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0829 20:27:02.177581   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0829 20:27:02.177678   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:02.177682   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:02.185515   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0829 20:27:02.185585   66841 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:27:02.185624   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:02.259015   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0829 20:27:02.259076   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0829 20:27:02.259087   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0829 20:27:02.259106   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.259113   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0829 20:27:02.259138   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0829 20:27:02.259147   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 20:27:02.259155   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:02.259152   66841 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 20:27:02.259139   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0829 20:27:02.259157   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:02.259240   66841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:01.208076   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:03.208339   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:01.954153   68084 pod_ready.go:103] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:03.454991   68084 pod_ready.go:93] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:03.455023   68084 pod_ready.go:82] duration metric: took 6.013253793s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:03.455036   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:05.461938   68084 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:04.249082   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:04.748738   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.248398   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:05.749056   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.248693   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:06.748904   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.249145   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:07.749131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.248774   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:08.748444   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:04.630344   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.371149915s)
	I0829 20:27:04.630373   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.371188324s)
	I0829 20:27:04.630410   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.371191825s)
	I0829 20:27:04.630432   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0829 20:27:04.630413   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0829 20:27:04.630379   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0829 20:27:04.630465   66841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.371187188s)
	I0829 20:27:04.630478   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:04.630481   66841 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0829 20:27:04.630561   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 20:27:06.684986   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.054398317s)
	I0829 20:27:06.685019   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0829 20:27:06.685047   66841 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:06.685098   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0829 20:27:05.707657   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:07.708034   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:06.965873   68084 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.965904   68084 pod_ready.go:82] duration metric: took 3.51085868s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.965918   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.976464   68084 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.976489   68084 pod_ready.go:82] duration metric: took 10.562771ms for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.976502   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b4ffx" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.982178   68084 pod_ready.go:93] pod "kube-proxy-b4ffx" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.982197   68084 pod_ready.go:82] duration metric: took 5.687889ms for pod "kube-proxy-b4ffx" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.982205   68084 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.987316   68084 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:06.987333   68084 pod_ready.go:82] duration metric: took 5.122275ms for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:06.987342   68084 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:08.994794   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:11.493940   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
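The pod_ready lines above all trace the same pattern: poll a pod's Ready condition on a short interval until it reports "True" or a 4m0s budget runs out. Below is a minimal Go sketch of that poll loop, not minikube's actual pod_ready.go; the isPodReady helper, the kubectl-based check, and the use of the profile name as a kubectl context are all assumptions made to keep it self-contained.

// pod_ready_sketch.go — a hypothetical stand-in for the poll-until-Ready
// pattern traced in the log: check the pod's Ready condition every two
// seconds until it is "True" or a hard deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// isPodReady is a hypothetical helper: it shells out to kubectl and reads
// the status of the pod's Ready condition via a jsonpath filter.
func isPodReady(ctx, ns, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "-n", ns,
		"get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
	for time.Now().Before(deadline) {
		ready, err := isPodReady("default-k8s-diff-port-145096", "kube-system",
			"metrics-server-6867b74b74-5kk6q")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}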
	I0829 20:27:09.248746   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:09.748722   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.249074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.748647   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.248236   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:11.749057   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.249227   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:12.748688   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.249248   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:13.749298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:10.365120   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.679993065s)
	I0829 20:27:10.365150   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0829 20:27:10.365182   66841 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:10.365256   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0829 20:27:12.122371   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.757087653s)
	I0829 20:27:12.122409   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0829 20:27:12.122434   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:12.122564   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 20:27:13.575108   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.45251018s)
	I0829 20:27:13.575137   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0829 20:27:13.575165   66841 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:13.575210   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 20:27:09.708364   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:11.708491   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:14.207383   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:13.494124   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:15.993564   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:14.249254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:14.748957   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.249229   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.749137   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.248967   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.748254   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.248929   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:17.748339   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.248666   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.748712   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:15.742286   66841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.16705417s)
	I0829 20:27:15.742320   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0829 20:27:15.742348   66841 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:15.742398   66841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0829 20:27:16.391977   66841 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19530-11185/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0829 20:27:16.392017   66841 cache_images.go:123] Successfully loaded all cached images
	I0829 20:27:16.392022   66841 cache_images.go:92] duration metric: took 15.060512795s to LoadCachedImages
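The cache_images.go / crio.go interplay above loads each cached image tarball into the runtime one at a time: a stat decides whether the copy to /var/lib/minikube/images can be skipped, then each tarball is fed to podman load. The Go sketch below is a simplified, local stand-in under stated assumptions: it runs the commands directly rather than over SSH (the real code goes through ssh_runner), and it treats a missing tarball as a skip rather than triggering a copy from the host cache.

// load_images_sketch.go — a simplified local sketch of the sequential
// image-load loop traced above; not minikube's actual implementation.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	images := []string{
		"kube-apiserver_v1.31.0",
		"kube-controller-manager_v1.31.0",
		"kube-scheduler_v1.31.0",
		"kube-proxy_v1.31.0",
		"etcd_3.5.15-0",
		"coredns_v1.11.1",
		"storage-provisioner_v5",
	}
	for _, img := range images {
		tar := filepath.Join("/var/lib/minikube/images", img)
		// The log's `stat -c "%s %y"` step: if the tarball is already on the
		// guest, the copy is skipped; here we just skip missing files.
		if _, err := os.Stat(tar); err != nil {
			fmt.Printf("skipping %s: %v\n", img, err)
			continue
		}
		// The "Loading image:" step from crio.go:275.
		cmd := exec.Command("sudo", "podman", "load", "-i", tar)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Printf("failed to load %s: %v\n", img, err)
			os.Exit(1)
		}
		fmt.Printf("transferred and loaded %s from cache\n", img)
	}
}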
	I0829 20:27:16.392034   66841 kubeadm.go:934] updating node { 192.168.50.214 8443 v1.31.0 crio true true} ...
	I0829 20:27:16.392139   66841 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-397724 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 20:27:16.392203   66841 ssh_runner.go:195] Run: crio config
	I0829 20:27:16.445382   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:27:16.445406   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:27:16.445420   66841 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 20:27:16.445448   66841 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.214 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-397724 NodeName:no-preload-397724 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 20:27:16.445612   66841 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-397724"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 20:27:16.445671   66841 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 20:27:16.456505   66841 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 20:27:16.456560   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 20:27:16.467361   66841 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0829 20:27:16.484700   66841 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 20:27:16.503026   66841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0829 20:27:16.519867   66841 ssh_runner.go:195] Run: grep 192.168.50.214	control-plane.minikube.internal$ /etc/hosts
	I0829 20:27:16.523648   66841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 20:27:16.535642   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:27:16.671027   66841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:27:16.688692   66841 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724 for IP: 192.168.50.214
	I0829 20:27:16.688712   66841 certs.go:194] generating shared ca certs ...
	I0829 20:27:16.688727   66841 certs.go:226] acquiring lock for ca certs: {Name:mkb8471fcf7387f342d41e80fd751f9886e73f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:27:16.688883   66841 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key
	I0829 20:27:16.688944   66841 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key
	I0829 20:27:16.688957   66841 certs.go:256] generating profile certs ...
	I0829 20:27:16.689053   66841 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.key
	I0829 20:27:16.689132   66841 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.key.1f535ae9
	I0829 20:27:16.689182   66841 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.key
	I0829 20:27:16.689360   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem (1338 bytes)
	W0829 20:27:16.689400   66841 certs.go:480] ignoring /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361_empty.pem, impossibly tiny 0 bytes
	I0829 20:27:16.689415   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 20:27:16.689450   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/ca.pem (1078 bytes)
	I0829 20:27:16.689504   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/cert.pem (1123 bytes)
	I0829 20:27:16.689540   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/certs/key.pem (1675 bytes)
	I0829 20:27:16.689596   66841 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem (1708 bytes)
	I0829 20:27:16.690277   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 20:27:16.747582   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 20:27:16.782064   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 20:27:16.816382   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 20:27:16.851548   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 20:27:16.882919   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 20:27:16.907439   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 20:27:16.932392   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 20:27:16.957451   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/certs/18361.pem --> /usr/share/ca-certificates/18361.pem (1338 bytes)
	I0829 20:27:16.982482   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/ssl/certs/183612.pem --> /usr/share/ca-certificates/183612.pem (1708 bytes)
	I0829 20:27:17.006032   66841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 20:27:17.030052   66841 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 20:27:17.047792   66841 ssh_runner.go:195] Run: openssl version
	I0829 20:27:17.053922   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183612.pem && ln -fs /usr/share/ca-certificates/183612.pem /etc/ssl/certs/183612.pem"
	I0829 20:27:17.065219   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.069592   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 19:13 /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.069647   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183612.pem
	I0829 20:27:17.075853   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183612.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 20:27:17.086727   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 20:27:17.097935   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.102198   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.102252   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 20:27:17.108031   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 20:27:17.119868   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18361.pem && ln -fs /usr/share/ca-certificates/18361.pem /etc/ssl/certs/18361.pem"
	I0829 20:27:17.131513   66841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.136434   66841 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 19:13 /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.136497   66841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18361.pem
	I0829 20:27:17.142219   66841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18361.pem /etc/ssl/certs/51391683.0"
	I0829 20:27:17.153448   66841 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 20:27:17.158375   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 20:27:17.165156   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 20:27:17.170927   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 20:27:17.176669   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 20:27:17.182293   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 20:27:17.187936   66841 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
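Two openssl idioms recur in the cert-setup block above: installing a CA into the system trust store under its subject hash (the `ln -fs <pem> /etc/ssl/certs/<hash>.0` one-liners), and `-checkend 86400`, which exits non-zero if a certificate expires within the next 86400 seconds (24 hours), so a failing check triggers regeneration. The Go sketch below reproduces both patterns under stated assumptions; it is an illustration, not minikube's certs.go.

// cert_trust_sketch.go — a minimal sketch of the two openssl patterns
// traced in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log

	// `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "hash failed:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// ln -fs <pem> /etc/ssl/certs/<hash>.0, as in the log's bash one-liner.
	if err := exec.Command("sudo", "ln", "-fs", pem, link).Run(); err != nil {
		fmt.Fprintln(os.Stderr, "symlink failed:", err)
		os.Exit(1)
	}

	// -checkend 86400: succeeds only if the cert stays valid for >= one day.
	if err := exec.Command("openssl", "x509", "-noout", "-in",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"-checkend", "86400").Run(); err != nil {
		fmt.Println("certificate expires within 24h; would regenerate")
		return
	}
	fmt.Println("certificate valid for at least 24h; skipping regeneration")
}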
	I0829 20:27:17.193572   66841 kubeadm.go:392] StartCluster: {Name:no-preload-397724 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-397724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 20:27:17.193682   66841 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 20:27:17.193754   66841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:27:17.238327   66841 cri.go:89] found id: ""
	I0829 20:27:17.238392   66841 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 20:27:17.248923   66841 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 20:27:17.248943   66841 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 20:27:17.248984   66841 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 20:27:17.263143   66841 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 20:27:17.264260   66841 kubeconfig.go:125] found "no-preload-397724" server: "https://192.168.50.214:8443"
	I0829 20:27:17.266448   66841 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 20:27:17.276347   66841 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.214
	I0829 20:27:17.276378   66841 kubeadm.go:1160] stopping kube-system containers ...
	I0829 20:27:17.276389   66841 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 20:27:17.276440   66841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 20:27:17.311409   66841 cri.go:89] found id: ""
	I0829 20:27:17.311476   66841 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 20:27:17.329204   66841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:27:17.339063   66841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:27:17.339079   66841 kubeadm.go:157] found existing configuration files:
	
	I0829 20:27:17.339118   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:27:17.348268   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:27:17.348324   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:27:17.357596   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:27:17.366504   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:27:17.366575   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:27:17.376068   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:27:17.385156   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:27:17.385220   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:27:17.394890   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:27:17.404213   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:27:17.404283   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 20:27:17.413669   66841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:27:17.423307   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:17.536003   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:17.990605   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.217809   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:18.297100   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
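The five commands above are the restartPrimaryControlPlane path: rather than a full `kubeadm init`, each init phase (certs, kubeconfig, kubelet-start, control-plane, etcd) is run separately against the generated /var/tmp/minikube/kubeadm.yaml, with `init phase addon all` deferred until the apiserver is healthy. A minimal Go sketch of that phased sequence follows; it is an illustration of the pattern, not minikube's actual driver code.

// kubeadm_phases_sketch.go — a minimal sketch of the phased restart the
// log shows: run each kubeadm init phase in order against one config file.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
		// "init phase addon all" runs later, once /healthz returns 200.
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		fmt.Println("running: kubeadm", args)
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "phase failed:", err)
			os.Exit(1)
		}
	}
}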
	I0829 20:27:18.421185   66841 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:27:18.421283   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:18.922043   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:16.209618   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:18.707544   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:17.993609   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:19.994469   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:19.248924   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.748958   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.248851   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:20.748547   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.248298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:21.748802   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.248680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:22.748271   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.248491   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:23.748803   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.422030   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:19.442023   66841 api_server.go:72] duration metric: took 1.020839747s to wait for apiserver process to appear ...
	I0829 20:27:19.442047   66841 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:27:19.442070   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.444156   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:27:22.444192   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:27:22.444211   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.466228   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 20:27:22.466258   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 20:27:22.942835   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:22.949338   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:27:22.949360   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:27:23.443069   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:23.447845   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 20:27:23.447876   66841 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 20:27:23.942372   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:27:23.946517   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I0829 20:27:23.953497   66841 api_server.go:141] control plane version: v1.31.0
	I0829 20:27:23.953522   66841 api_server.go:131] duration metric: took 4.511467637s to wait for apiserver health ...
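The healthz trace above shows the wait loop tolerating two transient states before success: 403 while the apiserver still rejects anonymous requests (RBAC bootstrap roles not yet created), then 500 while individual poststarthooks ([-]poststarthook/rbac/bootstrap-roles and friends) are still failing, and finally 200. Below is a minimal Go sketch of that retry pattern, not minikube's actual api_server.go; the InsecureSkipVerify transport is an assumption made to keep the sketch self-contained, where the real client trusts the cluster CA.

// healthz_wait_sketch.go — poll /healthz every 500ms, treating any
// non-200 answer as "not ready yet", until 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.214:8443/healthz")
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403 and 500 both land here and simply trigger another poll.
			fmt.Println("healthz returned", code, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}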
	I0829 20:27:23.953530   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:27:23.953536   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:27:23.955180   66841 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:27:23.956396   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:27:23.969429   66841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 20:27:24.000989   66841 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:27:24.014200   66841 system_pods.go:59] 8 kube-system pods found
	I0829 20:27:24.014233   66841 system_pods.go:61] "coredns-6f6b679f8f-g7xxs" [f0148527-2146-4153-aa20-5ac97b664027] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:27:24.014240   66841 system_pods.go:61] "etcd-no-preload-397724" [f04b5ee4-f439-470a-b298-1a9ed569db70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 20:27:24.014248   66841 system_pods.go:61] "kube-apiserver-no-preload-397724" [2328f327-1744-4785-9266-3f992b977ef8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 20:27:24.014254   66841 system_pods.go:61] "kube-controller-manager-no-preload-397724" [0e63f04d-8627-45e9-ac80-70a0fe63f5db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 20:27:24.014260   66841 system_pods.go:61] "kube-proxy-57kbt" [9f85ce17-85a0-4a52-bdaf-4e3aee4d1a98] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 20:27:24.014267   66841 system_pods.go:61] "kube-scheduler-no-preload-397724" [106821c6-2444-470a-bac1-78838c0b1982] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 20:27:24.014273   66841 system_pods.go:61] "metrics-server-6867b74b74-668dg" [e3f3ab24-7777-40b0-a54c-00a294e7e68e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:27:24.014280   66841 system_pods.go:61] "storage-provisioner" [146bd02a-8f50-4d19-a188-4adc2bcc0a43] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 20:27:24.014288   66841 system_pods.go:74] duration metric: took 13.275941ms to wait for pod list to return data ...
	I0829 20:27:24.014298   66841 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:27:24.018932   66841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:27:24.018956   66841 node_conditions.go:123] node cpu capacity is 2
	I0829 20:27:24.018966   66841 node_conditions.go:105] duration metric: took 4.661993ms to run NodePressure ...
	I0829 20:27:24.018981   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 20:27:21.207144   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:23.208728   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:22.493988   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:24.494152   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:24.248456   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:24.748347   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.248337   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:25.748905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.248912   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:26.749302   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.249058   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:27.749105   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.248548   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:28.748298   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:24.305237   66841 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 20:27:24.310640   66841 kubeadm.go:739] kubelet initialised
	I0829 20:27:24.310666   66841 kubeadm.go:740] duration metric: took 5.402212ms waiting for restarted kubelet to initialise ...
	I0829 20:27:24.310679   66841 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:27:24.316568   66841 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:26.325035   66841 pod_ready.go:103] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:28.336627   66841 pod_ready.go:103] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:25.706496   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:27.708228   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:26.992949   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:28.993682   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:30.993877   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:29.248994   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:29.749020   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.248983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:30.748247   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:31.249052   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:31.249133   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:31.293442   67607 cri.go:89] found id: ""
	I0829 20:27:31.293466   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.293473   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:31.293479   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:31.293527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:31.333976   67607 cri.go:89] found id: ""
	I0829 20:27:31.333999   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.334006   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:31.334011   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:31.334055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:31.373680   67607 cri.go:89] found id: ""
	I0829 20:27:31.373707   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.373715   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:31.373720   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:31.373766   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:31.407798   67607 cri.go:89] found id: ""
	I0829 20:27:31.407824   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.407832   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:31.407837   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:31.407893   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:31.444409   67607 cri.go:89] found id: ""
	I0829 20:27:31.444437   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.444445   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:31.444451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:31.444512   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:31.479313   67607 cri.go:89] found id: ""
	I0829 20:27:31.479333   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.479341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:31.479347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:31.479403   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:31.516056   67607 cri.go:89] found id: ""
	I0829 20:27:31.516089   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.516100   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:31.516108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:31.516168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:31.555324   67607 cri.go:89] found id: ""
	I0829 20:27:31.555349   67607 logs.go:276] 0 containers: []
	W0829 20:27:31.555357   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:31.555365   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:31.555375   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:31.626397   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:31.626434   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:31.672006   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:31.672038   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:31.724691   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:31.724727   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:31.740283   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:31.740324   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:31.874007   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
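The block above is the diagnostic fallback: after minutes of `pgrep` polling found no apiserver process, the flow lists CRI containers for each control-plane component by name, and when every query returns an empty ID list it gathers the kubelet and CRI-O journals (and dmesg) instead of container logs. A minimal Go sketch of that listing pass follows, under the assumption that crictl is on PATH; it illustrates the pattern rather than reproducing minikube's cri.go/logs.go.

// cri_list_sketch.go — query crictl for each component; an empty ID list
// means the container never came up, so fall back to reading the journals.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	missing := 0
	for _, c := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--name="+c).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			missing++
			continue
		}
		fmt.Printf("%s: %d container(s)\n", c, len(ids))
	}
	if missing == len(components) {
		// Same fallback the log shows: read the journals directly.
		fmt.Println("falling back to: journalctl -u kubelet -n 400 / journalctl -u crio -n 400")
	}
}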
	I0829 20:27:29.824509   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:29.824530   66841 pod_ready.go:82] duration metric: took 5.507939145s for pod "coredns-6f6b679f8f-g7xxs" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:29.824547   66841 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:31.833646   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:30.207213   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:32.706352   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:32.993932   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:35.494511   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:34.374203   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:34.387817   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:34.387888   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:34.423254   67607 cri.go:89] found id: ""
	I0829 20:27:34.423279   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.423286   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:34.423296   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:34.423343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:34.457741   67607 cri.go:89] found id: ""
	I0829 20:27:34.457768   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.457775   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:34.457781   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:34.457827   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:34.498432   67607 cri.go:89] found id: ""
	I0829 20:27:34.498457   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.498464   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:34.498469   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:34.498523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:34.534290   67607 cri.go:89] found id: ""
	I0829 20:27:34.534317   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.534324   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:34.534330   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:34.534380   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:34.570878   67607 cri.go:89] found id: ""
	I0829 20:27:34.570909   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.570919   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:34.570928   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:34.570986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:34.615735   67607 cri.go:89] found id: ""
	I0829 20:27:34.615762   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.615769   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:34.615775   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:34.615824   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:34.656667   67607 cri.go:89] found id: ""
	I0829 20:27:34.656706   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.656721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:34.656730   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:34.656779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:34.708906   67607 cri.go:89] found id: ""
	I0829 20:27:34.708928   67607 logs.go:276] 0 containers: []
	W0829 20:27:34.708937   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:34.708947   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:34.708962   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:34.767382   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:34.767417   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:34.786523   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:34.786574   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:34.872832   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:34.872857   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:34.872871   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:34.954581   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:34.954620   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:37.497810   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:37.511479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:37.511539   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:37.547930   67607 cri.go:89] found id: ""
	I0829 20:27:37.547962   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.547972   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:37.547980   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:37.548035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:37.585281   67607 cri.go:89] found id: ""
	I0829 20:27:37.585304   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.585312   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:37.585318   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:37.585365   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:37.622201   67607 cri.go:89] found id: ""
	I0829 20:27:37.622229   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.622241   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:37.622246   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:37.622295   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:37.657248   67607 cri.go:89] found id: ""
	I0829 20:27:37.657274   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.657281   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:37.657289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:37.657335   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:37.691674   67607 cri.go:89] found id: ""
	I0829 20:27:37.691703   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.691711   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:37.691716   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:37.691764   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:37.729523   67607 cri.go:89] found id: ""
	I0829 20:27:37.729548   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.729557   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:37.729562   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:37.729609   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:37.764601   67607 cri.go:89] found id: ""
	I0829 20:27:37.764629   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.764637   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:37.764643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:37.764705   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:37.799228   67607 cri.go:89] found id: ""
	I0829 20:27:37.799259   67607 logs.go:276] 0 containers: []
	W0829 20:27:37.799270   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:37.799281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:37.799301   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:37.848128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:37.848158   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:37.862610   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:37.862640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:37.936859   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:37.936888   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:37.936903   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:38.013647   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:38.013681   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:34.331889   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:36.332334   66841 pod_ready.go:103] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.329545   66841 pod_ready.go:93] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.329566   66841 pod_ready.go:82] duration metric: took 7.50501178s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.329576   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.333442   66841 pod_ready.go:93] pod "kube-apiserver-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.333458   66841 pod_ready.go:82] duration metric: took 3.876755ms for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.333467   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.336952   66841 pod_ready.go:93] pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.336968   66841 pod_ready.go:82] duration metric: took 3.49531ms for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.336976   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-57kbt" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.340368   66841 pod_ready.go:93] pod "kube-proxy-57kbt" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.340383   66841 pod_ready.go:82] duration metric: took 3.401844ms for pod "kube-proxy-57kbt" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.340396   66841 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.344111   66841 pod_ready.go:93] pod "kube-scheduler-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:27:37.344125   66841 pod_ready.go:82] duration metric: took 3.723924ms for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:37.344132   66841 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" ...
	I0829 20:27:34.708682   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.206876   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:37.997827   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:40.494840   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:40.551395   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:40.568100   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:40.568181   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:40.616582   67607 cri.go:89] found id: ""
	I0829 20:27:40.616611   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.616623   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:40.616631   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:40.616695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:40.690580   67607 cri.go:89] found id: ""
	I0829 20:27:40.690620   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.690631   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:40.690638   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:40.690695   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:40.733624   67607 cri.go:89] found id: ""
	I0829 20:27:40.733653   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.733662   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:40.733670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:40.733733   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:40.767499   67607 cri.go:89] found id: ""
	I0829 20:27:40.767528   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.767538   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:40.767546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:40.767619   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:40.806973   67607 cri.go:89] found id: ""
	I0829 20:27:40.807002   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.807009   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:40.807015   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:40.807079   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:40.842311   67607 cri.go:89] found id: ""
	I0829 20:27:40.842334   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.842341   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:40.842347   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:40.842401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:40.880208   67607 cri.go:89] found id: ""
	I0829 20:27:40.880238   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.880248   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:40.880255   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:40.880309   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:40.918395   67607 cri.go:89] found id: ""
	I0829 20:27:40.918424   67607 logs.go:276] 0 containers: []
	W0829 20:27:40.918435   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:40.918445   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:40.918459   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:40.972396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:40.972437   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:40.986136   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:40.986169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:41.064600   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:41.064623   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:41.064634   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:41.146653   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:41.146687   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:43.687773   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:43.701576   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:43.701645   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:43.737259   67607 cri.go:89] found id: ""
	I0829 20:27:43.737282   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.737289   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:43.737299   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:43.737346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:43.772678   67607 cri.go:89] found id: ""
	I0829 20:27:43.772702   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.772709   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:43.772714   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:43.772776   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:43.806788   67607 cri.go:89] found id: ""
	I0829 20:27:43.806821   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.806831   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:43.806839   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:43.806900   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:39.350484   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:41.352279   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:43.850564   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:39.707977   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:42.207630   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:42.993571   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:44.994696   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:43.841738   67607 cri.go:89] found id: ""
	I0829 20:27:43.841759   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.841767   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:43.841772   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:43.841829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:43.878420   67607 cri.go:89] found id: ""
	I0829 20:27:43.878449   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.878459   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:43.878466   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:43.878527   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:43.914307   67607 cri.go:89] found id: ""
	I0829 20:27:43.914335   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.914345   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:43.914352   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:43.914413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:43.958827   67607 cri.go:89] found id: ""
	I0829 20:27:43.958853   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.958865   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:43.958871   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:43.958935   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:43.997397   67607 cri.go:89] found id: ""
	I0829 20:27:43.997423   67607 logs.go:276] 0 containers: []
	W0829 20:27:43.997432   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:43.997442   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:43.997455   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:44.049245   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:44.049280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:44.063473   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:44.063511   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:44.131628   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:44.131651   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:44.131666   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:44.210826   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:44.210854   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:46.754905   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:46.769531   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:46.769588   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:46.805245   67607 cri.go:89] found id: ""
	I0829 20:27:46.805272   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.805280   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:46.805285   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:46.805338   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:46.843606   67607 cri.go:89] found id: ""
	I0829 20:27:46.843637   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.843646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:46.843654   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:46.843710   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:46.880300   67607 cri.go:89] found id: ""
	I0829 20:27:46.880326   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.880333   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:46.880338   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:46.880387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:46.923537   67607 cri.go:89] found id: ""
	I0829 20:27:46.923562   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.923569   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:46.923574   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:46.923620   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:46.957774   67607 cri.go:89] found id: ""
	I0829 20:27:46.957806   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.957817   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:46.957826   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:46.957887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:46.996972   67607 cri.go:89] found id: ""
	I0829 20:27:46.996995   67607 logs.go:276] 0 containers: []
	W0829 20:27:46.997005   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:46.997013   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:46.997056   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:47.030560   67607 cri.go:89] found id: ""
	I0829 20:27:47.030588   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.030606   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:47.030612   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:47.030665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:47.068654   67607 cri.go:89] found id: ""
	I0829 20:27:47.068678   67607 logs.go:276] 0 containers: []
	W0829 20:27:47.068686   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:47.068694   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:47.068706   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:47.082335   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:47.082367   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:47.162792   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:47.162817   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:47.162829   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:47.241456   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:47.241491   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:47.282249   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:47.282274   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:45.850673   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:47.850836   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:44.707198   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:46.707222   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.207556   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:46.995302   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.498812   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:49.836268   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:49.850415   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:49.850491   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:49.887816   67607 cri.go:89] found id: ""
	I0829 20:27:49.887843   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.887851   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:49.887856   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:49.887916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:49.923701   67607 cri.go:89] found id: ""
	I0829 20:27:49.923735   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.923745   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:49.923755   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:49.923818   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:49.958197   67607 cri.go:89] found id: ""
	I0829 20:27:49.958225   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.958236   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:49.958244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:49.958313   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:49.995333   67607 cri.go:89] found id: ""
	I0829 20:27:49.995361   67607 logs.go:276] 0 containers: []
	W0829 20:27:49.995373   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:49.995380   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:49.995439   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:50.034345   67607 cri.go:89] found id: ""
	I0829 20:27:50.034375   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.034382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:50.034387   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:50.034438   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:50.070324   67607 cri.go:89] found id: ""
	I0829 20:27:50.070355   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.070365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:50.070374   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:50.070434   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:50.107301   67607 cri.go:89] found id: ""
	I0829 20:27:50.107326   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.107334   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:50.107340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:50.107400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:50.144748   67607 cri.go:89] found id: ""
	I0829 20:27:50.144778   67607 logs.go:276] 0 containers: []
	W0829 20:27:50.144788   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:50.144800   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:50.144816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:50.183576   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:50.183606   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:50.236716   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:50.236750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:50.251589   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:50.251612   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:50.317816   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:50.317840   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:50.317855   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:52.894572   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:52.908081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:52.908149   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:52.945272   67607 cri.go:89] found id: ""
	I0829 20:27:52.945299   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.945309   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:52.945317   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:52.945377   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:52.980237   67607 cri.go:89] found id: ""
	I0829 20:27:52.980262   67607 logs.go:276] 0 containers: []
	W0829 20:27:52.980270   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:52.980275   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:52.980325   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:53.017894   67607 cri.go:89] found id: ""
	I0829 20:27:53.017922   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.017929   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:53.017935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:53.017991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:53.052577   67607 cri.go:89] found id: ""
	I0829 20:27:53.052603   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.052611   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:53.052616   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:53.052667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:53.093414   67607 cri.go:89] found id: ""
	I0829 20:27:53.093444   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.093455   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:53.093462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:53.093523   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:53.130794   67607 cri.go:89] found id: ""
	I0829 20:27:53.130825   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.130837   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:53.130845   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:53.130902   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:53.163793   67607 cri.go:89] found id: ""
	I0829 20:27:53.163819   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.163827   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:53.163832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:53.163882   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:53.204824   67607 cri.go:89] found id: ""
	I0829 20:27:53.204852   67607 logs.go:276] 0 containers: []
	W0829 20:27:53.204862   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:53.204872   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:53.204885   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:53.243411   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:53.243440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:53.296611   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:53.296642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:53.310909   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:53.310943   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:53.385768   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:53.385790   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:53.385801   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:49.851712   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:52.350295   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:51.711115   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:54.207340   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:51.993943   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:53.996334   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.494226   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:55.966801   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:55.980852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:55.980933   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:56.017682   67607 cri.go:89] found id: ""
	I0829 20:27:56.017707   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.017716   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:56.017722   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:56.017767   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:56.051556   67607 cri.go:89] found id: ""
	I0829 20:27:56.051584   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.051594   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:56.051600   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:56.051665   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:56.095301   67607 cri.go:89] found id: ""
	I0829 20:27:56.095330   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.095340   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:56.095348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:56.095408   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:56.131161   67607 cri.go:89] found id: ""
	I0829 20:27:56.131195   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.131205   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:56.131213   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:56.131269   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:56.166611   67607 cri.go:89] found id: ""
	I0829 20:27:56.166637   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.166645   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:56.166651   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:56.166713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:56.202818   67607 cri.go:89] found id: ""
	I0829 20:27:56.202846   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.202856   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:56.202864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:56.202923   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:56.237855   67607 cri.go:89] found id: ""
	I0829 20:27:56.237883   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.237891   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:56.237897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:56.237955   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:56.272402   67607 cri.go:89] found id: ""
	I0829 20:27:56.272426   67607 logs.go:276] 0 containers: []
	W0829 20:27:56.272433   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:56.272441   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:56.272452   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:56.351628   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:56.351653   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:56.389525   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:56.389559   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:56.444952   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:56.444989   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:56.459731   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:56.459759   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:56.536888   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:27:54.350358   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.350727   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.352884   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:56.208050   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.706897   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:58.993153   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:00.993544   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:27:59.037744   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:27:59.051868   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:27:59.051938   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:27:59.087436   67607 cri.go:89] found id: ""
	I0829 20:27:59.087461   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.087467   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:27:59.087474   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:27:59.087531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:27:59.123729   67607 cri.go:89] found id: ""
	I0829 20:27:59.123757   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.123765   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:27:59.123771   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:27:59.123825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:27:59.168649   67607 cri.go:89] found id: ""
	I0829 20:27:59.168682   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.168690   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:27:59.168696   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:27:59.168753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:27:59.209770   67607 cri.go:89] found id: ""
	I0829 20:27:59.209791   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.209803   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:27:59.209808   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:27:59.209854   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:27:59.248358   67607 cri.go:89] found id: ""
	I0829 20:27:59.248384   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.248392   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:27:59.248398   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:27:59.248445   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:27:59.281770   67607 cri.go:89] found id: ""
	I0829 20:27:59.281797   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.281805   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:27:59.281811   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:27:59.281870   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:27:59.317255   67607 cri.go:89] found id: ""
	I0829 20:27:59.317285   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.317295   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:27:59.317302   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:27:59.317363   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:27:59.354301   67607 cri.go:89] found id: ""
	I0829 20:27:59.354324   67607 logs.go:276] 0 containers: []
	W0829 20:27:59.354332   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:27:59.354339   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:27:59.354352   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:27:59.438346   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:27:59.438382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:27:59.482482   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:27:59.482513   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:27:59.540926   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:27:59.540961   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:27:59.555221   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:27:59.555258   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:27:59.622114   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:02.123276   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:02.137435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:02.137502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:02.176310   67607 cri.go:89] found id: ""
	I0829 20:28:02.176340   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.176347   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:02.176355   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:02.176414   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:02.216511   67607 cri.go:89] found id: ""
	I0829 20:28:02.216555   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.216562   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:02.216574   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:02.216625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:02.260116   67607 cri.go:89] found id: ""
	I0829 20:28:02.260149   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.260158   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:02.260164   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:02.260225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:02.301550   67607 cri.go:89] found id: ""
	I0829 20:28:02.301584   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.301600   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:02.301608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:02.301692   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:02.335916   67607 cri.go:89] found id: ""
	I0829 20:28:02.335948   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.335959   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:02.335967   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:02.336033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:02.372479   67607 cri.go:89] found id: ""
	I0829 20:28:02.372507   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.372515   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:02.372522   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:02.372584   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:02.406683   67607 cri.go:89] found id: ""
	I0829 20:28:02.406713   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.406721   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:02.406727   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:02.406774   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:02.443130   67607 cri.go:89] found id: ""
	I0829 20:28:02.443156   67607 logs.go:276] 0 containers: []
	W0829 20:28:02.443164   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:02.443173   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:02.443185   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:02.485747   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:02.485777   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:02.540106   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:02.540143   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:02.556158   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:02.556188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:02.637870   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:02.637900   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:02.637915   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:00.851416   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:03.351248   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:00.707716   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:02.708204   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:02.994108   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:04.994988   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:05.220330   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:05.233932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:05.233994   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:05.269046   67607 cri.go:89] found id: ""
	I0829 20:28:05.269072   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.269081   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:05.269087   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:05.269134   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:05.303963   67607 cri.go:89] found id: ""
	I0829 20:28:05.303989   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.303999   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:05.304006   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:05.304065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:05.340943   67607 cri.go:89] found id: ""
	I0829 20:28:05.340975   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.340985   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:05.340992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:05.341061   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:05.379551   67607 cri.go:89] found id: ""
	I0829 20:28:05.379582   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.379593   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:05.379601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:05.379659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:05.414229   67607 cri.go:89] found id: ""
	I0829 20:28:05.414256   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.414267   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:05.414274   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:05.414339   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:05.450212   67607 cri.go:89] found id: ""
	I0829 20:28:05.450241   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.450251   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:05.450258   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:05.450318   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:05.487415   67607 cri.go:89] found id: ""
	I0829 20:28:05.487451   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.487463   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:05.487470   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:05.487529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:05.521347   67607 cri.go:89] found id: ""
	I0829 20:28:05.521370   67607 logs.go:276] 0 containers: []
	W0829 20:28:05.521383   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:05.521390   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:05.521402   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:05.572317   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:05.572350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:05.585651   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:05.585680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:05.653929   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:05.653950   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:05.653969   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:05.732843   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:05.732873   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:08.281983   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:08.295104   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:08.295166   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:08.328570   67607 cri.go:89] found id: ""
	I0829 20:28:08.328596   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.328605   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:08.328613   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:08.328684   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:08.363567   67607 cri.go:89] found id: ""
	I0829 20:28:08.363595   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.363605   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:08.363613   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:08.363672   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:08.399619   67607 cri.go:89] found id: ""
	I0829 20:28:08.399645   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.399653   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:08.399659   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:08.399707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:08.439252   67607 cri.go:89] found id: ""
	I0829 20:28:08.439283   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.439294   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:08.439301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:08.439357   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:08.477730   67607 cri.go:89] found id: ""
	I0829 20:28:08.477754   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.477762   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:08.477768   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:08.477834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:08.522045   67607 cri.go:89] found id: ""
	I0829 20:28:08.522066   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.522073   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:08.522079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:08.522137   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:08.560400   67607 cri.go:89] found id: ""
	I0829 20:28:08.560427   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.560434   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:08.560441   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:08.560504   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:08.599111   67607 cri.go:89] found id: ""
	I0829 20:28:08.599140   67607 logs.go:276] 0 containers: []
	W0829 20:28:08.599150   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:08.599161   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:08.599175   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:08.681451   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:08.681487   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:08.722800   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:08.722835   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:08.779058   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:08.779089   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:08.796940   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:08.796963   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:28:05.852245   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:08.351402   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:04.708669   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:07.207124   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:07.493431   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:09.493794   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	W0829 20:28:08.868296   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:11.369316   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:11.384150   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:11.384225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:11.418452   67607 cri.go:89] found id: ""
	I0829 20:28:11.418480   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.418488   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:11.418494   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:11.418555   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:11.451359   67607 cri.go:89] found id: ""
	I0829 20:28:11.451389   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.451400   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:11.451408   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:11.451481   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:11.488408   67607 cri.go:89] found id: ""
	I0829 20:28:11.488436   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.488446   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:11.488453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:11.488510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:11.528311   67607 cri.go:89] found id: ""
	I0829 20:28:11.528340   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.528351   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:11.528359   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:11.528412   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:11.571345   67607 cri.go:89] found id: ""
	I0829 20:28:11.571372   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.571382   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:11.571389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:11.571454   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:11.606812   67607 cri.go:89] found id: ""
	I0829 20:28:11.606839   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.606850   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:11.606857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:11.606918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:11.652687   67607 cri.go:89] found id: ""
	I0829 20:28:11.652710   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.652717   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:11.652722   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:11.652781   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:11.687583   67607 cri.go:89] found id: ""
	I0829 20:28:11.687628   67607 logs.go:276] 0 containers: []
	W0829 20:28:11.687645   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:11.687655   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:11.687673   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:11.727052   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:11.727086   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:11.779116   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:11.779155   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:11.792911   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:11.792949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:11.868415   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:11.868443   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:11.868461   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:10.850225   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:13.351638   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:09.707347   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:11.709556   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.206996   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:11.994187   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.494457   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:14.447886   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:14.462144   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:14.462221   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:14.499160   67607 cri.go:89] found id: ""
	I0829 20:28:14.499185   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.499193   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:14.499200   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:14.499258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:14.545736   67607 cri.go:89] found id: ""
	I0829 20:28:14.545764   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.545774   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:14.545780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:14.545844   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:14.583626   67607 cri.go:89] found id: ""
	I0829 20:28:14.583664   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.583674   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:14.583682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:14.583744   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:14.619876   67607 cri.go:89] found id: ""
	I0829 20:28:14.619909   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.619917   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:14.619923   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:14.619975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:14.655750   67607 cri.go:89] found id: ""
	I0829 20:28:14.655778   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.655786   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:14.655791   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:14.655848   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:14.690759   67607 cri.go:89] found id: ""
	I0829 20:28:14.690785   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.690795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:14.690800   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:14.690850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:14.727238   67607 cri.go:89] found id: ""
	I0829 20:28:14.727269   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.727282   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:14.727289   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:14.727344   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:14.765962   67607 cri.go:89] found id: ""
	I0829 20:28:14.765996   67607 logs.go:276] 0 containers: []
	W0829 20:28:14.766006   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:14.766017   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:14.766033   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:14.835749   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:14.835779   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:14.835797   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:14.914075   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:14.914112   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:14.952684   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:14.952712   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:15.004598   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:15.004635   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:17.518949   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:17.532175   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:17.532250   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:17.569943   67607 cri.go:89] found id: ""
	I0829 20:28:17.569971   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.569979   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:17.569985   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:17.570044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:17.605472   67607 cri.go:89] found id: ""
	I0829 20:28:17.605502   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.605510   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:17.605515   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:17.605566   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:17.641568   67607 cri.go:89] found id: ""
	I0829 20:28:17.641593   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.641603   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:17.641610   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:17.641669   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:17.680870   67607 cri.go:89] found id: ""
	I0829 20:28:17.680895   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.680905   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:17.680916   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:17.680981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:17.723546   67607 cri.go:89] found id: ""
	I0829 20:28:17.723576   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.723587   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:17.723594   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:17.723659   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:17.757934   67607 cri.go:89] found id: ""
	I0829 20:28:17.757962   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.757973   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:17.757980   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:17.758028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:17.792641   67607 cri.go:89] found id: ""
	I0829 20:28:17.792670   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.792679   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:17.792685   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:17.792738   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:17.830776   67607 cri.go:89] found id: ""
	I0829 20:28:17.830800   67607 logs.go:276] 0 containers: []
	W0829 20:28:17.830807   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:17.830815   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:17.830825   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:17.886331   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:17.886377   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:17.900111   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:17.900135   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:17.969538   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:17.969563   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:17.969577   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:18.050609   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:18.050649   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:15.850497   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:17.851663   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:16.707415   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:19.207313   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:16.994325   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:19.494247   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:20.590686   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:20.605066   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:20.605121   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:20.646028   67607 cri.go:89] found id: ""
	I0829 20:28:20.646058   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.646074   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:20.646082   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:20.646143   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:20.683433   67607 cri.go:89] found id: ""
	I0829 20:28:20.683469   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.683479   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:20.683487   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:20.683567   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:20.722737   67607 cri.go:89] found id: ""
	I0829 20:28:20.722765   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.722775   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:20.722782   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:20.722841   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:20.759777   67607 cri.go:89] found id: ""
	I0829 20:28:20.759800   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.759807   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:20.759812   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:20.759864   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:20.799142   67607 cri.go:89] found id: ""
	I0829 20:28:20.799164   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.799170   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:20.799176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:20.799223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:20.838331   67607 cri.go:89] found id: ""
	I0829 20:28:20.838357   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.838365   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:20.838371   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:20.838427   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:20.878066   67607 cri.go:89] found id: ""
	I0829 20:28:20.878099   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.878110   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:20.878117   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:20.878175   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:20.928940   67607 cri.go:89] found id: ""
	I0829 20:28:20.928966   67607 logs.go:276] 0 containers: []
	W0829 20:28:20.928975   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:20.928982   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:20.928993   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:20.984435   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:20.984471   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:21.005860   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:21.005900   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:21.084092   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:21.084123   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:21.084138   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:21.165971   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:21.166009   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:23.705033   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:23.718332   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:23.718390   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:23.753594   67607 cri.go:89] found id: ""
	I0829 20:28:23.753625   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.753635   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:23.753650   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:23.753715   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:23.791840   67607 cri.go:89] found id: ""
	I0829 20:28:23.791864   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.791872   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:23.791878   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:23.791930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:20.350028   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:22.350487   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:21.207839   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.707197   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:21.993965   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.994879   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.493735   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:23.837815   67607 cri.go:89] found id: ""
	I0829 20:28:23.837839   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.837846   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:23.837851   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:23.837908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:23.873155   67607 cri.go:89] found id: ""
	I0829 20:28:23.873184   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.873194   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:23.873201   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:23.873265   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:23.908728   67607 cri.go:89] found id: ""
	I0829 20:28:23.908757   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.908768   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:23.908774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:23.908834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:23.946286   67607 cri.go:89] found id: ""
	I0829 20:28:23.946310   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.946320   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:23.946328   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:23.946392   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:23.983078   67607 cri.go:89] found id: ""
	I0829 20:28:23.983105   67607 logs.go:276] 0 containers: []
	W0829 20:28:23.983115   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:23.983129   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:23.983190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:24.020601   67607 cri.go:89] found id: ""
	I0829 20:28:24.020634   67607 logs.go:276] 0 containers: []
	W0829 20:28:24.020644   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:24.020654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:24.020669   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:24.034438   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:24.034463   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:24.103209   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:24.103230   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:24.103243   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:24.182977   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:24.183016   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:24.224743   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:24.224834   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:26.781507   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:26.794301   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:26.794387   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:26.827218   67607 cri.go:89] found id: ""
	I0829 20:28:26.827243   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.827250   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:26.827257   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:26.827303   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:26.862643   67607 cri.go:89] found id: ""
	I0829 20:28:26.862673   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.862685   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:26.862693   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:26.862743   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:26.898127   67607 cri.go:89] found id: ""
	I0829 20:28:26.898159   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.898169   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:26.898177   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:26.898237   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:26.932119   67607 cri.go:89] found id: ""
	I0829 20:28:26.932146   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.932167   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:26.932174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:26.932241   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:26.966380   67607 cri.go:89] found id: ""
	I0829 20:28:26.966413   67607 logs.go:276] 0 containers: []
	W0829 20:28:26.966421   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:26.966427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:26.966478   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:27.004350   67607 cri.go:89] found id: ""
	I0829 20:28:27.004372   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.004379   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:27.004386   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:27.004436   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:27.041171   67607 cri.go:89] found id: ""
	I0829 20:28:27.041199   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.041206   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:27.041212   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:27.041257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:27.073993   67607 cri.go:89] found id: ""
	I0829 20:28:27.074031   67607 logs.go:276] 0 containers: []
	W0829 20:28:27.074041   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:27.074053   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:27.074066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:27.148169   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:27.148199   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:27.148214   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:27.227174   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:27.227212   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:27.267180   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:27.267230   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:27.319034   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:27.319066   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:24.350754   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.850582   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:26.207974   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:28.707820   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:28.494090   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:30.994157   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:29.833497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:29.846883   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:29.846951   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:29.884133   67607 cri.go:89] found id: ""
	I0829 20:28:29.884163   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.884175   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:29.884182   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:29.884247   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:29.917594   67607 cri.go:89] found id: ""
	I0829 20:28:29.917618   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.917628   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:29.917636   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:29.917696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:29.952537   67607 cri.go:89] found id: ""
	I0829 20:28:29.952568   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.952576   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:29.952582   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:29.952630   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:29.988410   67607 cri.go:89] found id: ""
	I0829 20:28:29.988441   67607 logs.go:276] 0 containers: []
	W0829 20:28:29.988448   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:29.988454   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:29.988511   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:30.026761   67607 cri.go:89] found id: ""
	I0829 20:28:30.026788   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.026796   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:30.026802   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:30.026861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:30.063010   67607 cri.go:89] found id: ""
	I0829 20:28:30.063037   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.063046   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:30.063054   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:30.063109   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:30.098067   67607 cri.go:89] found id: ""
	I0829 20:28:30.098093   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.098101   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:30.098107   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:30.098161   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:30.132887   67607 cri.go:89] found id: ""
	I0829 20:28:30.132914   67607 logs.go:276] 0 containers: []
	W0829 20:28:30.132921   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:30.132928   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:30.132940   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:30.184955   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:30.184990   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:30.198966   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:30.199004   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:30.268950   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:30.268977   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:30.268991   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:30.354222   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:30.354260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:32.896554   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:32.911188   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:32.911271   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:32.945726   67607 cri.go:89] found id: ""
	I0829 20:28:32.945750   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.945758   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:32.945773   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:32.945829   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:32.980234   67607 cri.go:89] found id: ""
	I0829 20:28:32.980267   67607 logs.go:276] 0 containers: []
	W0829 20:28:32.980275   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:32.980281   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:32.980329   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:33.019031   67607 cri.go:89] found id: ""
	I0829 20:28:33.019063   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.019071   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:33.019076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:33.019126   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:33.056290   67607 cri.go:89] found id: ""
	I0829 20:28:33.056314   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.056322   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:33.056327   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:33.056391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:33.090038   67607 cri.go:89] found id: ""
	I0829 20:28:33.090068   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.090078   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:33.090086   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:33.090152   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:33.125742   67607 cri.go:89] found id: ""
	I0829 20:28:33.125774   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.125782   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:33.125787   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:33.125849   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:33.159019   67607 cri.go:89] found id: ""
	I0829 20:28:33.159047   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.159058   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:33.159065   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:33.159125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:33.197900   67607 cri.go:89] found id: ""
	I0829 20:28:33.197925   67607 logs.go:276] 0 containers: []
	W0829 20:28:33.197933   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:33.197941   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:33.197955   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:33.250010   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:33.250040   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:33.263348   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:33.263374   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:33.342037   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:33.342065   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:33.342082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:33.423324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:33.423361   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
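
The cycle above is minikube's log collector enumerating CRI containers for each control-plane component ("listing CRI containers in root" followed by sudo crictl ps -a --quiet --name=<component>) and, finding none, falling back to kubelet journal, dmesg, describe-nodes, CRI-O journal, and container-status output. Below is a minimal Go sketch of that per-component lookup, assuming only that crictl and sudo are available on the node; the helper name is illustrative, not minikube's own:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>` and
// returns the container IDs it prints, one per line. An empty slice
// corresponds to the `found id: ""` / `0 containers: []` pairs in the log.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("listing %s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}

With no control plane running, every component yields an empty list, which is why each pass ends in the journalctl/dmesg fallbacks rather than container logs.
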
	I0829 20:28:29.350275   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:31.350994   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:33.850866   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:30.713472   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:33.207271   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:32.995169   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.493980   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.963734   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:35.978648   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:35.978713   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:36.015326   67607 cri.go:89] found id: ""
	I0829 20:28:36.015350   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.015358   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:36.015364   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:36.015411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:36.050840   67607 cri.go:89] found id: ""
	I0829 20:28:36.050869   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.050879   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:36.050886   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:36.050947   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:36.084048   67607 cri.go:89] found id: ""
	I0829 20:28:36.084076   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.084084   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:36.084090   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:36.084138   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:36.118655   67607 cri.go:89] found id: ""
	I0829 20:28:36.118682   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.118693   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:36.118702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:36.118762   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:36.153879   67607 cri.go:89] found id: ""
	I0829 20:28:36.153908   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.153918   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:36.153926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:36.153988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:36.199834   67607 cri.go:89] found id: ""
	I0829 20:28:36.199858   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.199866   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:36.199872   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:36.199927   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:36.238098   67607 cri.go:89] found id: ""
	I0829 20:28:36.238129   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.238139   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:36.238146   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:36.238208   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:36.272091   67607 cri.go:89] found id: ""
	I0829 20:28:36.272124   67607 logs.go:276] 0 containers: []
	W0829 20:28:36.272135   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:36.272146   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:36.272162   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:36.338478   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:36.338498   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:36.338510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:36.418637   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:36.418671   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:36.458167   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:36.458194   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:36.508592   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:36.508630   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
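
Every "describe nodes" attempt in this stretch fails the same way: the connection to localhost:8443 is refused, which is consistent with the empty kube-apiserver listings above, since nothing is bound to the apiserver port. A quick TCP probe reproduces the diagnosis; this is a hedged sketch, not part of the test harness, with the host and port taken from the error message itself:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the endpoint named in the kubectl error above. While no
	// kube-apiserver container is running, this dial is refused.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
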
	I0829 20:28:36.351066   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:38.849684   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:35.706813   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:37.708058   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:38.003178   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:40.493065   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
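
The interleaved pod_ready lines come from three concurrent test runs (process IDs 66841, 66989, and 68084), each polling its metrics-server pod every couple of seconds and finding the Ready condition False. In essence the poll is the standard PodReady condition check; here is a minimal sketch against the k8s.io/api types, which are assumed to be on the module path (minikube's own pod_ready.go helper differs in detail):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True,
// the condition the pod_ready.go polls above keep finding False.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse},
		},
	}}
	fmt.Println("Ready:", isPodReady(pod)) // false, matching "Ready":"False"
}

A pod stays in this state until all of its containers pass their readiness probes, which is why the poll keeps repeating throughout the section.
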
	I0829 20:28:39.022668   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:39.035897   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:39.035971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:39.071155   67607 cri.go:89] found id: ""
	I0829 20:28:39.071185   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.071196   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:39.071203   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:39.071258   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:39.104135   67607 cri.go:89] found id: ""
	I0829 20:28:39.104177   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.104188   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:39.104206   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:39.104266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:39.138301   67607 cri.go:89] found id: ""
	I0829 20:28:39.138329   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.138339   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:39.138346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:39.138404   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:39.172674   67607 cri.go:89] found id: ""
	I0829 20:28:39.172700   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.172708   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:39.172719   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:39.172779   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:39.209810   67607 cri.go:89] found id: ""
	I0829 20:28:39.209836   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.209845   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:39.209852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:39.209915   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:39.248692   67607 cri.go:89] found id: ""
	I0829 20:28:39.248715   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.248722   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:39.248728   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:39.248798   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:39.284303   67607 cri.go:89] found id: ""
	I0829 20:28:39.284333   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.284343   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:39.284351   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:39.284401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:39.321346   67607 cri.go:89] found id: ""
	I0829 20:28:39.321375   67607 logs.go:276] 0 containers: []
	W0829 20:28:39.321386   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:39.321396   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:39.321410   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:39.334678   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:39.334710   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:39.421992   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:39.422014   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:39.422027   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:39.503250   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:39.503280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:39.540623   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:39.540654   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.092131   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:42.105440   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:42.105498   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:42.140994   67607 cri.go:89] found id: ""
	I0829 20:28:42.141024   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.141034   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:42.141042   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:42.141102   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:42.175182   67607 cri.go:89] found id: ""
	I0829 20:28:42.175217   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.175228   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:42.175248   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:42.175319   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:42.209251   67607 cri.go:89] found id: ""
	I0829 20:28:42.209281   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.209291   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:42.209299   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:42.209362   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:42.247944   67607 cri.go:89] found id: ""
	I0829 20:28:42.247970   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.247977   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:42.247983   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:42.248028   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:42.285613   67607 cri.go:89] found id: ""
	I0829 20:28:42.285644   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.285651   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:42.285657   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:42.285722   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:42.319826   67607 cri.go:89] found id: ""
	I0829 20:28:42.319851   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.319858   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:42.319864   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:42.319928   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:42.357150   67607 cri.go:89] found id: ""
	I0829 20:28:42.357173   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.357182   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:42.357189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:42.357243   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:42.392150   67607 cri.go:89] found id: ""
	I0829 20:28:42.392170   67607 logs.go:276] 0 containers: []
	W0829 20:28:42.392178   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:42.392185   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:42.392197   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:42.469240   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:42.469271   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:42.469286   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:42.549165   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:42.549198   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:42.591900   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:42.591930   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:42.642593   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:42.642625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:40.851544   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:43.350420   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:39.708341   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:42.206888   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:44.207934   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:42.494791   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:44.992992   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:45.157092   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:45.170832   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:45.170916   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:45.207210   67607 cri.go:89] found id: ""
	I0829 20:28:45.207235   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.207244   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:45.207251   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:45.207308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:45.245321   67607 cri.go:89] found id: ""
	I0829 20:28:45.245352   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.245362   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:45.245379   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:45.245448   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:45.280326   67607 cri.go:89] found id: ""
	I0829 20:28:45.280369   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.280381   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:45.280389   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:45.280451   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:45.318294   67607 cri.go:89] found id: ""
	I0829 20:28:45.318322   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.318333   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:45.318340   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:45.318411   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:45.352903   67607 cri.go:89] found id: ""
	I0829 20:28:45.352925   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.352932   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:45.352938   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:45.352990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:45.389251   67607 cri.go:89] found id: ""
	I0829 20:28:45.389273   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.389280   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:45.389286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:45.389340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:45.424348   67607 cri.go:89] found id: ""
	I0829 20:28:45.424385   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.424397   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:45.424404   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:45.424453   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:45.459058   67607 cri.go:89] found id: ""
	I0829 20:28:45.459087   67607 logs.go:276] 0 containers: []
	W0829 20:28:45.459098   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:45.459109   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:45.459124   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:45.510386   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:45.510423   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:45.524896   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:45.524923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:45.593987   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:45.594064   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:45.594082   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:45.668738   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:45.668771   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:48.206497   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:48.219625   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:48.219696   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:48.254936   67607 cri.go:89] found id: ""
	I0829 20:28:48.254959   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.254966   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:48.254971   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:48.255018   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:48.290826   67607 cri.go:89] found id: ""
	I0829 20:28:48.290851   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.290859   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:48.290864   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:48.290910   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:48.327508   67607 cri.go:89] found id: ""
	I0829 20:28:48.327533   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.327540   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:48.327546   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:48.327593   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:48.364492   67607 cri.go:89] found id: ""
	I0829 20:28:48.364517   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.364525   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:48.364530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:48.364580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:48.400035   67607 cri.go:89] found id: ""
	I0829 20:28:48.400062   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.400072   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:48.400079   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:48.400144   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:48.433999   67607 cri.go:89] found id: ""
	I0829 20:28:48.434026   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.434035   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:48.434043   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:48.434104   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:48.468841   67607 cri.go:89] found id: ""
	I0829 20:28:48.468873   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.468889   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:48.468903   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:48.468971   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:48.506557   67607 cri.go:89] found id: ""
	I0829 20:28:48.506589   67607 logs.go:276] 0 containers: []
	W0829 20:28:48.506598   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:48.506609   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:48.506624   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:48.577023   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:48.577044   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:48.577056   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:48.654372   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:48.654407   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:48.691125   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:48.691152   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:48.746383   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:48.746414   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:45.350581   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:47.351437   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:46.705575   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:48.707018   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:46.993532   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:48.994284   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.494177   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.260591   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:51.273911   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:51.273974   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:51.311517   67607 cri.go:89] found id: ""
	I0829 20:28:51.311545   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.311553   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:51.311567   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:51.311616   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:51.348220   67607 cri.go:89] found id: ""
	I0829 20:28:51.348247   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.348256   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:51.348264   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:51.348321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:51.383560   67607 cri.go:89] found id: ""
	I0829 20:28:51.383599   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.383611   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:51.383619   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:51.383680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:51.419241   67607 cri.go:89] found id: ""
	I0829 20:28:51.419268   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.419278   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:51.419286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:51.419343   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:51.453954   67607 cri.go:89] found id: ""
	I0829 20:28:51.453979   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.453986   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:51.453992   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:51.454047   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:51.489457   67607 cri.go:89] found id: ""
	I0829 20:28:51.489480   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.489488   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:51.489493   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:51.489544   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:51.524072   67607 cri.go:89] found id: ""
	I0829 20:28:51.524100   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.524107   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:51.524113   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:51.524160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:51.561238   67607 cri.go:89] found id: ""
	I0829 20:28:51.561263   67607 logs.go:276] 0 containers: []
	W0829 20:28:51.561271   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:51.561279   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:51.561290   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:51.615422   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:51.615462   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:51.632180   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:51.632216   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:51.704335   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:51.704363   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:51.704378   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:51.794219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:51.794260   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:49.852140   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:52.351142   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:51.205903   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:53.207651   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:53.495412   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:55.993489   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:54.342556   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:54.356325   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:54.356400   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:54.390928   67607 cri.go:89] found id: ""
	I0829 20:28:54.390952   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.390959   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:54.390965   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:54.391011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:54.426970   67607 cri.go:89] found id: ""
	I0829 20:28:54.427002   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.427013   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:54.427020   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:54.427074   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:54.464121   67607 cri.go:89] found id: ""
	I0829 20:28:54.464155   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.464166   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:54.464174   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:54.464236   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:54.499790   67607 cri.go:89] found id: ""
	I0829 20:28:54.499816   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.499827   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:54.499840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:54.499889   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:54.537212   67607 cri.go:89] found id: ""
	I0829 20:28:54.537239   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.537249   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:54.537256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:54.537314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:54.575370   67607 cri.go:89] found id: ""
	I0829 20:28:54.575399   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.575410   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:54.575417   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:54.575469   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:54.608403   67607 cri.go:89] found id: ""
	I0829 20:28:54.608432   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.608443   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:54.608453   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:54.608514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:54.645259   67607 cri.go:89] found id: ""
	I0829 20:28:54.645285   67607 logs.go:276] 0 containers: []
	W0829 20:28:54.645292   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:54.645300   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:54.645311   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:54.697022   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:54.697063   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:54.712873   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:54.712914   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:54.814253   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:54.814278   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:54.814295   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:54.896473   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:54.896507   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:57.441648   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:28:57.455245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:28:57.455321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:28:57.495365   67607 cri.go:89] found id: ""
	I0829 20:28:57.495397   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.495405   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:28:57.495411   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:28:57.495472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:28:57.529555   67607 cri.go:89] found id: ""
	I0829 20:28:57.529582   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.529590   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:28:57.529597   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:28:57.529667   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:28:57.564168   67607 cri.go:89] found id: ""
	I0829 20:28:57.564196   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.564208   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:28:57.564215   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:28:57.564277   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:28:57.602057   67607 cri.go:89] found id: ""
	I0829 20:28:57.602089   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.602100   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:28:57.602108   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:28:57.602194   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:28:57.638195   67607 cri.go:89] found id: ""
	I0829 20:28:57.638226   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.638235   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:28:57.638244   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:28:57.638307   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:57.674556   67607 cri.go:89] found id: ""
	I0829 20:28:57.674605   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.674615   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:28:57.674623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:28:57.674680   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:28:57.709256   67607 cri.go:89] found id: ""
	I0829 20:28:57.709282   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.709291   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:28:57.709298   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:28:57.709358   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:28:57.743629   67607 cri.go:89] found id: ""
	I0829 20:28:57.743652   67607 logs.go:276] 0 containers: []
	W0829 20:28:57.743659   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:28:57.743668   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:28:57.743679   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:28:57.789067   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:28:57.789098   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:28:57.843372   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:28:57.843403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:28:57.858630   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:28:57.858661   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:28:57.927776   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:28:57.927798   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:28:57.927814   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:28:54.850906   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:56.851300   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:55.208638   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:57.707756   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:28:57.994287   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.493343   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.508180   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:00.521451   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:00.521529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:00.557912   67607 cri.go:89] found id: ""
	I0829 20:29:00.557938   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.557945   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:00.557951   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:00.557997   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:00.595186   67607 cri.go:89] found id: ""
	I0829 20:29:00.595215   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.595226   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:00.595237   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:00.595299   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:00.631553   67607 cri.go:89] found id: ""
	I0829 20:29:00.631581   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.631592   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:00.631600   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:00.631660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:00.666502   67607 cri.go:89] found id: ""
	I0829 20:29:00.666525   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.666551   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:00.666560   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:00.666621   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:00.700797   67607 cri.go:89] found id: ""
	I0829 20:29:00.700824   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.700835   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:00.700842   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:00.700908   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:00.739957   67607 cri.go:89] found id: ""
	I0829 20:29:00.739976   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.739989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:00.739994   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:00.740035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:00.800704   67607 cri.go:89] found id: ""
	I0829 20:29:00.800740   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.800750   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:00.800757   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:00.800820   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:00.837678   67607 cri.go:89] found id: ""
	I0829 20:29:00.837704   67607 logs.go:276] 0 containers: []
	W0829 20:29:00.837712   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:00.837720   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:00.837731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:00.888359   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:00.888391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:00.903074   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:00.903103   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:00.964865   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:00.964885   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:00.964898   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:01.049351   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:01.049387   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:03.589829   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:03.603120   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:03.603192   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:03.637647   67607 cri.go:89] found id: ""
	I0829 20:29:03.637672   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.637678   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:03.637684   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:03.637732   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:03.673807   67607 cri.go:89] found id: ""
	I0829 20:29:03.673842   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.673852   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:03.673860   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:03.673918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:03.709490   67607 cri.go:89] found id: ""
	I0829 20:29:03.709516   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.709527   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:03.709533   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:03.709595   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:03.751662   67607 cri.go:89] found id: ""
	I0829 20:29:03.751688   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.751696   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:03.751702   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:03.751751   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:03.787861   67607 cri.go:89] found id: ""
	I0829 20:29:03.787896   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.787908   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:03.787917   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:03.787977   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:28:59.350888   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:01.850615   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:03.851438   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:00.207912   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:02.707309   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:02.493506   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:04.494305   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:03.824383   67607 cri.go:89] found id: ""
	I0829 20:29:03.824413   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.824431   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:03.824438   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:03.824499   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:03.863904   67607 cri.go:89] found id: ""
	I0829 20:29:03.863929   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.863937   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:03.863943   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:03.863990   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:03.902336   67607 cri.go:89] found id: ""
	I0829 20:29:03.902360   67607 logs.go:276] 0 containers: []
	W0829 20:29:03.902368   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:03.902375   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:03.902386   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:03.951468   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:03.951499   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:03.965789   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:03.965816   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:04.035096   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:04.035119   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:04.035193   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:04.115842   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:04.115876   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
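The block above is one full pass of minikube's control-plane probe for this profile (PID 67607), repeated roughly every three seconds per the timestamps: it pgreps for a kube-apiserver process, asks crictl for containers of each expected component, and then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal sketch of the commands it issues over SSH, copied verbatim from the log lines above so they can be replayed by hand inside the VM:

	# Is an apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	# List containers (running or exited) for one component, e.g. the apiserver
	sudo crictl ps -a --quiet --name=kube-apiserver

	# Pull kubelet and CRI-O logs out of the systemd journal
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400

	# Recent kernel warnings and errors only
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400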
	I0829 20:29:06.662652   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:06.676508   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:06.676583   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:06.713058   67607 cri.go:89] found id: ""
	I0829 20:29:06.713084   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.713093   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:06.713101   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:06.713171   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:06.747513   67607 cri.go:89] found id: ""
	I0829 20:29:06.747544   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.747552   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:06.747557   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:06.747617   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:06.782662   67607 cri.go:89] found id: ""
	I0829 20:29:06.782689   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.782695   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:06.782701   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:06.782758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:06.818472   67607 cri.go:89] found id: ""
	I0829 20:29:06.818500   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.818510   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:06.818516   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:06.818586   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:06.852928   67607 cri.go:89] found id: ""
	I0829 20:29:06.852954   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.852964   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:06.852974   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:06.853032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:06.893859   67607 cri.go:89] found id: ""
	I0829 20:29:06.893889   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.893899   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:06.893907   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:06.893969   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:06.931552   67607 cri.go:89] found id: ""
	I0829 20:29:06.931584   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.931594   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:06.931601   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:06.931662   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:06.967210   67607 cri.go:89] found id: ""
	I0829 20:29:06.967243   67607 logs.go:276] 0 containers: []
	W0829 20:29:06.967254   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:06.967266   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:06.967279   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:07.020595   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:07.020631   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:07.034738   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:07.034764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:07.103726   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:07.103747   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:07.103760   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:07.184727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:07.184764   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:06.350610   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:08.351571   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:05.207055   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:07.207650   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:06.994653   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.493932   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
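The interleaved pod_ready lines come from three other test processes (PIDs 66841, 66989 and 68084, presumably the other StartStop profiles running in parallel), each polling whether its metrics-server pod has reached the Ready condition. A roughly equivalent manual check (pod name copied from the log; the kubectl context is an assumption):

	kubectl --namespace kube-system get pod metrics-server-6867b74b74-668dg \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'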
	I0829 20:29:09.746639   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:09.761228   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:09.761308   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:09.802071   67607 cri.go:89] found id: ""
	I0829 20:29:09.802102   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.802113   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:09.802122   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:09.802180   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:09.837352   67607 cri.go:89] found id: ""
	I0829 20:29:09.837385   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.837395   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:09.837402   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:09.837464   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:09.874951   67607 cri.go:89] found id: ""
	I0829 20:29:09.874980   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.874992   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:09.874999   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:09.875055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:09.909660   67607 cri.go:89] found id: ""
	I0829 20:29:09.909696   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.909706   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:09.909713   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:09.909777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:09.949727   67607 cri.go:89] found id: ""
	I0829 20:29:09.949751   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.949759   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:09.949765   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:09.949825   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:09.984576   67607 cri.go:89] found id: ""
	I0829 20:29:09.984609   67607 logs.go:276] 0 containers: []
	W0829 20:29:09.984617   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:09.984623   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:09.984675   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:10.022499   67607 cri.go:89] found id: ""
	I0829 20:29:10.022523   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.022530   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:10.022553   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:10.022624   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:10.064308   67607 cri.go:89] found id: ""
	I0829 20:29:10.064346   67607 logs.go:276] 0 containers: []
	W0829 20:29:10.064356   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:10.064367   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:10.064382   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:10.113505   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:10.113537   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:10.127614   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:10.127640   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:10.200558   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:10.200579   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:10.200592   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:10.292984   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:10.293020   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:12.833100   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:12.846645   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:12.846712   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:12.885396   67607 cri.go:89] found id: ""
	I0829 20:29:12.885423   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.885430   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:12.885436   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:12.885486   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:12.922556   67607 cri.go:89] found id: ""
	I0829 20:29:12.922584   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.922595   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:12.922602   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:12.922688   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:12.965294   67607 cri.go:89] found id: ""
	I0829 20:29:12.965324   67607 logs.go:276] 0 containers: []
	W0829 20:29:12.965335   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:12.965342   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:12.965401   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:13.022911   67607 cri.go:89] found id: ""
	I0829 20:29:13.022934   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.022942   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:13.022948   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:13.023009   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:13.077009   67607 cri.go:89] found id: ""
	I0829 20:29:13.077035   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.077043   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:13.077048   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:13.077095   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:13.114202   67607 cri.go:89] found id: ""
	I0829 20:29:13.114233   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.114243   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:13.114251   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:13.114315   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:13.147025   67607 cri.go:89] found id: ""
	I0829 20:29:13.147049   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.147057   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:13.147063   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:13.147110   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:13.183112   67607 cri.go:89] found id: ""
	I0829 20:29:13.183138   67607 logs.go:276] 0 containers: []
	W0829 20:29:13.183148   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:13.183159   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:13.183173   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:13.240558   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:13.240595   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:13.255563   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:13.255589   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:13.322826   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:13.322846   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:13.322857   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:13.399330   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:13.399365   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:10.850650   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:12.852188   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:09.706791   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:11.707397   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:13.708663   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:11.993311   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:13.994310   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:16.494854   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:15.938467   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:15.951742   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:15.951812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:15.987492   67607 cri.go:89] found id: ""
	I0829 20:29:15.987517   67607 logs.go:276] 0 containers: []
	W0829 20:29:15.987524   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:15.987530   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:15.987575   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:16.024187   67607 cri.go:89] found id: ""
	I0829 20:29:16.024214   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.024223   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:16.024231   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:16.024291   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:16.058141   67607 cri.go:89] found id: ""
	I0829 20:29:16.058164   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.058171   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:16.058176   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:16.058225   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:16.092390   67607 cri.go:89] found id: ""
	I0829 20:29:16.092414   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.092421   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:16.092427   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:16.092472   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:16.130178   67607 cri.go:89] found id: ""
	I0829 20:29:16.130209   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.130219   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:16.130227   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:16.130289   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:16.163867   67607 cri.go:89] found id: ""
	I0829 20:29:16.163900   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.163907   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:16.163913   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:16.163964   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:16.197764   67607 cri.go:89] found id: ""
	I0829 20:29:16.197792   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.197798   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:16.197804   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:16.197850   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:16.233357   67607 cri.go:89] found id: ""
	I0829 20:29:16.233383   67607 logs.go:276] 0 containers: []
	W0829 20:29:16.233393   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:16.233403   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:16.233418   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:16.285154   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:16.285188   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:16.299057   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:16.299085   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:16.377021   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:16.377041   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:16.377062   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:16.457750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:16.457796   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:15.350415   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:17.850927   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:16.206841   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.207273   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.993478   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:21.493806   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:18.999133   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:19.016143   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:19.016223   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:19.049225   67607 cri.go:89] found id: ""
	I0829 20:29:19.049252   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.049259   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:19.049265   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:19.049317   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:19.085237   67607 cri.go:89] found id: ""
	I0829 20:29:19.085297   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.085314   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:19.085325   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:19.085389   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:19.123476   67607 cri.go:89] found id: ""
	I0829 20:29:19.123501   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.123509   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:19.123514   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:19.123571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:19.159958   67607 cri.go:89] found id: ""
	I0829 20:29:19.159984   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.159993   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:19.160001   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:19.160055   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:19.192385   67607 cri.go:89] found id: ""
	I0829 20:29:19.192410   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.192418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:19.192423   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:19.192483   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:19.230781   67607 cri.go:89] found id: ""
	I0829 20:29:19.230804   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.230811   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:19.230816   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:19.230868   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:19.264925   67607 cri.go:89] found id: ""
	I0829 20:29:19.264954   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.264964   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:19.264972   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:19.265032   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:19.302461   67607 cri.go:89] found id: ""
	I0829 20:29:19.302484   67607 logs.go:276] 0 containers: []
	W0829 20:29:19.302491   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:19.302499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:19.302510   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:19.384799   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:19.384833   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:19.425281   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:19.425313   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:19.477380   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:19.477412   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:19.492315   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:19.492350   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:19.563428   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
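Every describe-nodes attempt in these cycles fails identically because nothing is listening on the apiserver port yet: kubectl's "connection to the server localhost:8443 was refused" is a plain TCP refusal, not an authentication problem. A quick, hypothetical way to confirm that from inside the VM while the apiserver is down:

	# Expect "Connection refused" until kube-apiserver binds :8443 (-k skips TLS verification)
	curl -k https://localhost:8443/healthz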
	I0829 20:29:22.064407   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:22.078609   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:22.078670   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:22.112630   67607 cri.go:89] found id: ""
	I0829 20:29:22.112662   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.112672   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:22.112680   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:22.112741   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:22.149078   67607 cri.go:89] found id: ""
	I0829 20:29:22.149108   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.149117   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:22.149124   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:22.149186   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:22.184568   67607 cri.go:89] found id: ""
	I0829 20:29:22.184596   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.184605   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:22.184613   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:22.184682   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:22.220881   67607 cri.go:89] found id: ""
	I0829 20:29:22.220908   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.220919   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:22.220926   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:22.220987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:22.256280   67607 cri.go:89] found id: ""
	I0829 20:29:22.256305   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.256314   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:22.256321   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:22.256386   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:22.294546   67607 cri.go:89] found id: ""
	I0829 20:29:22.294580   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.294590   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:22.294597   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:22.294660   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:22.332178   67607 cri.go:89] found id: ""
	I0829 20:29:22.332207   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.332215   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:22.332220   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:22.332266   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:22.368283   67607 cri.go:89] found id: ""
	I0829 20:29:22.368309   67607 logs.go:276] 0 containers: []
	W0829 20:29:22.368317   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:22.368325   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:22.368336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:22.421800   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:22.421836   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:22.435539   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:22.435565   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:22.504402   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:22.504427   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:22.504441   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:22.588293   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:22.588326   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
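The container-status command above is a single shell one-liner with a double fallback: the backtick substitution `which crictl || echo crictl` keeps the command name "crictl" even when which cannot resolve a path, and the trailing || sudo docker ps -a falls back to Docker if crictl itself fails. The same command expanded for readability (identical behavior, just reformatted):

	sudo "$(which crictl || echo crictl)" ps -a \
	  || sudo docker ps -a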
	I0829 20:29:19.851801   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:22.351929   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:20.207342   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:22.707546   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:23.493994   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:25.993337   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:25.130766   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:25.144479   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:25.144554   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:25.181606   67607 cri.go:89] found id: ""
	I0829 20:29:25.181636   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.181643   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:25.181649   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:25.181697   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:25.220291   67607 cri.go:89] found id: ""
	I0829 20:29:25.220320   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.220328   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:25.220335   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:25.220447   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:25.260947   67607 cri.go:89] found id: ""
	I0829 20:29:25.260975   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.260983   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:25.260988   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:25.261035   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:25.298200   67607 cri.go:89] found id: ""
	I0829 20:29:25.298232   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.298243   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:25.298256   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:25.298314   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:25.333128   67607 cri.go:89] found id: ""
	I0829 20:29:25.333162   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.333174   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:25.333181   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:25.333232   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:25.368951   67607 cri.go:89] found id: ""
	I0829 20:29:25.368979   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.368989   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:25.368997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:25.369052   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:25.403687   67607 cri.go:89] found id: ""
	I0829 20:29:25.403715   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.403726   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:25.403734   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:25.403799   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:25.442338   67607 cri.go:89] found id: ""
	I0829 20:29:25.442365   67607 logs.go:276] 0 containers: []
	W0829 20:29:25.442372   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:25.442381   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:25.442395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:25.456313   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:25.456335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:25.528709   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:25.528730   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:25.528744   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:25.609976   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:25.610011   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:25.650044   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:25.650071   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:28.202683   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:28.216971   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:28.217046   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:28.256297   67607 cri.go:89] found id: ""
	I0829 20:29:28.256321   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.256329   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:28.256335   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:28.256379   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:28.289396   67607 cri.go:89] found id: ""
	I0829 20:29:28.289420   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.289427   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:28.289433   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:28.289484   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:28.323589   67607 cri.go:89] found id: ""
	I0829 20:29:28.323616   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.323623   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:28.323630   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:28.323676   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:28.362423   67607 cri.go:89] found id: ""
	I0829 20:29:28.362453   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.362463   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:28.362471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:28.362531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:28.396967   67607 cri.go:89] found id: ""
	I0829 20:29:28.396990   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.396998   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:28.397003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:28.397053   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:28.430714   67607 cri.go:89] found id: ""
	I0829 20:29:28.430744   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.430755   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:28.430762   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:28.430831   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:28.468668   67607 cri.go:89] found id: ""
	I0829 20:29:28.468696   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.468707   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:28.468714   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:28.468777   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:28.506678   67607 cri.go:89] found id: ""
	I0829 20:29:28.506705   67607 logs.go:276] 0 containers: []
	W0829 20:29:28.506716   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:28.506727   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:28.506741   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:28.545259   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:28.545287   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:28.598249   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:28.598285   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:28.612385   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:28.612429   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:28.685765   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:28.685792   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:28.685806   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:24.851688   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.350456   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:24.708523   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.206094   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:29.207859   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:27.995492   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:30.494340   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.270074   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:31.284357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:31.284417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:31.319530   67607 cri.go:89] found id: ""
	I0829 20:29:31.319558   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.319566   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:31.319571   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:31.319640   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:31.356826   67607 cri.go:89] found id: ""
	I0829 20:29:31.356856   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.356867   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:31.356880   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:31.356934   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:31.390137   67607 cri.go:89] found id: ""
	I0829 20:29:31.390160   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.390167   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:31.390173   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:31.390219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:31.424939   67607 cri.go:89] found id: ""
	I0829 20:29:31.424972   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.424989   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:31.424997   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:31.425054   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:31.460896   67607 cri.go:89] found id: ""
	I0829 20:29:31.460921   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.460928   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:31.460935   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:31.460985   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:31.498933   67607 cri.go:89] found id: ""
	I0829 20:29:31.498957   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.498967   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:31.498975   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:31.499044   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:31.534953   67607 cri.go:89] found id: ""
	I0829 20:29:31.534985   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.534996   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:31.535003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:31.535065   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:31.576248   67607 cri.go:89] found id: ""
	I0829 20:29:31.576273   67607 logs.go:276] 0 containers: []
	W0829 20:29:31.576281   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:31.576291   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:31.576307   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:31.628157   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:31.628196   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:31.641564   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:31.641591   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:31.719949   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:31.719973   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:31.719996   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:31.795682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:31.795716   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:29.351248   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.351424   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:33.851397   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:31.707552   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.207468   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:32.993432   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.993634   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:34.333468   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:34.347294   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:34.347370   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:34.384885   67607 cri.go:89] found id: ""
	I0829 20:29:34.384910   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.384921   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:34.384928   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:34.384991   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:34.422309   67607 cri.go:89] found id: ""
	I0829 20:29:34.422341   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.422351   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:34.422358   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:34.422417   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:34.459800   67607 cri.go:89] found id: ""
	I0829 20:29:34.459826   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.459834   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:34.459840   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:34.459905   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:34.495600   67607 cri.go:89] found id: ""
	I0829 20:29:34.495624   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.495633   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:34.495647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:34.495708   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:34.531749   67607 cri.go:89] found id: ""
	I0829 20:29:34.531777   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.531788   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:34.531795   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:34.531856   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:34.571057   67607 cri.go:89] found id: ""
	I0829 20:29:34.571088   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.571098   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:34.571105   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:34.571168   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:34.609645   67607 cri.go:89] found id: ""
	I0829 20:29:34.609676   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.609687   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:34.609695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:34.609753   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:34.647199   67607 cri.go:89] found id: ""
	I0829 20:29:34.647233   67607 logs.go:276] 0 containers: []
	W0829 20:29:34.647244   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
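
The block above is the health scan minikube repeats while waiting for the control plane to come back: for each component it shells out to crictl, and with --quiet an empty result is what produces the paired "0 containers" / "No container was found" lines. A minimal Go sketch of that query follows; it is illustrative only (not minikube's actual cri.go source), and the sudo usage and component list are carried over from the log as assumptions.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the logged command
    // `sudo crictl ps -a --quiet --name=<name>`; with --quiet, crictl prints
    // one container ID per line, so empty output means no match.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(c)
            switch {
            case err != nil:
                fmt.Printf("crictl failed for %q: %v\n", c, err)
            case len(ids) == 0:
                fmt.Printf("no container was found matching %q\n", c) // the case logged above
            default:
                fmt.Printf("%s: %v\n", c, ids)
            }
        }
    }
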
	I0829 20:29:34.647255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:34.647269   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:34.661390   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:34.661420   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:34.737590   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:34.737613   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:34.737625   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:34.820682   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:34.820721   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:34.861697   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:34.861723   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
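
Every "describe nodes" attempt in this stretch fails identically: kubectl, pointed at localhost:8443 by the kubeconfig, has its connection refused because no kube-apiserver container is running, consistent with the empty crictl scans. The symptom can be confirmed independently of kubectl with a plain TCP dial; this is a hedged sketch, not part of the test harness:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // A refused dial here reproduces the "connection to the server
        // localhost:8443 was refused" stderr captured above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
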
	I0829 20:29:37.412384   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:37.426081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:37.426162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:37.461302   67607 cri.go:89] found id: ""
	I0829 20:29:37.461332   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.461342   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:37.461349   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:37.461416   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:37.500869   67607 cri.go:89] found id: ""
	I0829 20:29:37.500898   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.500908   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:37.500915   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:37.500970   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:37.536908   67607 cri.go:89] found id: ""
	I0829 20:29:37.536932   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.536942   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:37.536949   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:37.537010   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:37.571939   67607 cri.go:89] found id: ""
	I0829 20:29:37.571969   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.571979   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:37.571987   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:37.572048   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:37.607834   67607 cri.go:89] found id: ""
	I0829 20:29:37.607864   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.607883   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:37.607891   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:37.607952   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:37.643932   67607 cri.go:89] found id: ""
	I0829 20:29:37.643963   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.643971   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:37.643978   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:37.644037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:37.678148   67607 cri.go:89] found id: ""
	I0829 20:29:37.678177   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.678188   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:37.678195   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:37.678257   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:37.713170   67607 cri.go:89] found id: ""
	I0829 20:29:37.713195   67607 logs.go:276] 0 containers: []
	W0829 20:29:37.713209   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:37.713219   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:37.713233   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:37.752538   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:37.752567   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:37.802888   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:37.802923   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:37.816546   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:37.816585   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:37.891647   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:37.891667   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:37.891680   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:35.851668   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:38.351371   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:36.208220   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:38.708523   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:36.994441   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:39.493291   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
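
The interleaved pod_ready lines belong to separate test processes (PIDs 66841, 66989, and 68084), each polling the Ready condition of its cluster's metrics-server pod on a roughly 2.5-second cadence. That condition can be read directly with kubectl's JSONPath output; the helper below is a sketch (the pod name is taken from the log, while the bounded retry loop is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady reads the pod's Ready condition, the field the pod_ready.go
    // lines above keep reporting as "Ready":"False".
    func podReady(pod, ns string) (bool, error) {
        out, err := exec.Command("kubectl", "get", "pod", pod, "-n", ns,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        for i := 0; i < 240; i++ { // ~10 minutes at the log's polling cadence
            if ready, err := podReady("metrics-server-6867b74b74-5kk6q", "kube-system"); err == nil && ready {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for Ready")
    }
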
	I0829 20:29:40.472354   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:40.486186   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:40.486252   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:40.520935   67607 cri.go:89] found id: ""
	I0829 20:29:40.520963   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.520971   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:40.520977   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:40.521037   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:40.561399   67607 cri.go:89] found id: ""
	I0829 20:29:40.561428   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.561440   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:40.561447   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:40.561514   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:40.601821   67607 cri.go:89] found id: ""
	I0829 20:29:40.601846   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.601855   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:40.601862   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:40.601918   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:40.636429   67607 cri.go:89] found id: ""
	I0829 20:29:40.636454   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.636462   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:40.636468   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:40.636525   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:40.670781   67607 cri.go:89] found id: ""
	I0829 20:29:40.670816   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.670828   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:40.670836   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:40.670912   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:40.706635   67607 cri.go:89] found id: ""
	I0829 20:29:40.706663   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.706674   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:40.706682   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:40.706739   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:40.741657   67607 cri.go:89] found id: ""
	I0829 20:29:40.741687   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.741695   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:40.741707   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:40.741770   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:40.777028   67607 cri.go:89] found id: ""
	I0829 20:29:40.777057   67607 logs.go:276] 0 containers: []
	W0829 20:29:40.777066   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:40.777077   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:40.777093   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:40.829387   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:40.829424   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:40.843928   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:40.843956   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:40.917965   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:40.917992   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:40.918008   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:41.001880   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:41.001925   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:43.549007   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:43.563446   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:43.563502   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:43.598503   67607 cri.go:89] found id: ""
	I0829 20:29:43.598548   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.598557   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:43.598564   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:43.598614   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:43.634169   67607 cri.go:89] found id: ""
	I0829 20:29:43.634200   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.634210   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:43.634218   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:43.634280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:43.670467   67607 cri.go:89] found id: ""
	I0829 20:29:43.670492   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.670500   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:43.670506   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:43.670580   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:43.706812   67607 cri.go:89] found id: ""
	I0829 20:29:43.706839   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.706849   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:43.706857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:43.706922   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:43.741577   67607 cri.go:89] found id: ""
	I0829 20:29:43.741606   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.741612   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:43.741620   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:43.741700   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:43.776552   67607 cri.go:89] found id: ""
	I0829 20:29:43.776595   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.776625   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:43.776635   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:43.776701   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:43.816229   67607 cri.go:89] found id: ""
	I0829 20:29:43.816264   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.816274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:43.816281   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:43.816346   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:40.850705   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:42.850904   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:40.709080   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:43.207700   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:41.994216   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:44.492986   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:46.494171   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:43.860726   67607 cri.go:89] found id: ""
	I0829 20:29:43.860753   67607 logs.go:276] 0 containers: []
	W0829 20:29:43.860761   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:43.860768   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:43.860783   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:43.874311   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:43.874340   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:43.952243   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:43.952272   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:43.952288   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:44.032276   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:44.032312   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:44.075537   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:44.075571   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
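
Each failed scan ends with the same evidence-gathering pass: kubelet and CRI-O unit logs via journalctl, recent kernel warnings via dmesg, and a container listing, all run over SSH as /bin/bash -c commands quoted verbatim above. Replaying those commands locally looks roughly like this (a sketch; the command list is copied from the log, the wrapper itself is assumed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmds := []string{
            "sudo journalctl -u kubelet -n 400",
            "sudo journalctl -u crio -n 400",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        for _, c := range cmds {
            // CombinedOutput keeps stderr, which is where journalctl errors land.
            out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            if err != nil {
                fmt.Printf("command failed (%v): %s\n", err, c)
            }
            fmt.Printf("=== %s ===\n%s\n", c, out)
        }
    }
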
	I0829 20:29:46.632798   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:46.645878   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:46.645948   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:46.683682   67607 cri.go:89] found id: ""
	I0829 20:29:46.683711   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.683720   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:46.683726   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:46.683775   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:46.727985   67607 cri.go:89] found id: ""
	I0829 20:29:46.728012   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.728024   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:46.728031   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:46.728090   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:46.762142   67607 cri.go:89] found id: ""
	I0829 20:29:46.762166   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.762174   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:46.762180   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:46.762226   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:46.802423   67607 cri.go:89] found id: ""
	I0829 20:29:46.802453   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.802464   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:46.802471   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:46.802515   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:46.840382   67607 cri.go:89] found id: ""
	I0829 20:29:46.840411   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.840418   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:46.840425   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:46.840473   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:46.878438   67607 cri.go:89] found id: ""
	I0829 20:29:46.878466   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.878476   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:46.878483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:46.878562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:46.913589   67607 cri.go:89] found id: ""
	I0829 20:29:46.913618   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.913625   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:46.913631   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:46.913678   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:46.948894   67607 cri.go:89] found id: ""
	I0829 20:29:46.948922   67607 logs.go:276] 0 containers: []
	W0829 20:29:46.948929   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:46.948938   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:46.948949   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:47.005709   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:47.005745   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:47.030316   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:47.030343   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:47.105899   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:47.105920   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:47.105932   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:47.189405   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:47.189442   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:45.352639   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:47.850647   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:45.709140   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:48.207411   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:48.994239   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:51.493287   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:49.727745   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:49.742061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:49.742131   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:49.777428   67607 cri.go:89] found id: ""
	I0829 20:29:49.777456   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.777464   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:49.777471   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:49.777531   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:49.811611   67607 cri.go:89] found id: ""
	I0829 20:29:49.811639   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.811646   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:49.811653   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:49.811709   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:49.844962   67607 cri.go:89] found id: ""
	I0829 20:29:49.844987   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.844995   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:49.845006   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:49.845062   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:49.880259   67607 cri.go:89] found id: ""
	I0829 20:29:49.880286   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.880297   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:49.880305   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:49.880366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:49.915889   67607 cri.go:89] found id: ""
	I0829 20:29:49.915918   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.915926   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:49.915932   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:49.915988   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:49.953146   67607 cri.go:89] found id: ""
	I0829 20:29:49.953174   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.953182   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:49.953189   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:49.953240   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:49.990689   67607 cri.go:89] found id: ""
	I0829 20:29:49.990721   67607 logs.go:276] 0 containers: []
	W0829 20:29:49.990730   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:49.990738   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:49.990792   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:50.024775   67607 cri.go:89] found id: ""
	I0829 20:29:50.024806   67607 logs.go:276] 0 containers: []
	W0829 20:29:50.024817   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:50.024827   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:50.024842   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:50.079030   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:50.079064   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:50.093178   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:50.093205   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:50.171476   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:50.171499   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:50.171512   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:50.252913   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:50.252946   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:52.799818   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:52.812857   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:52.812930   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:52.850736   67607 cri.go:89] found id: ""
	I0829 20:29:52.850761   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.850770   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:52.850777   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:52.850834   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:52.888892   67607 cri.go:89] found id: ""
	I0829 20:29:52.888916   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.888923   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:52.888929   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:52.888975   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:52.925390   67607 cri.go:89] found id: ""
	I0829 20:29:52.925418   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.925428   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:52.925435   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:52.925501   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:52.960329   67607 cri.go:89] found id: ""
	I0829 20:29:52.960352   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.960360   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:52.960366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:52.960413   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:52.994899   67607 cri.go:89] found id: ""
	I0829 20:29:52.994927   67607 logs.go:276] 0 containers: []
	W0829 20:29:52.994935   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:52.994941   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:52.994995   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:53.033028   67607 cri.go:89] found id: ""
	I0829 20:29:53.033057   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.033068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:53.033076   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:53.033136   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:53.068353   67607 cri.go:89] found id: ""
	I0829 20:29:53.068381   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.068389   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:53.068394   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:53.068441   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:53.104496   67607 cri.go:89] found id: ""
	I0829 20:29:53.104524   67607 logs.go:276] 0 containers: []
	W0829 20:29:53.104534   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:53.104545   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:53.104560   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:53.175777   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:53.175810   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:53.175827   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:53.257362   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:53.257396   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:53.295822   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:53.295850   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:53.351237   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:53.351263   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:49.851324   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:52.350768   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:50.707986   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:53.206918   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:53.494828   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.994443   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.864680   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:55.879324   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:55.879391   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:55.914454   67607 cri.go:89] found id: ""
	I0829 20:29:55.914479   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.914490   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:55.914498   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:55.914592   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:55.953778   67607 cri.go:89] found id: ""
	I0829 20:29:55.953804   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.953814   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:55.953821   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:55.953883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:55.994659   67607 cri.go:89] found id: ""
	I0829 20:29:55.994681   67607 logs.go:276] 0 containers: []
	W0829 20:29:55.994689   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:55.994697   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:55.994768   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:56.031262   67607 cri.go:89] found id: ""
	I0829 20:29:56.031288   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.031299   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:56.031306   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:56.031366   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:56.063748   67607 cri.go:89] found id: ""
	I0829 20:29:56.063776   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.063785   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:56.063793   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:56.063883   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:56.098024   67607 cri.go:89] found id: ""
	I0829 20:29:56.098060   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.098068   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:56.098074   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:56.098127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:56.141340   67607 cri.go:89] found id: ""
	I0829 20:29:56.141364   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.141374   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:56.141381   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:56.141440   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:56.176668   67607 cri.go:89] found id: ""
	I0829 20:29:56.176696   67607 logs.go:276] 0 containers: []
	W0829 20:29:56.176707   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:56.176717   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:56.176731   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:56.216294   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:56.216322   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:56.269404   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:56.269440   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:56.283134   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:56.283160   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:56.355005   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:56.355023   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:56.355035   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:54.851658   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:57.350247   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:55.207477   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:57.708007   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:58.493689   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:00.998990   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:58.937406   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:29:58.950924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:29:58.950981   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:29:58.986748   67607 cri.go:89] found id: ""
	I0829 20:29:58.986778   67607 logs.go:276] 0 containers: []
	W0829 20:29:58.986788   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:29:58.986795   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:29:58.986861   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:29:59.023737   67607 cri.go:89] found id: ""
	I0829 20:29:59.023763   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.023773   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:29:59.023780   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:29:59.023840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:29:59.060245   67607 cri.go:89] found id: ""
	I0829 20:29:59.060274   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.060284   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:29:59.060291   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:29:59.060352   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:29:59.102467   67607 cri.go:89] found id: ""
	I0829 20:29:59.102493   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.102501   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:29:59.102507   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:29:59.102581   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:29:59.142601   67607 cri.go:89] found id: ""
	I0829 20:29:59.142625   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.142634   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:29:59.142647   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:29:59.142717   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:29:59.186683   67607 cri.go:89] found id: ""
	I0829 20:29:59.186707   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.186715   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:29:59.186723   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:29:59.186783   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:29:59.232104   67607 cri.go:89] found id: ""
	I0829 20:29:59.232136   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.232154   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:29:59.232162   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:29:59.232227   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:29:59.276416   67607 cri.go:89] found id: ""
	I0829 20:29:59.276442   67607 logs.go:276] 0 containers: []
	W0829 20:29:59.276452   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:29:59.276462   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:29:59.276479   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:29:59.341741   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:29:59.341779   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:29:59.357312   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:29:59.357336   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:29:59.425653   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:29:59.425674   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:29:59.425689   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:29:59.505365   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:29:59.505403   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:02.049195   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:02.064558   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:02.064641   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:02.102141   67607 cri.go:89] found id: ""
	I0829 20:30:02.102188   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.102209   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:02.102217   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:02.102282   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:02.138610   67607 cri.go:89] found id: ""
	I0829 20:30:02.138640   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.138650   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:02.138658   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:02.138724   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:02.175391   67607 cri.go:89] found id: ""
	I0829 20:30:02.175423   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.175435   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:02.175442   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:02.175505   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:02.212956   67607 cri.go:89] found id: ""
	I0829 20:30:02.212981   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.212991   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:02.212998   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:02.213059   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:02.254444   67607 cri.go:89] found id: ""
	I0829 20:30:02.254467   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.254475   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:02.254481   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:02.254568   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:02.293232   67607 cri.go:89] found id: ""
	I0829 20:30:02.293260   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.293270   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:02.293277   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:02.293348   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:02.328300   67607 cri.go:89] found id: ""
	I0829 20:30:02.328329   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.328339   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:02.328346   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:02.328407   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:02.363467   67607 cri.go:89] found id: ""
	I0829 20:30:02.363495   67607 logs.go:276] 0 containers: []
	W0829 20:30:02.363505   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:02.363514   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:02.363528   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:02.414357   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:02.414394   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:02.428229   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:02.428259   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:02.503640   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:02.503661   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:02.503674   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:02.584052   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:02.584087   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:29:59.352485   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:01.850334   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:29:59.717029   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:02.208354   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:03.494326   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:05.494833   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:05.124345   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:05.143530   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:05.143594   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:05.195985   67607 cri.go:89] found id: ""
	I0829 20:30:05.196014   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.196024   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:05.196032   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:05.196092   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:05.254315   67607 cri.go:89] found id: ""
	I0829 20:30:05.254343   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.254354   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:05.254362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:05.254432   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:05.306756   67607 cri.go:89] found id: ""
	I0829 20:30:05.306781   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.306788   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:05.306794   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:05.306852   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:05.345200   67607 cri.go:89] found id: ""
	I0829 20:30:05.345225   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.345235   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:05.345242   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:05.345297   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:05.384038   67607 cri.go:89] found id: ""
	I0829 20:30:05.384064   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.384074   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:05.384081   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:05.384140   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:05.420177   67607 cri.go:89] found id: ""
	I0829 20:30:05.420201   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.420208   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:05.420214   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:05.420260   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:05.453492   67607 cri.go:89] found id: ""
	I0829 20:30:05.453513   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.453521   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:05.453526   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:05.453573   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:05.491591   67607 cri.go:89] found id: ""
	I0829 20:30:05.491618   67607 logs.go:276] 0 containers: []
	W0829 20:30:05.491628   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:05.491638   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:05.491701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:05.580458   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:05.580503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:05.620137   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:05.620169   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:05.672137   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:05.672177   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:05.685946   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:05.685973   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:05.755176   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
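[Editor's note: the failure above is self-consistent: every `crictl ps` in this cycle returned zero containers, so no kube-apiserver is serving localhost:8443, and `kubectl describe nodes` is refused. The same two facts can be checked by hand on the node (e.g. via `minikube ssh`); a minimal sketch:

  # No apiserver container -> connection refused on the secure port.
  sudo crictl ps -a --name kube-apiserver
  curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"
]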
	I0829 20:30:08.256255   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:08.269099   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:08.269160   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:08.302552   67607 cri.go:89] found id: ""
	I0829 20:30:08.302578   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.302585   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:08.302591   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:08.302639   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:08.340683   67607 cri.go:89] found id: ""
	I0829 20:30:08.340711   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.340718   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:08.340726   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:08.340778   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:08.387389   67607 cri.go:89] found id: ""
	I0829 20:30:08.387416   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.387424   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:08.387430   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:08.387477   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:08.421303   67607 cri.go:89] found id: ""
	I0829 20:30:08.421330   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.421340   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:08.421348   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:08.421409   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:08.458648   67607 cri.go:89] found id: ""
	I0829 20:30:08.458677   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.458688   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:08.458695   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:08.458758   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:08.498748   67607 cri.go:89] found id: ""
	I0829 20:30:08.498776   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.498784   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:08.498790   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:08.498845   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:08.536859   67607 cri.go:89] found id: ""
	I0829 20:30:08.536889   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.536896   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:08.536902   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:08.536963   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:08.570685   67607 cri.go:89] found id: ""
	I0829 20:30:08.570713   67607 logs.go:276] 0 containers: []
	W0829 20:30:08.570723   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:08.570734   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:08.570748   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:08.621904   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:08.621938   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:08.636367   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:08.636391   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:08.703796   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:08.703824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:08.703838   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:08.785084   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:08.785120   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:04.350230   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:06.849598   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:08.850961   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:04.708012   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:07.206604   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:09.207368   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:07.993015   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:09.994043   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:11.326633   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:11.339570   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:11.339637   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:11.374132   67607 cri.go:89] found id: ""
	I0829 20:30:11.374155   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.374163   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:11.374169   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:11.374234   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:11.409004   67607 cri.go:89] found id: ""
	I0829 20:30:11.409036   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.409047   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:11.409054   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:11.409119   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:11.444598   67607 cri.go:89] found id: ""
	I0829 20:30:11.444625   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.444635   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:11.444643   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:11.444704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:11.481912   67607 cri.go:89] found id: ""
	I0829 20:30:11.481942   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.481953   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:11.481961   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:11.482025   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:11.516436   67607 cri.go:89] found id: ""
	I0829 20:30:11.516466   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.516477   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:11.516483   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:11.516536   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:11.554762   67607 cri.go:89] found id: ""
	I0829 20:30:11.554787   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.554795   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:11.554801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:11.554857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:11.588902   67607 cri.go:89] found id: ""
	I0829 20:30:11.588931   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.588942   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:11.588950   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:11.589011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:11.621346   67607 cri.go:89] found id: ""
	I0829 20:30:11.621368   67607 logs.go:276] 0 containers: []
	W0829 20:30:11.621376   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:11.621383   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:11.621395   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:11.659671   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:11.659703   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:11.711288   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:11.711315   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:11.725285   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:11.725310   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:11.801713   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:11.801735   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:11.801750   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:10.851075   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:13.349510   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:11.208203   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:13.706599   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:12.494548   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:14.993188   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:14.382313   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:14.395852   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:14.395926   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:14.438735   67607 cri.go:89] found id: ""
	I0829 20:30:14.438762   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.438772   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:14.438778   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:14.438840   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:14.477886   67607 cri.go:89] found id: ""
	I0829 20:30:14.477928   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.477937   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:14.477943   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:14.478000   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:14.517627   67607 cri.go:89] found id: ""
	I0829 20:30:14.517654   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.517664   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:14.517670   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:14.517734   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:14.557247   67607 cri.go:89] found id: ""
	I0829 20:30:14.557272   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.557280   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:14.557286   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:14.557345   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:14.591364   67607 cri.go:89] found id: ""
	I0829 20:30:14.591388   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.591398   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:14.591406   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:14.591468   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:14.627517   67607 cri.go:89] found id: ""
	I0829 20:30:14.627539   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.627546   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:14.627551   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:14.627604   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:14.662388   67607 cri.go:89] found id: ""
	I0829 20:30:14.662409   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.662419   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:14.662432   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:14.662488   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:14.695277   67607 cri.go:89] found id: ""
	I0829 20:30:14.695307   67607 logs.go:276] 0 containers: []
	W0829 20:30:14.695316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:14.695324   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:14.695335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:14.735824   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:14.735852   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:14.792607   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:14.792642   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:14.808881   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:14.808910   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:14.879804   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:14.879824   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:14.879837   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:17.459817   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:17.474813   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:17.474887   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:17.509885   67607 cri.go:89] found id: ""
	I0829 20:30:17.509913   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.509923   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:17.509930   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:17.509987   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:17.543931   67607 cri.go:89] found id: ""
	I0829 20:30:17.543959   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.543968   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:17.543973   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:17.544021   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:17.580944   67607 cri.go:89] found id: ""
	I0829 20:30:17.580972   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.580980   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:17.580986   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:17.581033   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:17.620061   67607 cri.go:89] found id: ""
	I0829 20:30:17.620088   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.620097   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:17.620103   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:17.620148   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:17.658675   67607 cri.go:89] found id: ""
	I0829 20:30:17.658706   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.658717   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:17.658724   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:17.658788   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:17.694424   67607 cri.go:89] found id: ""
	I0829 20:30:17.694453   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.694462   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:17.694467   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:17.694571   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:17.727425   67607 cri.go:89] found id: ""
	I0829 20:30:17.727450   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.727456   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:17.727462   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:17.727510   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:17.767915   67607 cri.go:89] found id: ""
	I0829 20:30:17.767946   67607 logs.go:276] 0 containers: []
	W0829 20:30:17.767956   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:17.767965   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:17.767977   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:17.837556   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:17.837580   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:17.837593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:17.921601   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:17.921638   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:17.960999   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:17.961026   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:18.013654   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:18.013691   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:15.351372   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:17.850896   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:16.206810   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:18.207702   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:16.993566   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:18.997786   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:21.493705   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:20.528244   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:20.542116   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:20.542190   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:20.578905   67607 cri.go:89] found id: ""
	I0829 20:30:20.578936   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.578947   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:20.578954   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:20.579003   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:20.613543   67607 cri.go:89] found id: ""
	I0829 20:30:20.613567   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.613574   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:20.613579   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:20.613627   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:20.649322   67607 cri.go:89] found id: ""
	I0829 20:30:20.649344   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.649352   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:20.649366   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:20.649429   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:20.684851   67607 cri.go:89] found id: ""
	I0829 20:30:20.684878   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.684886   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:20.684892   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:20.684950   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:20.722016   67607 cri.go:89] found id: ""
	I0829 20:30:20.722045   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.722054   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:20.722062   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:20.722125   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:20.757594   67607 cri.go:89] found id: ""
	I0829 20:30:20.757626   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.757637   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:20.757644   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:20.757707   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:20.793694   67607 cri.go:89] found id: ""
	I0829 20:30:20.793728   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.793738   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:20.793746   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:20.793812   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:20.829709   67607 cri.go:89] found id: ""
	I0829 20:30:20.829736   67607 logs.go:276] 0 containers: []
	W0829 20:30:20.829747   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:20.829758   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:20.829782   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:20.888838   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:20.888888   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:20.903530   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:20.903556   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:20.972460   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:20.972488   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:20.972503   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:21.055556   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:21.055593   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:23.597355   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:23.611091   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:23.611162   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:23.649469   67607 cri.go:89] found id: ""
	I0829 20:30:23.649493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.649501   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:23.649510   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:23.649562   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:23.684530   67607 cri.go:89] found id: ""
	I0829 20:30:23.684554   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.684561   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:23.684571   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:23.684625   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:23.720466   67607 cri.go:89] found id: ""
	I0829 20:30:23.720493   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.720503   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:23.720510   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:23.720563   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:23.755013   67607 cri.go:89] found id: ""
	I0829 20:30:23.755042   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.755053   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:23.755061   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:23.755127   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:23.795212   67607 cri.go:89] found id: ""
	I0829 20:30:23.795243   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.795254   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:23.795263   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:23.795320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:20.349781   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:22.350157   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:20.707723   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.206214   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.994457   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:26.493771   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:23.832912   67607 cri.go:89] found id: ""
	I0829 20:30:23.832941   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.832951   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:23.832959   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:23.833015   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:23.869896   67607 cri.go:89] found id: ""
	I0829 20:30:23.869930   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.869939   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:23.869947   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:23.870011   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:23.908111   67607 cri.go:89] found id: ""
	I0829 20:30:23.908136   67607 logs.go:276] 0 containers: []
	W0829 20:30:23.908145   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:23.908155   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:23.908170   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:23.988489   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:23.988510   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:23.988525   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:24.063246   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:24.063280   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:24.102943   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:24.102974   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:24.157255   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:24.157294   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:26.671966   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:26.684755   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:26.684830   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:26.721125   67607 cri.go:89] found id: ""
	I0829 20:30:26.721150   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.721158   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:26.721164   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:26.721219   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:26.756328   67607 cri.go:89] found id: ""
	I0829 20:30:26.756349   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.756356   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:26.756362   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:26.756420   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:26.791711   67607 cri.go:89] found id: ""
	I0829 20:30:26.791751   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.791763   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:26.791774   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:26.791857   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:26.827215   67607 cri.go:89] found id: ""
	I0829 20:30:26.827244   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.827254   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:26.827261   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:26.827321   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:26.863461   67607 cri.go:89] found id: ""
	I0829 20:30:26.863486   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.863497   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:26.863505   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:26.863569   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:26.900037   67607 cri.go:89] found id: ""
	I0829 20:30:26.900065   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.900075   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:26.900083   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:26.900139   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:26.937236   67607 cri.go:89] found id: ""
	I0829 20:30:26.937263   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.937274   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:26.937282   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:26.937340   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:26.970281   67607 cri.go:89] found id: ""
	I0829 20:30:26.970312   67607 logs.go:276] 0 containers: []
	W0829 20:30:26.970322   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:26.970332   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:26.970345   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:27.041485   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:27.041511   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:27.041526   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:27.120774   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:27.120807   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:27.159656   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:27.159685   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:27.213322   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:27.213356   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:24.350464   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:26.351419   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:28.850079   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:25.207838   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:27.708107   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:28.993552   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:31.494259   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:29.729066   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:29.742044   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:29.742099   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:29.777426   67607 cri.go:89] found id: ""
	I0829 20:30:29.777454   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.777462   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:30:29.777468   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:29.777529   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:29.814353   67607 cri.go:89] found id: ""
	I0829 20:30:29.814381   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.814392   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:30:29.814401   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:29.814462   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:29.853754   67607 cri.go:89] found id: ""
	I0829 20:30:29.853783   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.853793   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:30:29.853801   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:29.853869   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:29.893966   67607 cri.go:89] found id: ""
	I0829 20:30:29.893991   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.893998   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:30:29.894003   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:29.894057   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:29.929452   67607 cri.go:89] found id: ""
	I0829 20:30:29.929483   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.929492   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:30:29.929502   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:29.929561   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:29.965880   67607 cri.go:89] found id: ""
	I0829 20:30:29.965906   67607 logs.go:276] 0 containers: []
	W0829 20:30:29.965916   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:30:29.965924   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:29.965986   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:30.002192   67607 cri.go:89] found id: ""
	I0829 20:30:30.002226   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.002237   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:30.002245   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:30:30.002320   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:30:30.037603   67607 cri.go:89] found id: ""
	I0829 20:30:30.037640   67607 logs.go:276] 0 containers: []
	W0829 20:30:30.037651   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:30:30.037662   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:30.037677   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:30.094128   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:30.094168   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:30.110667   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:30.110701   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:30:30.188355   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:30:30.188375   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:30.188388   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:30.270750   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:30:30.270785   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:32.809472   67607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:32.823099   67607 kubeadm.go:597] duration metric: took 4m3.15684598s to restartPrimaryControlPlane
	W0829 20:30:32.823188   67607 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:30:32.823224   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
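[Editor's note: this is the turning point for process 67607. The probe loop that started 4m03s earlier never found a kube-apiserver process, so minikube abandons restartPrimaryControlPlane and wipes the node with `kubeadm reset` before reinitializing. A sketch of that give-up-and-reset pattern, with the 4-minute budget and the sleep interval as assumptions rather than minikube's actual values:

  # Poll for the apiserver; reset the control plane if it never appears.
  deadline=$((SECONDS + 240))   # assumed budget; matches the ~4m seen above
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    if (( SECONDS >= deadline )); then
      sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
        kubeadm reset --cri-socket /var/run/crio/crio.sock --force
      break
    fi
    sleep 3
  done
]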
	I0829 20:30:33.322987   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:30:33.338134   67607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:30:33.348586   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:30:33.358672   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:30:33.358692   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:30:33.358748   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:30:33.367955   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:30:33.368000   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:30:33.377565   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:30:33.386317   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:30:33.386377   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:30:33.396356   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.406228   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:30:33.406281   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:30:33.418323   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:30:33.427595   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:30:33.427657   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
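[Editor's note: the four grep/rm pairs above are one loop unrolled: for each kubeconfig under /etc/kubernetes, keep it only if it already references the expected control-plane endpoint, otherwise delete it so the upcoming `kubeadm init` can write a fresh one. Here all four files are already missing, so every `rm -f` is a no-op. Condensed:

  # Drop any kubeconfig that does not reference the expected endpoint.
  endpoint='https://control-plane.minikube.internal:8443'
  for name in admin kubelet controller-manager scheduler; do
    conf="/etc/kubernetes/${name}.conf"
    sudo grep -q "$endpoint" "$conf" 2>/dev/null || sudo rm -f "$conf"
  done
]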
	I0829 20:30:33.437520   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:30:33.511159   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:30:33.511279   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:30:33.669988   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:30:33.670133   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:30:33.670267   67607 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:30:33.859908   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
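[Editor's note: the `kubeadm init` above is launched with a long `--ignore-preflight-errors` list so that leftovers from the just-reset node — populated /etc/kubernetes/manifests and /var/lib/minikube directories, port 10250, and the swap/CPU/memory checks — do not abort the run. To see what those suppressed checks would have reported, the preflight phase can be re-run on its own; a sketch:

  # Re-run only the preflight checks that the init above suppresses.
  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
    kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
]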
	I0829 20:30:30.850893   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:32.851574   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:30.207012   66989 pod_ready.go:103] pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:32.206405   66989 pod_ready.go:82] duration metric: took 4m0.005864609s for pod "metrics-server-6867b74b74-mx5jh" in "kube-system" namespace to be "Ready" ...
	E0829 20:30:32.206426   66989 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0829 20:30:32.206433   66989 pod_ready.go:39] duration metric: took 4m5.570928284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
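[Editor's note: here process 66989 hits its wait budget — 4m0s on this one pod, 4m5.57s across the whole system-critical label set — the context deadline fires, and the waiter moves on to verifying the apiserver process instead of blocking further. Roughly the same wait expressed with kubectl (my substitution; minikube polls the API itself, and the `k8s-app=metrics-server` selector is an assumption):

  # Approximately the readiness wait that just timed out above.
  kubectl -n kube-system wait pod -l k8s-app=metrics-server \
    --for=condition=Ready --timeout=4m
]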
	I0829 20:30:32.206448   66989 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:30:32.206482   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:32.206528   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:32.260213   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:32.260242   66989 cri.go:89] found id: ""
	I0829 20:30:32.260252   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:32.260314   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.265201   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:32.265276   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:32.307620   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:32.307648   66989 cri.go:89] found id: ""
	I0829 20:30:32.307656   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:32.307701   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.312372   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:32.312430   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:32.350059   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:32.350092   66989 cri.go:89] found id: ""
	I0829 20:30:32.350102   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:32.350158   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.354624   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:32.354681   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:32.393968   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:32.393988   66989 cri.go:89] found id: ""
	I0829 20:30:32.393995   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:32.394039   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.398674   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:32.398745   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:32.433038   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:32.433064   66989 cri.go:89] found id: ""
	I0829 20:30:32.433074   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:32.433118   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.436969   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:32.437028   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:32.472768   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:32.472786   66989 cri.go:89] found id: ""
	I0829 20:30:32.472793   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:32.472842   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.477466   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:32.477536   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:32.514464   66989 cri.go:89] found id: ""
	I0829 20:30:32.514492   66989 logs.go:276] 0 containers: []
	W0829 20:30:32.514502   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:32.514509   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:32.514591   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:32.551429   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:32.551452   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:32.551456   66989 cri.go:89] found id: ""
	I0829 20:30:32.551463   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:32.551508   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.555697   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:32.559864   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:32.559883   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:32.609776   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:32.609803   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:32.648419   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:32.648446   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:32.685938   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:32.685969   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:32.728665   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:32.728693   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:32.770030   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:32.770068   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:32.907821   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:32.907850   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:32.923119   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:32.923149   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:32.979819   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:32.979853   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:33.020472   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:33.020496   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:33.074802   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:33.074838   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:33.112043   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:33.112072   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:33.624274   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:33.624316   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
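
The log-gathering cycle above can be reproduced by hand on the node when a wait like this hangs: resolve each component's container ID with crictl, then tail its logs. The commands below are taken verbatim from this run (the container ID is the kube-apiserver ID found above); control-plane components are looked up through CRI-O, while kubelet and CRI-O themselves log to journald:

    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
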
	I0829 20:30:33.861742   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:30:33.861849   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:30:33.861946   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:30:33.862075   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:30:33.862174   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:30:33.862276   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:30:33.862366   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:30:33.862467   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:30:33.862573   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:30:33.862794   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:30:33.863226   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:30:33.863323   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:30:33.863417   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:30:34.065914   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:30:34.235581   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:30:34.660452   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:30:34.724718   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:30:34.743897   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:30:34.746263   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:30:34.746369   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:30:34.893824   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
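
The [certs] and [kubeconfig] steps above are ordinary kubeadm init phases, so when a restart fails at this point each phase can be re-run in isolation. A minimal sketch, assuming the same kubeadm config file this run passes to kubeadm init (/var/tmp/minikube/kubeadm.yaml, which carries the non-default certificateDir /var/lib/minikube/certs):

    sudo kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
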
	I0829 20:30:33.494825   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:35.994300   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:34.895805   67607 out.go:235]   - Booting up control plane ...
	I0829 20:30:34.895941   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:30:34.904294   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:30:34.915103   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:30:34.915744   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:30:34.917923   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:30:35.351975   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:37.352013   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:36.202184   66989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:30:36.218838   66989 api_server.go:72] duration metric: took 4m17.334186395s to wait for apiserver process to appear ...
	I0829 20:30:36.218870   66989 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:30:36.218910   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:36.218963   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:36.263205   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:36.263233   66989 cri.go:89] found id: ""
	I0829 20:30:36.263243   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:36.263292   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.267466   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:36.267522   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:36.303894   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:36.303930   66989 cri.go:89] found id: ""
	I0829 20:30:36.303938   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:36.303996   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.308089   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:36.308170   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:36.347320   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:36.347392   66989 cri.go:89] found id: ""
	I0829 20:30:36.347414   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:36.347485   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.352121   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:36.352174   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:36.389760   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:36.389784   66989 cri.go:89] found id: ""
	I0829 20:30:36.389793   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:36.389853   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.394860   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:36.394919   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:36.430562   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:36.430587   66989 cri.go:89] found id: ""
	I0829 20:30:36.430597   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:36.430655   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.435151   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:36.435226   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:36.470714   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:36.470742   66989 cri.go:89] found id: ""
	I0829 20:30:36.470750   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:36.470816   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.475382   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:36.475446   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:36.514853   66989 cri.go:89] found id: ""
	I0829 20:30:36.514888   66989 logs.go:276] 0 containers: []
	W0829 20:30:36.514898   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:36.514910   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:36.514971   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:36.548229   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:36.548252   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:36.548256   66989 cri.go:89] found id: ""
	I0829 20:30:36.548263   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:36.548314   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.552484   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:36.556661   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:36.556681   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:36.622985   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:36.623019   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:36.678770   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:36.678799   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:36.731822   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:36.731849   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:36.768451   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:36.768482   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:36.803818   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:36.803846   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:37.225805   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:37.225849   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:37.245421   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:37.245458   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:37.358238   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:37.358266   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:37.401876   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:37.401913   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:37.438189   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:37.438223   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:37.475404   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:37.475433   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:37.511876   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:37.511903   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:38.493604   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:40.494396   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:40.054097   66989 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0829 20:30:40.058474   66989 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0829 20:30:40.059830   66989 api_server.go:141] control plane version: v1.31.0
	I0829 20:30:40.059850   66989 api_server.go:131] duration metric: took 3.840972907s to wait for apiserver health ...
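
The health probe above is a plain HTTPS GET that can be issued by hand; the endpoint and the expected "ok" body come straight from the two lines above (-k is needed because the apiserver serves a cluster-signed certificate):

    curl -k https://192.168.61.202:8443/healthz
    # prints: ok
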
	I0829 20:30:40.059857   66989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:30:40.059877   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:30:40.059924   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:30:40.101978   66989 cri.go:89] found id: "f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:40.102003   66989 cri.go:89] found id: ""
	I0829 20:30:40.102013   66989 logs.go:276] 1 containers: [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313]
	I0829 20:30:40.102073   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.107429   66989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:30:40.107496   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:30:40.145052   66989 cri.go:89] found id: "5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:40.145078   66989 cri.go:89] found id: ""
	I0829 20:30:40.145086   66989 logs.go:276] 1 containers: [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6]
	I0829 20:30:40.145133   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.149329   66989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:30:40.149394   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:30:40.187740   66989 cri.go:89] found id: "64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:40.187769   66989 cri.go:89] found id: ""
	I0829 20:30:40.187778   66989 logs.go:276] 1 containers: [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71]
	I0829 20:30:40.187838   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.192085   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:30:40.192156   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:30:40.231992   66989 cri.go:89] found id: "daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:40.232010   66989 cri.go:89] found id: ""
	I0829 20:30:40.232017   66989 logs.go:276] 1 containers: [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334]
	I0829 20:30:40.232060   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.236275   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:30:40.236333   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:30:40.279637   66989 cri.go:89] found id: "05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:40.279660   66989 cri.go:89] found id: ""
	I0829 20:30:40.279669   66989 logs.go:276] 1 containers: [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f]
	I0829 20:30:40.279727   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.288800   66989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:30:40.288876   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:30:40.341222   66989 cri.go:89] found id: "29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:40.341248   66989 cri.go:89] found id: ""
	I0829 20:30:40.341258   66989 logs.go:276] 1 containers: [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd]
	I0829 20:30:40.341322   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.346013   66989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:30:40.346088   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:30:40.383801   66989 cri.go:89] found id: ""
	I0829 20:30:40.383828   66989 logs.go:276] 0 containers: []
	W0829 20:30:40.383836   66989 logs.go:278] No container was found matching "kindnet"
	I0829 20:30:40.383842   66989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0829 20:30:40.383896   66989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0829 20:30:40.421847   66989 cri.go:89] found id: "668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:40.421874   66989 cri.go:89] found id: "585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:40.421879   66989 cri.go:89] found id: ""
	I0829 20:30:40.421889   66989 logs.go:276] 2 containers: [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523]
	I0829 20:30:40.421950   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.426229   66989 ssh_runner.go:195] Run: which crictl
	I0829 20:30:40.429902   66989 logs.go:123] Gathering logs for storage-provisioner [585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523] ...
	I0829 20:30:40.429931   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585208cde484ffffd0fc161f30592d65949f968441fe3e2579d2cd692ff94523"
	I0829 20:30:40.471015   66989 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:30:40.471039   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:30:40.831575   66989 logs.go:123] Gathering logs for dmesg ...
	I0829 20:30:40.831612   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:30:40.846195   66989 logs.go:123] Gathering logs for etcd [5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6] ...
	I0829 20:30:40.846230   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea75e14a71df4f1c6fa3abeb161ddf38f7d1b00350668ef8f47c975e44923d6"
	I0829 20:30:40.905469   66989 logs.go:123] Gathering logs for kube-scheduler [daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334] ...
	I0829 20:30:40.905507   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daeb4a7c3dc7074744a23b7d3b2f4d51ee7d1aa8089d872973b802d3a2a2f334"
	I0829 20:30:40.952303   66989 logs.go:123] Gathering logs for kube-proxy [05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f] ...
	I0829 20:30:40.952337   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05148cf016224eda2906ae21edd3d5fed9601f7a331e8c2b7217a65c400f583f"
	I0829 20:30:41.001278   66989 logs.go:123] Gathering logs for kube-controller-manager [29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd] ...
	I0829 20:30:41.001309   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29d4eb837325f8eaadf00a72c07080e4e194a9b1c01745601626b77a07fdc4dd"
	I0829 20:30:41.071045   66989 logs.go:123] Gathering logs for container status ...
	I0829 20:30:41.071089   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:30:41.120024   66989 logs.go:123] Gathering logs for kubelet ...
	I0829 20:30:41.120050   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 20:30:41.191412   66989 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:30:41.191445   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 20:30:41.321848   66989 logs.go:123] Gathering logs for kube-apiserver [f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313] ...
	I0829 20:30:41.321874   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c67cb1f348e3669153501e2e181cbf6d67a788726f6379e391e48594745313"
	I0829 20:30:41.370807   66989 logs.go:123] Gathering logs for coredns [64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71] ...
	I0829 20:30:41.370833   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64cc61492bb7fb9cf5a2a415ba7c456c37d79d46e0bafa6715d0127c95ebaf71"
	I0829 20:30:41.405913   66989 logs.go:123] Gathering logs for storage-provisioner [668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c] ...
	I0829 20:30:41.405939   66989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d380506744a50f354191919210df0ec35f981ac9e7560bec958cb1c37a90c"
	I0829 20:30:43.948957   66989 system_pods.go:59] 8 kube-system pods found
	I0829 20:30:43.948987   66989 system_pods.go:61] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running
	I0829 20:30:43.948992   66989 system_pods.go:61] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running
	I0829 20:30:43.948996   66989 system_pods.go:61] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running
	I0829 20:30:43.948999   66989 system_pods.go:61] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running
	I0829 20:30:43.949003   66989 system_pods.go:61] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running
	I0829 20:30:43.949006   66989 system_pods.go:61] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running
	I0829 20:30:43.949011   66989 system_pods.go:61] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:30:43.949015   66989 system_pods.go:61] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running
	I0829 20:30:43.949022   66989 system_pods.go:74] duration metric: took 3.889159839s to wait for pod list to return data ...
	I0829 20:30:43.949028   66989 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:30:43.951906   66989 default_sa.go:45] found service account: "default"
	I0829 20:30:43.951932   66989 default_sa.go:55] duration metric: took 2.897769ms for default service account to be created ...
	I0829 20:30:43.951943   66989 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:30:43.959246   66989 system_pods.go:86] 8 kube-system pods found
	I0829 20:30:43.959269   66989 system_pods.go:89] "coredns-6f6b679f8f-dg6t6" [92e89b20-ebf4-4738-8ca7-9dc2a0e5653a] Running
	I0829 20:30:43.959275   66989 system_pods.go:89] "etcd-embed-certs-388383" [a688325a-9ed2-488d-a1a1-aa440e37fa9f] Running
	I0829 20:30:43.959279   66989 system_pods.go:89] "kube-apiserver-embed-certs-388383" [7a1b715b-87a3-44e0-868d-a3184f5b9f61] Running
	I0829 20:30:43.959283   66989 system_pods.go:89] "kube-controller-manager-embed-certs-388383" [9d942083-4d39-448c-8151-424ea9d5e6af] Running
	I0829 20:30:43.959286   66989 system_pods.go:89] "kube-proxy-fcxs4" [649b40c8-4f4b-40d1-8179-baf378d4c7d7] Running
	I0829 20:30:43.959290   66989 system_pods.go:89] "kube-scheduler-embed-certs-388383" [87b73013-dfad-411d-aaa9-f2c0e39fb920] Running
	I0829 20:30:43.959296   66989 system_pods.go:89] "metrics-server-6867b74b74-mx5jh" [99e21acd-b7b8-4e6f-8c75-c112206aed89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:30:43.959302   66989 system_pods.go:89] "storage-provisioner" [021ca156-b7a8-4647-8efe-db17968fd5a8] Running
	I0829 20:30:43.959309   66989 system_pods.go:126] duration metric: took 7.361244ms to wait for k8s-apps to be running ...
	I0829 20:30:43.959318   66989 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:30:43.959356   66989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:30:43.976136   66989 system_svc.go:56] duration metric: took 16.811475ms WaitForService to wait for kubelet
	I0829 20:30:43.976167   66989 kubeadm.go:582] duration metric: took 4m25.091518378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:30:43.976193   66989 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:30:43.979345   66989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:30:43.979376   66989 node_conditions.go:123] node cpu capacity is 2
	I0829 20:30:43.979386   66989 node_conditions.go:105] duration metric: took 3.187489ms to run NodePressure ...
	I0829 20:30:43.979396   66989 start.go:241] waiting for startup goroutines ...
	I0829 20:30:43.979402   66989 start.go:246] waiting for cluster config update ...
	I0829 20:30:43.979414   66989 start.go:255] writing updated cluster config ...
	I0829 20:30:43.979729   66989 ssh_runner.go:195] Run: rm -f paused
	I0829 20:30:44.028715   66989 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:30:44.030675   66989 out.go:177] * Done! kubectl is now configured to use "embed-certs-388383" cluster and "default" namespace by default
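
At this point the embed-certs profile is usable from the host; a quick sanity check against the context name reported above:

    kubectl --context embed-certs-388383 get pods -n kube-system
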
	I0829 20:30:39.850811   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:41.850941   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:42.993711   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:45.492729   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:44.351171   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:46.849842   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:48.851125   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:47.494031   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:49.993291   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:51.350926   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:53.850966   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:52.494604   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:54.994054   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:56.350237   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:58.856068   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:56.994483   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:30:59.494879   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:01.351293   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:03.850415   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:01.994470   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:04.493393   68084 pod_ready.go:103] pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:05.851663   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:08.350513   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:06.988349   68084 pod_ready.go:82] duration metric: took 4m0.000994859s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" ...
	E0829 20:31:06.988378   68084 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5kk6q" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 20:31:06.988396   68084 pod_ready.go:39] duration metric: took 4m13.5587561s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:06.988421   68084 kubeadm.go:597] duration metric: took 4m20.63419422s to restartPrimaryControlPlane
	W0829 20:31:06.988470   68084 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:31:06.988492   68084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:31:10.350782   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:12.851120   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:14.919490   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:31:14.920124   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:14.920395   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:15.350794   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:17.351675   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:19.920740   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:19.920993   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
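
These kubelet-check failures repeat every few seconds until the 4m0s budget is spent. The probe kubeadm describes can be run directly on the node, and the kubelet unit state and journal usually explain why the endpoint refuses connections:

    curl -sSL http://localhost:10248/healthz
    systemctl status kubelet
    sudo journalctl -u kubelet -n 100
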
	I0829 20:31:19.858714   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:22.351208   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:24.851679   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:27.351087   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.177614   68084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.189095849s)
	I0829 20:31:33.177712   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:31:33.202840   68084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:31:33.220648   68084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:31:33.239458   68084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:31:33.239479   68084 kubeadm.go:157] found existing configuration files:
	
	I0829 20:31:33.239519   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 20:31:33.257831   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:31:33.257900   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:31:33.272621   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 20:31:33.287906   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:31:33.287975   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:31:33.302931   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 20:31:33.312359   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:31:33.312411   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:31:33.322850   68084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 20:31:33.332224   68084 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:31:33.332280   68084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
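
Each of the four cleanup steps above follows the same pattern: grep the kubeconfig file for the expected server URL and delete the file when the URL is absent. Here every grep exits with status 2 because kubeadm reset already removed the files, so each rm is a no-op. For a single file the pattern reduces to:

    if ! sudo grep -q "https://control-plane.minikube.internal:8444" /etc/kubernetes/admin.conf; then
      sudo rm -f /etc/kubernetes/admin.conf
    fi
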
	I0829 20:31:33.342072   68084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:31:33.388790   68084 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 20:31:33.388844   68084 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:31:33.506108   68084 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:31:33.506263   68084 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:31:33.506403   68084 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:31:33.515467   68084 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:31:29.921355   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:29.921591   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:31:29.351212   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:31.351683   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.850337   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:33.517487   68084 out.go:235]   - Generating certificates and keys ...
	I0829 20:31:33.517590   68084 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:31:33.517697   68084 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:31:33.517809   68084 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:31:33.517907   68084 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:31:33.518009   68084 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:31:33.518086   68084 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:31:33.518174   68084 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:31:33.518266   68084 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:31:33.518379   68084 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:31:33.518495   68084 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:31:33.518567   68084 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:31:33.518656   68084 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:31:33.888310   68084 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:31:34.000803   68084 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 20:31:34.103016   68084 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:31:34.461677   68084 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:31:34.617814   68084 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:31:34.618316   68084 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:31:34.622440   68084 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:31:34.624324   68084 out.go:235]   - Booting up control plane ...
	I0829 20:31:34.624428   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:31:34.624527   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:31:34.624882   68084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:31:34.647388   68084 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:31:34.653776   68084 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:31:34.653864   68084 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:31:34.795338   68084 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 20:31:34.795463   68084 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 20:31:35.797126   68084 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001854627s
	I0829 20:31:35.797253   68084 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 20:31:35.852495   66841 pod_ready.go:103] pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:37.344608   66841 pod_ready.go:82] duration metric: took 4m0.000461851s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" ...
	E0829 20:31:37.344637   66841 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-668dg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 20:31:37.344661   66841 pod_ready.go:39] duration metric: took 4m13.033970527s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:37.344693   66841 kubeadm.go:597] duration metric: took 4m20.095743839s to restartPrimaryControlPlane
	W0829 20:31:37.344752   66841 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 20:31:37.344780   66841 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:31:40.799092   68084 kubeadm.go:310] [api-check] The API server is healthy after 5.002121632s
	I0829 20:31:40.813865   68084 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 20:31:40.829677   68084 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 20:31:40.870324   68084 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 20:31:40.870598   68084 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-145096 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 20:31:40.889024   68084 kubeadm.go:310] [bootstrap-token] Using token: gy9sl5.6oyya9sd2gbep67e
	I0829 20:31:40.890947   68084 out.go:235]   - Configuring RBAC rules ...
	I0829 20:31:40.891083   68084 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 20:31:40.898748   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 20:31:40.912914   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 20:31:40.916739   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 20:31:40.923995   68084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 20:31:40.930447   68084 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 20:31:41.206632   68084 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 20:31:41.679673   68084 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 20:31:42.206707   68084 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 20:31:42.206733   68084 kubeadm.go:310] 
	I0829 20:31:42.206819   68084 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 20:31:42.206830   68084 kubeadm.go:310] 
	I0829 20:31:42.206974   68084 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 20:31:42.206996   68084 kubeadm.go:310] 
	I0829 20:31:42.207018   68084 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 20:31:42.207073   68084 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 20:31:42.207120   68084 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 20:31:42.207127   68084 kubeadm.go:310] 
	I0829 20:31:42.207189   68084 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 20:31:42.207196   68084 kubeadm.go:310] 
	I0829 20:31:42.207234   68084 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 20:31:42.207238   68084 kubeadm.go:310] 
	I0829 20:31:42.207285   68084 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 20:31:42.207382   68084 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 20:31:42.207473   68084 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 20:31:42.207484   68084 kubeadm.go:310] 
	I0829 20:31:42.207611   68084 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 20:31:42.207727   68084 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 20:31:42.207736   68084 kubeadm.go:310] 
	I0829 20:31:42.207854   68084 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token gy9sl5.6oyya9sd2gbep67e \
	I0829 20:31:42.207962   68084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 20:31:42.207983   68084 kubeadm.go:310] 	--control-plane 
	I0829 20:31:42.207986   68084 kubeadm.go:310] 
	I0829 20:31:42.208087   68084 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 20:31:42.208106   68084 kubeadm.go:310] 
	I0829 20:31:42.208214   68084 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token gy9sl5.6oyya9sd2gbep67e \
	I0829 20:31:42.208342   68084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
	I0829 20:31:42.209248   68084 kubeadm.go:310] W0829 20:31:33.349141    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:31:42.209595   68084 kubeadm.go:310] W0829 20:31:33.349919    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:31:42.209769   68084 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
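
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key; a joining node can recompute it with the standard openssl pipeline from the Kubernetes docs, reading the CA from this run's certificateDir (/var/lib/minikube/certs, per the [certs] line earlier) and assuming an RSA CA key, as kubeadm generates by default:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
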
	I0829 20:31:42.209803   68084 cni.go:84] Creating CNI manager for ""
	I0829 20:31:42.209817   68084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:31:42.211545   68084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:31:42.212889   68084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:31:42.223984   68084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
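
The 496-byte conflist scp'd above is minikube's bridge CNI configuration; its exact contents are not echoed in the log, but the file can be inspected on the node after the run:

    minikube ssh -p default-k8s-diff-port-145096 -- sudo cat /etc/cni/net.d/1-k8s.conflist
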
	I0829 20:31:42.242703   68084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:31:42.242779   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-145096 minikube.k8s.io/updated_at=2024_08_29T20_31_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=default-k8s-diff-port-145096 minikube.k8s.io/primary=true
	I0829 20:31:42.242779   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:42.448824   68084 ops.go:34] apiserver oom_adj: -16
	I0829 20:31:42.453004   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:42.953891   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:43.453922   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:43.953465   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:44.453647   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:44.954035   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:45.453660   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:45.953536   68084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:31:46.046900   68084 kubeadm.go:1113] duration metric: took 3.804195127s to wait for elevateKubeSystemPrivileges
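[Editor's note] The burst of identical `kubectl get sa default` runs above is a retry loop: minikube polls (roughly every 500ms, per the timestamps) until the default service account exists, which is what the 3.8s elevateKubeSystemPrivileges metric measures. A shell equivalent, as a sketch:

    # Poll until the default service account appears (assumed 0.5s interval)
    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done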
	I0829 20:31:46.046927   68084 kubeadm.go:394] duration metric: took 4m59.74590678s to StartCluster
	I0829 20:31:46.046947   68084 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:31:46.047046   68084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:31:46.048617   68084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:31:46.048876   68084 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.140 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:31:46.048979   68084 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:31:46.049063   68084 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049090   68084 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049090   68084 config.go:182] Loaded profile config "default-k8s-diff-port-145096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:31:46.049099   68084 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-145096"
	I0829 20:31:46.049136   68084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-145096"
	W0829 20:31:46.049143   68084 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:31:46.049174   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.049104   68084 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-145096"
	I0829 20:31:46.049264   68084 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-145096"
	W0829 20:31:46.049280   68084 addons.go:243] addon metrics-server should already be in state true
	I0829 20:31:46.049335   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.049569   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049574   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049595   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.049599   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.049698   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.049722   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.050441   68084 out.go:177] * Verifying Kubernetes components...
	I0829 20:31:46.052039   68084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:31:46.065735   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39367
	I0829 20:31:46.065909   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32931
	I0829 20:31:46.066241   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.066344   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.066900   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.066918   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.067024   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.067045   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.067438   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.067481   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.067665   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.067902   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.067931   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.069157   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0829 20:31:46.070637   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.070757   68084 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-145096"
	W0829 20:31:46.070771   68084 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:31:46.070803   68084 host.go:66] Checking if "default-k8s-diff-port-145096" exists ...
	I0829 20:31:46.071118   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.071124   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.071132   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.071155   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.071510   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.072052   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.072095   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.085524   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39387
	I0829 20:31:46.085987   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.086553   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.086576   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.086966   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.087138   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.087202   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0829 20:31:46.087621   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.088358   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.088381   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.088708   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.088806   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.089193   68084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:31:46.089363   68084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:31:46.090878   68084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:31:46.091571   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I0829 20:31:46.092208   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.092291   68084 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:31:46.092316   68084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:31:46.092337   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.092660   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.092687   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.093044   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.093230   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.095184   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.096265   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.096792   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.096821   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.097088   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.097274   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.097433   68084 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:31:46.097448   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.097645   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.098681   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:31:46.098697   68084 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:31:46.098715   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.101604   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.101993   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.102014   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.102328   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.102529   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.102687   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.102847   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.108154   68084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I0829 20:31:46.108627   68084 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:31:46.109111   68084 main.go:141] libmachine: Using API Version  1
	I0829 20:31:46.109129   68084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:31:46.109446   68084 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:31:46.109675   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetState
	I0829 20:31:46.111174   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .DriverName
	I0829 20:31:46.111440   68084 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:31:46.111452   68084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:31:46.111469   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHHostname
	I0829 20:31:46.114302   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.114805   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fe:e0", ip: ""} in network mk-default-k8s-diff-port-145096: {Iface:virbr2 ExpiryTime:2024-08-29 21:20:58 +0000 UTC Type:0 Mac:52:54:00:36:fe:e0 Iaid: IPaddr:192.168.72.140 Prefix:24 Hostname:default-k8s-diff-port-145096 Clientid:01:52:54:00:36:fe:e0}
	I0829 20:31:46.114832   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | domain default-k8s-diff-port-145096 has defined IP address 192.168.72.140 and MAC address 52:54:00:36:fe:e0 in network mk-default-k8s-diff-port-145096
	I0829 20:31:46.114921   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHPort
	I0829 20:31:46.115102   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHKeyPath
	I0829 20:31:46.115256   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .GetSSHUsername
	I0829 20:31:46.115400   68084 sshutil.go:53] new ssh client: &{IP:192.168.72.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/default-k8s-diff-port-145096/id_rsa Username:docker}
	I0829 20:31:46.277748   68084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:31:46.297001   68084 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-145096" to be "Ready" ...
	I0829 20:31:46.317473   68084 node_ready.go:49] node "default-k8s-diff-port-145096" has status "Ready":"True"
	I0829 20:31:46.317498   68084 node_ready.go:38] duration metric: took 20.469679ms for node "default-k8s-diff-port-145096" to be "Ready" ...
	I0829 20:31:46.317509   68084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:46.332180   68084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
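[Editor's note] node_ready and pod_ready implement the same condition checks that `kubectl wait` exposes; a roughly equivalent manual version of the two waits started above would be:

    # Wait for the node, then the etcd static pod, to report Ready (6m budget as in the log)
    kubectl --kubeconfig /var/lib/minikube/kubeconfig wait --for=condition=Ready \
      node/default-k8s-diff-port-145096 --timeout=6m
    kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system wait --for=condition=Ready \
      pod/etcd-default-k8s-diff-port-145096 --timeout=6m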
	I0829 20:31:46.393588   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:31:46.399404   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:31:46.399428   68084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:31:46.453014   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:31:46.460100   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:31:46.460126   68084 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:31:46.541980   68084 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:31:46.542002   68084 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:31:46.607148   68084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
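[Editor's note] All four staged metrics-server manifests go in through the single kubectl apply above. Whether the deployment ever becomes available is a separate question; a quick check would be the following, and with the fake.domain image selected earlier it should time out (the pod is still Pending at 20:31:55 below):

    # Watch the metrics-server rollout; with an unpullable image this never completes
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl \
      -n kube-system rollout status deployment/metrics-server --timeout=2m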
	I0829 20:31:47.296344   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296370   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.296445   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296471   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.296678   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.296722   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.296744   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.296764   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.298376   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298379   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298404   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.298412   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298420   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298436   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.298453   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.298464   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.298700   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.298726   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.298729   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.318720   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:47.318745   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:47.319031   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:47.319053   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:47.319069   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:47.870171   68084 pod_ready.go:93] pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:47.870198   68084 pod_ready.go:82] duration metric: took 1.537994965s for pod "etcd-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:47.870208   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.057308   68084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.450120563s)
	I0829 20:31:48.057358   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:48.057371   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:48.057667   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) DBG | Closing plugin on server side
	I0829 20:31:48.057722   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:48.057734   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:48.057747   68084 main.go:141] libmachine: Making call to close driver server
	I0829 20:31:48.057759   68084 main.go:141] libmachine: (default-k8s-diff-port-145096) Calling .Close
	I0829 20:31:48.057989   68084 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:31:48.058005   68084 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:31:48.058021   68084 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-145096"
	I0829 20:31:48.059886   68084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0829 20:31:48.061124   68084 addons.go:510] duration metric: took 2.012141801s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
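[Editor's note] The addon bookkeeping above (the toEnable map plus the "Setting addon" lines) is what runs when a profile starts with addons preselected. The same three addons could be toggled after the fact from the host; illustrative:

    # Enable the same addons on an existing profile
    minikube -p default-k8s-diff-port-145096 addons enable storage-provisioner
    minikube -p default-k8s-diff-port-145096 addons enable default-storageclass
    minikube -p default-k8s-diff-port-145096 addons enable metrics-server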
	I0829 20:31:48.875874   68084 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:48.875897   68084 pod_ready.go:82] duration metric: took 1.005682325s for pod "kube-apiserver-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.875912   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.879828   68084 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:48.879846   68084 pod_ready.go:82] duration metric: took 3.928263ms for pod "kube-controller-manager-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:48.879863   68084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:50.886764   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:49.922318   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:31:49.922554   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
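[Editor's note] These two interleaved lines come from a different minikube process (note the 67607 prefix) whose kubelet is not yet serving. The health probe kubeadm describes is quoted verbatim in the log line above:

    # The kubelet health probe kubeadm performs on the node
    curl -sSL http://localhost:10248/healthz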
	I0829 20:31:52.887708   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:55.387571   68084 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"False"
	I0829 20:31:55.886194   68084 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace has status "Ready":"True"
	I0829 20:31:55.886217   68084 pod_ready.go:82] duration metric: took 7.006347256s for pod "kube-scheduler-default-k8s-diff-port-145096" in "kube-system" namespace to be "Ready" ...
	I0829 20:31:55.886225   68084 pod_ready.go:39] duration metric: took 9.568704494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:31:55.886238   68084 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:31:55.886286   68084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:31:55.901604   68084 api_server.go:72] duration metric: took 9.852691692s to wait for apiserver process to appear ...
	I0829 20:31:55.901628   68084 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:31:55.901643   68084 api_server.go:253] Checking apiserver healthz at https://192.168.72.140:8444/healthz ...
	I0829 20:31:55.905564   68084 api_server.go:279] https://192.168.72.140:8444/healthz returned 200:
	ok
	I0829 20:31:55.906387   68084 api_server.go:141] control plane version: v1.31.0
	I0829 20:31:55.906406   68084 api_server.go:131] duration metric: took 4.772472ms to wait for apiserver health ...
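[Editor's note] The healthz probe can be reproduced against the same endpoint; the apiserver serves a plain "ok" body on success, matching the 200 logged above. Illustrative (-k skips TLS verification, since the cluster CA is not in the host trust store):

    curl -k https://192.168.72.140:8444/healthz   # prints: ok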
	I0829 20:31:55.906413   68084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:31:55.911423   68084 system_pods.go:59] 9 kube-system pods found
	I0829 20:31:55.911444   68084 system_pods.go:61] "coredns-6f6b679f8f-l25kd" [86947930-0d47-407a-b876-b482596fbe8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.911451   68084 system_pods.go:61] "coredns-6f6b679f8f-lnm92" [a6caefe0-e883-4460-87de-25ee97191e1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.911458   68084 system_pods.go:61] "etcd-default-k8s-diff-port-145096" [caba3f17-6544-4fe0-8dd3-0dd95e8df8ce] Running
	I0829 20:31:55.911465   68084 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-145096" [9b1ca00a-613b-414f-81e9-601d53d43207] Running
	I0829 20:31:55.911470   68084 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-145096" [e7145779-85cf-458d-9870-6fda4853d29d] Running
	I0829 20:31:55.911479   68084 system_pods.go:61] "kube-proxy-ptswc" [96c01414-e8e8-4731-824b-11d636285fb3] Running
	I0829 20:31:55.911488   68084 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-145096" [0d2cc607-72ac-4417-8a7c-196bf3ec90d7] Running
	I0829 20:31:55.911495   68084 system_pods.go:61] "metrics-server-6867b74b74-6sdqg" [2c9efadb-89bb-4aa6-b0f0-ddcb3e931674] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:31:55.911503   68084 system_pods.go:61] "storage-provisioner" [81531989-d045-44fb-b1a1-0817af27c804] Running
	I0829 20:31:55.911512   68084 system_pods.go:74] duration metric: took 5.092824ms to wait for pod list to return data ...
	I0829 20:31:55.911523   68084 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:31:55.913794   68084 default_sa.go:45] found service account: "default"
	I0829 20:31:55.913820   68084 default_sa.go:55] duration metric: took 2.286925ms for default service account to be created ...
	I0829 20:31:55.913830   68084 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:31:55.919628   68084 system_pods.go:86] 9 kube-system pods found
	I0829 20:31:55.919666   68084 system_pods.go:89] "coredns-6f6b679f8f-l25kd" [86947930-0d47-407a-b876-b482596fbe8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.919677   68084 system_pods.go:89] "coredns-6f6b679f8f-lnm92" [a6caefe0-e883-4460-87de-25ee97191e1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 20:31:55.919686   68084 system_pods.go:89] "etcd-default-k8s-diff-port-145096" [caba3f17-6544-4fe0-8dd3-0dd95e8df8ce] Running
	I0829 20:31:55.919693   68084 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-145096" [9b1ca00a-613b-414f-81e9-601d53d43207] Running
	I0829 20:31:55.919699   68084 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-145096" [e7145779-85cf-458d-9870-6fda4853d29d] Running
	I0829 20:31:55.919704   68084 system_pods.go:89] "kube-proxy-ptswc" [96c01414-e8e8-4731-824b-11d636285fb3] Running
	I0829 20:31:55.919710   68084 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-145096" [0d2cc607-72ac-4417-8a7c-196bf3ec90d7] Running
	I0829 20:31:55.919718   68084 system_pods.go:89] "metrics-server-6867b74b74-6sdqg" [2c9efadb-89bb-4aa6-b0f0-ddcb3e931674] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:31:55.919725   68084 system_pods.go:89] "storage-provisioner" [81531989-d045-44fb-b1a1-0817af27c804] Running
	I0829 20:31:55.919734   68084 system_pods.go:126] duration metric: took 5.897752ms to wait for k8s-apps to be running ...
	I0829 20:31:55.919745   68084 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:31:55.919800   68084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:31:55.935429   68084 system_svc.go:56] duration metric: took 15.676316ms WaitForService to wait for kubelet
	I0829 20:31:55.935460   68084 kubeadm.go:582] duration metric: took 9.886551311s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:31:55.935483   68084 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:31:55.938444   68084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:31:55.938466   68084 node_conditions.go:123] node cpu capacity is 2
	I0829 20:31:55.938476   68084 node_conditions.go:105] duration metric: took 2.988434ms to run NodePressure ...
	I0829 20:31:55.938486   68084 start.go:241] waiting for startup goroutines ...
	I0829 20:31:55.938493   68084 start.go:246] waiting for cluster config update ...
	I0829 20:31:55.938503   68084 start.go:255] writing updated cluster config ...
	I0829 20:31:55.938834   68084 ssh_runner.go:195] Run: rm -f paused
	I0829 20:31:55.987879   68084 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:31:55.989766   68084 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-145096" cluster and "default" namespace by default
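[Editor's note] At this point minikube has written the profile's context into the kubeconfig it updated at 20:31:46. Switching to it by hand (if another context had been selected since) would be:

    kubectl config use-context default-k8s-diff-port-145096
    kubectl get pods -A   # sanity check against the new cluster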
	I0829 20:32:03.506190   66841 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.161387814s)
	I0829 20:32:03.506268   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:03.530660   66841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 20:32:03.550784   66841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:32:03.565054   66841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:32:03.565085   66841 kubeadm.go:157] found existing configuration files:
	
	I0829 20:32:03.565131   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:32:03.586492   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:32:03.586577   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:32:03.605061   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:32:03.617990   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:32:03.618054   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:32:03.635587   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:32:03.645495   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:32:03.645559   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:32:03.655081   66841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:32:03.664640   66841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:32:03.664703   66841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
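[Editor's note] The grep/rm sequence above is a stale-config sweep: any of the four kubeconfigs that does not reference the expected control-plane endpoint is removed before kubeadm init re-creates it (here all four were already absent after the reset). Condensed into a loop, the logic is:

    # Remove kubeconfigs that don't point at the expected endpoint (sketch of the sweep above)
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done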
	I0829 20:32:03.674097   66841 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:32:03.721087   66841 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 20:32:03.721155   66841 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:32:03.839829   66841 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:32:03.839985   66841 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:32:03.840079   66841 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 20:32:03.849047   66841 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:32:03.850883   66841 out.go:235]   - Generating certificates and keys ...
	I0829 20:32:03.850970   66841 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:32:03.851045   66841 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:32:03.851129   66841 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:32:03.851222   66841 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:32:03.851292   66841 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:32:03.851340   66841 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:32:03.851399   66841 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:32:03.851450   66841 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:32:03.851515   66841 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:32:03.851620   66841 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:32:03.851687   66841 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:32:03.851755   66841 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:32:03.968189   66841 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:32:04.253016   66841 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 20:32:04.341190   66841 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:32:04.491607   66841 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:32:04.616753   66841 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:32:04.617354   66841 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:32:04.619961   66841 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:32:04.621690   66841 out.go:235]   - Booting up control plane ...
	I0829 20:32:04.621799   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:32:04.621910   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:32:04.622021   66841 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:32:04.643758   66841 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:32:04.650541   66841 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:32:04.650612   66841 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:32:04.786596   66841 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 20:32:04.786755   66841 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 20:32:05.788381   66841 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001614523s
	I0829 20:32:05.788512   66841 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 20:32:10.789752   66841 kubeadm.go:310] [api-check] The API server is healthy after 5.001571241s
	I0829 20:32:10.803237   66841 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 20:32:10.822640   66841 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 20:32:10.845744   66841 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 20:32:10.846050   66841 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-397724 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 20:32:10.856315   66841 kubeadm.go:310] [bootstrap-token] Using token: 3k2s43.7gy6mzkt91kkied7
	I0829 20:32:10.857834   66841 out.go:235]   - Configuring RBAC rules ...
	I0829 20:32:10.857947   66841 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 20:32:10.867339   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 20:32:10.876522   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 20:32:10.879786   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 20:32:10.885043   66841 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 20:32:10.892077   66841 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 20:32:11.196796   66841 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 20:32:11.630072   66841 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 20:32:12.200197   66841 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 20:32:12.200232   66841 kubeadm.go:310] 
	I0829 20:32:12.200314   66841 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 20:32:12.200326   66841 kubeadm.go:310] 
	I0829 20:32:12.200406   66841 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 20:32:12.200416   66841 kubeadm.go:310] 
	I0829 20:32:12.200450   66841 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 20:32:12.200536   66841 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 20:32:12.200606   66841 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 20:32:12.200616   66841 kubeadm.go:310] 
	I0829 20:32:12.200687   66841 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 20:32:12.200700   66841 kubeadm.go:310] 
	I0829 20:32:12.200744   66841 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 20:32:12.200750   66841 kubeadm.go:310] 
	I0829 20:32:12.200793   66841 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 20:32:12.200861   66841 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 20:32:12.200918   66841 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 20:32:12.200924   66841 kubeadm.go:310] 
	I0829 20:32:12.201048   66841 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 20:32:12.201144   66841 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 20:32:12.201152   66841 kubeadm.go:310] 
	I0829 20:32:12.201255   66841 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3k2s43.7gy6mzkt91kkied7 \
	I0829 20:32:12.201373   66841 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef \
	I0829 20:32:12.201400   66841 kubeadm.go:310] 	--control-plane 
	I0829 20:32:12.201411   66841 kubeadm.go:310] 
	I0829 20:32:12.201487   66841 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 20:32:12.201495   66841 kubeadm.go:310] 
	I0829 20:32:12.201574   66841 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3k2s43.7gy6mzkt91kkied7 \
	I0829 20:32:12.201710   66841 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c58f366b4ed2764d10805fd53157f06856e7a7b54161999850a1b1d7ac45d9ef 
	I0829 20:32:12.202900   66841 kubeadm.go:310] W0829 20:32:03.691334    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:32:12.203223   66841 kubeadm.go:310] W0829 20:32:03.692151    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 20:32:12.203339   66841 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:32:12.203366   66841 cni.go:84] Creating CNI manager for ""
	I0829 20:32:12.203381   66841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 20:32:12.205733   66841 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 20:32:12.206905   66841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 20:32:12.218121   66841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 20:32:12.237885   66841 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 20:32:12.237989   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:12.238006   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-397724 minikube.k8s.io/updated_at=2024_08_29T20_32_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=no-preload-397724 minikube.k8s.io/primary=true
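[Editor's note] The clusterrolebinding above grants cluster-admin to kube-system's default service account, and the label command stamps the node with minikube metadata (version, commit, timestamp). The labels are easy to verify afterwards:

    # Confirm the minikube.k8s.io/* labels landed on the node
    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get node no-preload-397724 --show-labels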
	I0829 20:32:12.282191   66841 ops.go:34] apiserver oom_adj: -16
	I0829 20:32:12.430006   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:12.930327   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:13.430210   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:13.930065   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:14.430163   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:14.930189   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:15.430677   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:15.930670   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:16.430943   66841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 20:32:16.549095   66841 kubeadm.go:1113] duration metric: took 4.311165714s to wait for elevateKubeSystemPrivileges
	I0829 20:32:16.549136   66841 kubeadm.go:394] duration metric: took 4m59.355577107s to StartCluster
	I0829 20:32:16.549156   66841 settings.go:142] acquiring lock: {Name:mka4cd5ddff5796cd0ca11509c181178f4f73529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:32:16.549229   66841 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:32:16.550926   66841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-11185/kubeconfig: {Name:mkf2973f4aeee1a0b1b095b363141903a62f4cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 20:32:16.551141   66841 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 20:32:16.551202   66841 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 20:32:16.551291   66841 addons.go:69] Setting storage-provisioner=true in profile "no-preload-397724"
	I0829 20:32:16.551315   66841 addons.go:69] Setting default-storageclass=true in profile "no-preload-397724"
	I0829 20:32:16.551329   66841 config.go:182] Loaded profile config "no-preload-397724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:32:16.551340   66841 addons.go:69] Setting metrics-server=true in profile "no-preload-397724"
	I0829 20:32:16.551389   66841 addons.go:234] Setting addon metrics-server=true in "no-preload-397724"
	W0829 20:32:16.551404   66841 addons.go:243] addon metrics-server should already be in state true
	I0829 20:32:16.551442   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.551360   66841 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-397724"
	I0829 20:32:16.551324   66841 addons.go:234] Setting addon storage-provisioner=true in "no-preload-397724"
	W0829 20:32:16.551673   66841 addons.go:243] addon storage-provisioner should already be in state true
	I0829 20:32:16.551705   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.551872   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.551873   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.551908   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.551929   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.552036   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.552065   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.552634   66841 out.go:177] * Verifying Kubernetes components...
	I0829 20:32:16.553973   66841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 20:32:16.567797   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43335
	I0829 20:32:16.568321   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.568884   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.568910   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.569328   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.569941   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.569978   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.573055   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0829 20:32:16.573642   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0829 20:32:16.573770   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.574303   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.574321   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.574394   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.574913   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.574933   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.574935   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.575471   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.575511   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.575724   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.575950   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.579912   66841 addons.go:234] Setting addon default-storageclass=true in "no-preload-397724"
	W0829 20:32:16.579932   66841 addons.go:243] addon default-storageclass should already be in state true
	I0829 20:32:16.579960   66841 host.go:66] Checking if "no-preload-397724" exists ...
	I0829 20:32:16.580281   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.580298   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.591264   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
	I0829 20:32:16.591442   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0829 20:32:16.591777   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.591827   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.592275   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.592289   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.592289   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.592307   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.592702   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.592726   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.592881   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.592882   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.594494   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.594956   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.596431   66841 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 20:32:16.596433   66841 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 20:32:16.597503   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 20:32:16.597524   66841 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 20:32:16.597547   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.597607   66841 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:32:16.597625   66841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 20:32:16.597641   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.598780   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32841
	I0829 20:32:16.599272   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.599915   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.599937   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.601210   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.601613   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.601965   66841 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19530-11185/.minikube/bin/docker-machine-driver-kvm2
	I0829 20:32:16.602159   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.602190   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.602328   66841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 20:32:16.602867   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.602998   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.603188   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.603234   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.603287   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.603434   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.603487   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.603691   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.603708   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.603857   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.603977   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.619336   66841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0829 20:32:16.619806   66841 main.go:141] libmachine: () Calling .GetVersion
	I0829 20:32:16.620269   66841 main.go:141] libmachine: Using API Version  1
	I0829 20:32:16.620286   66841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 20:32:16.620604   66841 main.go:141] libmachine: () Calling .GetMachineName
	I0829 20:32:16.620818   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetState
	I0829 20:32:16.622348   66841 main.go:141] libmachine: (no-preload-397724) Calling .DriverName
	I0829 20:32:16.622563   66841 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 20:32:16.622580   66841 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 20:32:16.622597   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHHostname
	I0829 20:32:16.625203   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.625542   66841 main.go:141] libmachine: (no-preload-397724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:bf:ac", ip: ""} in network mk-no-preload-397724: {Iface:virbr3 ExpiryTime:2024-08-29 21:26:52 +0000 UTC Type:0 Mac:52:54:00:e9:bf:ac Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:no-preload-397724 Clientid:01:52:54:00:e9:bf:ac}
	I0829 20:32:16.625570   66841 main.go:141] libmachine: (no-preload-397724) DBG | domain no-preload-397724 has defined IP address 192.168.50.214 and MAC address 52:54:00:e9:bf:ac in network mk-no-preload-397724
	I0829 20:32:16.625746   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHPort
	I0829 20:32:16.625934   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHKeyPath
	I0829 20:32:16.626094   66841 main.go:141] libmachine: (no-preload-397724) Calling .GetSSHUsername
	I0829 20:32:16.626266   66841 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/no-preload-397724/id_rsa Username:docker}
	I0829 20:32:16.787525   66841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 20:32:16.817674   66841 node_ready.go:35] waiting up to 6m0s for node "no-preload-397724" to be "Ready" ...
	I0829 20:32:16.833992   66841 node_ready.go:49] node "no-preload-397724" has status "Ready":"True"
	I0829 20:32:16.834030   66841 node_ready.go:38] duration metric: took 16.322874ms for node "no-preload-397724" to be "Ready" ...
	I0829 20:32:16.834042   66841 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 20:32:16.843147   66841 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:16.902589   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 20:32:16.902613   66841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 20:32:16.902859   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 20:32:16.903193   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 20:32:16.922497   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 20:32:16.922518   66841 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 20:32:16.966207   66841 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:32:16.966240   66841 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 20:32:17.004882   66841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 20:32:17.204576   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.204613   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.204968   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.204987   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.204995   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.204994   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.205002   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.205261   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.205278   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.211789   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.211811   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.212074   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.212089   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.212119   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.902866   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.902897   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.903218   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:17.903266   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.903278   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:17.903286   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:17.903296   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:17.903556   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:17.903572   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.344211   66841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.33928059s)
	I0829 20:32:18.344259   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:18.344274   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:18.344571   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:18.344589   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.344611   66841 main.go:141] libmachine: Making call to close driver server
	I0829 20:32:18.344626   66841 main.go:141] libmachine: (no-preload-397724) Calling .Close
	I0829 20:32:18.344948   66841 main.go:141] libmachine: (no-preload-397724) DBG | Closing plugin on server side
	I0829 20:32:18.344980   66841 main.go:141] libmachine: Successfully made call to close driver server
	I0829 20:32:18.345010   66841 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 20:32:18.345025   66841 addons.go:475] Verifying addon metrics-server=true in "no-preload-397724"
	I0829 20:32:18.346919   66841 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 20:32:18.348704   66841 addons.go:510] duration metric: took 1.797503952s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
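For reference, the addon state reported here can be cross-checked by hand once the profile is up (a minimal sketch in bash; assumes the minikube binary and the kubeconfig written by this run):

    # Addon states for this profile; names match the log line above.
    minikube -p no-preload-397724 addons list
    # metrics-server materializes as a Deployment in kube-system.
    kubectl --context no-preload-397724 -n kube-system get deployment metrics-server
    # default-storageclass installs minikube's default class, named "standard".
    kubectl --context no-preload-397724 get storageclass standard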
	I0829 20:32:18.850832   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:18.850853   66841 pod_ready.go:82] duration metric: took 2.007683093s for pod "coredns-6f6b679f8f-crgtj" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:18.850862   66841 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.357679   66841 pod_ready.go:93] pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.357702   66841 pod_ready.go:82] duration metric: took 1.506832539s for pod "coredns-6f6b679f8f-dw2r7" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.357710   66841 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.361830   66841 pod_ready.go:93] pod "etcd-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.361854   66841 pod_ready.go:82] duration metric: took 4.136801ms for pod "etcd-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.361865   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.365719   66841 pod_ready.go:93] pod "kube-apiserver-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.365733   66841 pod_ready.go:82] duration metric: took 3.861894ms for pod "kube-apiserver-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.365741   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.369596   66841 pod_ready.go:93] pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.369611   66841 pod_ready.go:82] duration metric: took 3.864669ms for pod "kube-controller-manager-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.369619   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f4x4j" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.447788   66841 pod_ready.go:93] pod "kube-proxy-f4x4j" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:20.447812   66841 pod_ready.go:82] duration metric: took 78.187574ms for pod "kube-proxy-f4x4j" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:20.447823   66841 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:22.049084   66841 pod_ready.go:93] pod "kube-scheduler-no-preload-397724" in "kube-system" namespace has status "Ready":"True"
	I0829 20:32:22.049105   66841 pod_ready.go:82] duration metric: took 1.601276793s for pod "kube-scheduler-no-preload-397724" in "kube-system" namespace to be "Ready" ...
	I0829 20:32:22.049113   66841 pod_ready.go:39] duration metric: took 5.215058301s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
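The readiness gate above can be reproduced with kubectl's built-in waiter, iterating over the same label selectors listed in the log (a sketch of equivalent behavior, not what pod_ready.go literally executes):

    # One wait per system-critical selector, mirroring the 6m0s budget.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context no-preload-397724 -n kube-system \
        wait pod -l "$sel" --for=condition=Ready --timeout=6m
    done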
	I0829 20:32:22.049125   66841 api_server.go:52] waiting for apiserver process to appear ...
	I0829 20:32:22.049172   66841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 20:32:22.066060   66841 api_server.go:72] duration metric: took 5.514888299s to wait for apiserver process to appear ...
	I0829 20:32:22.066086   66841 api_server.go:88] waiting for apiserver healthz status ...
	I0829 20:32:22.066109   66841 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I0829 20:32:22.072343   66841 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I0829 20:32:22.073798   66841 api_server.go:141] control plane version: v1.31.0
	I0829 20:32:22.073821   66841 api_server.go:131] duration metric: took 7.728095ms to wait for apiserver health ...
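The healthz probe is a plain HTTPS GET; on a default RBAC setup the /healthz path is readable by unauthenticated callers, so the same check can be repeated from the host:

    # -k skips verification of the cluster's self-signed serving certificate.
    curl -k https://192.168.50.214:8443/healthz   # prints "ok" on success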
	I0829 20:32:22.073828   66841 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 20:32:22.252273   66841 system_pods.go:59] 9 kube-system pods found
	I0829 20:32:22.252302   66841 system_pods.go:61] "coredns-6f6b679f8f-crgtj" [c48571a8-18ae-4737-a05b-4a77736aee35] Running
	I0829 20:32:22.252309   66841 system_pods.go:61] "coredns-6f6b679f8f-dw2r7" [6edda799-e2d6-402b-b4cd-7e54b2b89ca5] Running
	I0829 20:32:22.252315   66841 system_pods.go:61] "etcd-no-preload-397724" [15473208-a76c-4bc5-810f-e78d59538493] Running
	I0829 20:32:22.252320   66841 system_pods.go:61] "kube-apiserver-no-preload-397724" [521c6041-888f-4145-aabb-54da7382953d] Running
	I0829 20:32:22.252325   66841 system_pods.go:61] "kube-controller-manager-no-preload-397724" [fd5afaf8-898d-4985-8efc-5628709a52cd] Running
	I0829 20:32:22.252329   66841 system_pods.go:61] "kube-proxy-f4x4j" [eb76dc5a-016a-416c-8880-f76fc2d2a9bb] Running
	I0829 20:32:22.252333   66841 system_pods.go:61] "kube-scheduler-no-preload-397724" [77d9e2de-ee8e-4cb2-a7f0-5d9b96bd9691] Running
	I0829 20:32:22.252342   66841 system_pods.go:61] "metrics-server-6867b74b74-nxdc5" [6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:32:22.252348   66841 system_pods.go:61] "storage-provisioner" [8b6c02d6-7a39-4fea-80b4-4ba02904232c] Running
	I0829 20:32:22.252358   66841 system_pods.go:74] duration metric: took 178.523887ms to wait for pod list to return data ...
	I0829 20:32:22.252370   66841 default_sa.go:34] waiting for default service account to be created ...
	I0829 20:32:22.448475   66841 default_sa.go:45] found service account: "default"
	I0829 20:32:22.448499   66841 default_sa.go:55] duration metric: took 196.123693ms for default service account to be created ...
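The default_sa check amounts to looking the account up directly:

    kubectl --context no-preload-397724 -n default get serviceaccount default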
	I0829 20:32:22.448508   66841 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 20:32:22.650996   66841 system_pods.go:86] 9 kube-system pods found
	I0829 20:32:22.651023   66841 system_pods.go:89] "coredns-6f6b679f8f-crgtj" [c48571a8-18ae-4737-a05b-4a77736aee35] Running
	I0829 20:32:22.651029   66841 system_pods.go:89] "coredns-6f6b679f8f-dw2r7" [6edda799-e2d6-402b-b4cd-7e54b2b89ca5] Running
	I0829 20:32:22.651033   66841 system_pods.go:89] "etcd-no-preload-397724" [15473208-a76c-4bc5-810f-e78d59538493] Running
	I0829 20:32:22.651037   66841 system_pods.go:89] "kube-apiserver-no-preload-397724" [521c6041-888f-4145-aabb-54da7382953d] Running
	I0829 20:32:22.651042   66841 system_pods.go:89] "kube-controller-manager-no-preload-397724" [fd5afaf8-898d-4985-8efc-5628709a52cd] Running
	I0829 20:32:22.651045   66841 system_pods.go:89] "kube-proxy-f4x4j" [eb76dc5a-016a-416c-8880-f76fc2d2a9bb] Running
	I0829 20:32:22.651048   66841 system_pods.go:89] "kube-scheduler-no-preload-397724" [77d9e2de-ee8e-4cb2-a7f0-5d9b96bd9691] Running
	I0829 20:32:22.651054   66841 system_pods.go:89] "metrics-server-6867b74b74-nxdc5" [6061e81d-2f14-4c4a-9e0f-acb57dc9fb5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 20:32:22.651058   66841 system_pods.go:89] "storage-provisioner" [8b6c02d6-7a39-4fea-80b4-4ba02904232c] Running
	I0829 20:32:22.651065   66841 system_pods.go:126] duration metric: took 202.552304ms to wait for k8s-apps to be running ...
	I0829 20:32:22.651071   66841 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 20:32:22.651111   66841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:22.666831   66841 system_svc.go:56] duration metric: took 15.753046ms WaitForService to wait for kubelet
	I0829 20:32:22.666863   66841 kubeadm.go:582] duration metric: took 6.115692499s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 20:32:22.666888   66841 node_conditions.go:102] verifying NodePressure condition ...
	I0829 20:32:22.848742   66841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 20:32:22.848766   66841 node_conditions.go:123] node cpu capacity is 2
	I0829 20:32:22.848777   66841 node_conditions.go:105] duration metric: took 181.884368ms to run NodePressure ...
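The capacity figures quoted here (17734596Ki ephemeral storage, 2 CPUs) come straight from the node object and can be read back with a jsonpath query (sketch):

    # Prints the node's capacity map, including cpu and ephemeral-storage.
    kubectl --context no-preload-397724 get node no-preload-397724 \
      -o jsonpath='{.status.capacity}'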
	I0829 20:32:22.848787   66841 start.go:241] waiting for startup goroutines ...
	I0829 20:32:22.848794   66841 start.go:246] waiting for cluster config update ...
	I0829 20:32:22.848803   66841 start.go:255] writing updated cluster config ...
	I0829 20:32:22.849030   66841 ssh_runner.go:195] Run: rm -f paused
	I0829 20:32:22.897503   66841 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 20:32:22.899404   66841 out.go:177] * Done! kubectl is now configured to use "no-preload-397724" cluster and "default" namespace by default
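A quick sanity check that the kubeconfig landed as advertised:

    kubectl config current-context   # expect: no-preload-397724
    kubectl get nodes                # the single node should report Ready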
	I0829 20:32:29.924469   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:32:29.924707   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:32:29.924729   67607 kubeadm.go:310] 
	I0829 20:32:29.924801   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:32:29.924855   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:32:29.924865   67607 kubeadm.go:310] 
	I0829 20:32:29.924912   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:32:29.924960   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:32:29.925080   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:32:29.925090   67607 kubeadm.go:310] 
	I0829 20:32:29.925207   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:32:29.925256   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:32:29.925316   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:32:29.925342   67607 kubeadm.go:310] 
	I0829 20:32:29.925493   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:32:29.925616   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 20:32:29.925627   67607 kubeadm.go:310] 
	I0829 20:32:29.925776   67607 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:32:29.925909   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:32:29.926016   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:32:29.926134   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:32:29.926154   67607 kubeadm.go:310] 
	I0829 20:32:29.926605   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:32:29.926723   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:32:29.926812   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
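The commands kubeadm suggests have to run inside the VM for this run (pid 67607). A sketch, with <profile> as a placeholder since the v1.20.0 profile name does not appear in this excerpt:

    minikube ssh -p <profile>    # <profile>: the profile behind pid 67607 (placeholder)
    sudo systemctl status kubelet
    sudo journalctl -u kubelet --no-pager | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause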
	W0829 20:32:29.926935   67607 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0829 20:32:29.926979   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 20:32:30.389951   67607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 20:32:30.408455   67607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 20:32:30.418493   67607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 20:32:30.418513   67607 kubeadm.go:157] found existing configuration files:
	
	I0829 20:32:30.418582   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 20:32:30.427909   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 20:32:30.427957   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 20:32:30.437122   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 20:32:30.446157   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 20:32:30.446203   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 20:32:30.455480   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.464781   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 20:32:30.464834   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 20:32:30.474607   67607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 20:32:30.484537   67607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 20:32:30.484601   67607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
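The four grep-then-remove passes above condense to a single loop over the kubeconfig files (a sketch of the behavior, not kubeadm.go's literal code):

    # Drop each config unless it already points at the expected endpoint;
    # grep also fails when the file is missing, matching the runs above.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done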
	I0829 20:32:30.494170   67607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 20:32:30.717349   67607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 20:34:26.784436   67607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 20:34:26.784518   67607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 20:34:26.786158   67607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 20:34:26.786196   67607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 20:34:26.786276   67607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 20:34:26.786353   67607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 20:34:26.786437   67607 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 20:34:26.786486   67607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 20:34:26.788271   67607 out.go:235]   - Generating certificates and keys ...
	I0829 20:34:26.788380   67607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 20:34:26.788453   67607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 20:34:26.788523   67607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 20:34:26.788593   67607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 20:34:26.788665   67607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 20:34:26.788714   67607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 20:34:26.788769   67607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 20:34:26.788826   67607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 20:34:26.788894   67607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 20:34:26.788961   67607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 20:34:26.788993   67607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 20:34:26.789044   67607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 20:34:26.789084   67607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 20:34:26.789143   67607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 20:34:26.789228   67607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 20:34:26.789312   67607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 20:34:26.789441   67607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 20:34:26.789577   67607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 20:34:26.789647   67607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 20:34:26.789717   67607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 20:34:26.791166   67607 out.go:235]   - Booting up control plane ...
	I0829 20:34:26.791239   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 20:34:26.791305   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 20:34:26.791382   67607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 20:34:26.791465   67607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 20:34:26.791597   67607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 20:34:26.791658   67607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 20:34:26.791736   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.791926   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792008   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792182   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792254   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792435   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792492   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.792725   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.792798   67607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 20:34:26.793026   67607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 20:34:26.793043   67607 kubeadm.go:310] 
	I0829 20:34:26.793091   67607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 20:34:26.793148   67607 kubeadm.go:310] 		timed out waiting for the condition
	I0829 20:34:26.793159   67607 kubeadm.go:310] 
	I0829 20:34:26.793188   67607 kubeadm.go:310] 	This error is likely caused by:
	I0829 20:34:26.793219   67607 kubeadm.go:310] 		- The kubelet is not running
	I0829 20:34:26.793305   67607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 20:34:26.793314   67607 kubeadm.go:310] 
	I0829 20:34:26.793438   67607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 20:34:26.793483   67607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 20:34:26.793515   67607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 20:34:26.793522   67607 kubeadm.go:310] 
	I0829 20:34:26.793618   67607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 20:34:26.793735   67607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 20:34:26.793748   67607 kubeadm.go:310] 
	I0829 20:34:26.793895   67607 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 20:34:26.794020   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 20:34:26.794125   67607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 20:34:26.794227   67607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 20:34:26.794285   67607 kubeadm.go:310] 
	I0829 20:34:26.794300   67607 kubeadm.go:394] duration metric: took 7m57.183485424s to StartCluster
	I0829 20:34:26.794357   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 20:34:26.794410   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 20:34:26.837033   67607 cri.go:89] found id: ""
	I0829 20:34:26.837072   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.837083   67607 logs.go:278] No container was found matching "kube-apiserver"
	I0829 20:34:26.837091   67607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 20:34:26.837153   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 20:34:26.871177   67607 cri.go:89] found id: ""
	I0829 20:34:26.871203   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.871213   67607 logs.go:278] No container was found matching "etcd"
	I0829 20:34:26.871220   67607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 20:34:26.871280   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 20:34:26.905409   67607 cri.go:89] found id: ""
	I0829 20:34:26.905432   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.905442   67607 logs.go:278] No container was found matching "coredns"
	I0829 20:34:26.905450   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 20:34:26.905509   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 20:34:26.940119   67607 cri.go:89] found id: ""
	I0829 20:34:26.940150   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.940161   67607 logs.go:278] No container was found matching "kube-scheduler"
	I0829 20:34:26.940169   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 20:34:26.940217   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 20:34:26.974555   67607 cri.go:89] found id: ""
	I0829 20:34:26.974589   67607 logs.go:276] 0 containers: []
	W0829 20:34:26.974601   67607 logs.go:278] No container was found matching "kube-proxy"
	I0829 20:34:26.974608   67607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 20:34:26.974674   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 20:34:27.010586   67607 cri.go:89] found id: ""
	I0829 20:34:27.010616   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.010631   67607 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 20:34:27.010639   67607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 20:34:27.010704   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 20:34:27.044867   67607 cri.go:89] found id: ""
	I0829 20:34:27.044900   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.044913   67607 logs.go:278] No container was found matching "kindnet"
	I0829 20:34:27.044921   67607 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 20:34:27.044979   67607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 20:34:27.079282   67607 cri.go:89] found id: ""
	I0829 20:34:27.079308   67607 logs.go:276] 0 containers: []
	W0829 20:34:27.079316   67607 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 20:34:27.079323   67607 logs.go:123] Gathering logs for dmesg ...
	I0829 20:34:27.079335   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 20:34:27.093455   67607 logs.go:123] Gathering logs for describe nodes ...
	I0829 20:34:27.093485   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 20:34:27.179256   67607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 20:34:27.179280   67607 logs.go:123] Gathering logs for CRI-O ...
	I0829 20:34:27.179292   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 20:34:27.305873   67607 logs.go:123] Gathering logs for container status ...
	I0829 20:34:27.305906   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 20:34:27.349676   67607 logs.go:123] Gathering logs for kubelet ...
	I0829 20:34:27.349702   67607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
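The same diagnostics bundle can be pulled interactively with the commands the log gatherer runs on the node (--no-pager added for terminal use):

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u crio -n 400 --no-pager
    sudo crictl ps -a || sudo docker ps -a
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400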
	W0829 20:34:27.399787   67607 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 20:34:27.399919   67607 out.go:270] * 
	W0829 20:34:27.400631   67607 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 20:34:27.403773   67607 out.go:201] 
	W0829 20:34:27.404902   67607 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 20:34:27.404953   67607 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 20:34:27.404981   67607 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 20:34:27.406310   67607 out.go:201] 
	
	
	==> CRI-O <==
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.220890133Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964370220812859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8edee56d-5cc6-47c2-bb46-490f7dc45b37 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.221608135Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fec27d2c-c2ed-424d-84ce-281c8dbbcaf8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.221708422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fec27d2c-c2ed-424d-84ce-281c8dbbcaf8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.221758379Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fec27d2c-c2ed-424d-84ce-281c8dbbcaf8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.254096410Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86ee4063-aa1c-48c3-84a3-0b1ef5207ab0 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.254199765Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86ee4063-aa1c-48c3-84a3-0b1ef5207ab0 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.255131208Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bae6b6da-064b-4062-98d6-9df16064e860 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.255503744Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964370255483946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bae6b6da-064b-4062-98d6-9df16064e860 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.256035922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ffddce3-3cd4-4c5a-a0e1-7142b85598a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.256083924Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ffddce3-3cd4-4c5a-a0e1-7142b85598a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.256121594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6ffddce3-3cd4-4c5a-a0e1-7142b85598a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.293541742Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d73abad-29fa-4007-8adb-fdfeeff47e14 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.293623014Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d73abad-29fa-4007-8adb-fdfeeff47e14 name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.295049257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af601c25-ab08-4999-8aa0-5e4d507a41a8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.295444384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964370295417124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af601c25-ab08-4999-8aa0-5e4d507a41a8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.296154357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6066d571-266f-4cb9-bc9d-cda813e7ca6a name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.296209284Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6066d571-266f-4cb9-bc9d-cda813e7ca6a name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.296241369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6066d571-266f-4cb9-bc9d-cda813e7ca6a name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.337241794Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e20e3ef2-ada3-408d-8255-16ca5a7300ea name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.337349692Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e20e3ef2-ada3-408d-8255-16ca5a7300ea name=/runtime.v1.RuntimeService/Version
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.339054872Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18ed0b59-8512-4628-9406-be4c216a1dfa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.339472666Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724964370339450383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18ed0b59-8512-4628-9406-be4c216a1dfa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.340294870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f050803d-84b7-46b5-a28c-62c6429c145d name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.340366314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f050803d-84b7-46b5-a28c-62c6429c145d name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 20:46:10 old-k8s-version-032002 crio[630]: time="2024-08-29 20:46:10.340401400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f050803d-84b7-46b5-a28c-62c6429c145d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug29 20:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053894] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042317] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.920296] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.442854] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.576675] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.694150] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.062526] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052165] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.177300] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.162237] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.253464] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +6.389299] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.063933] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.901932] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[ +13.592201] kauditd_printk_skb: 46 callbacks suppressed
	[Aug29 20:30] systemd-fstab-generator[5044]: Ignoring "noauto" option for root device
	[Aug29 20:32] systemd-fstab-generator[5320]: Ignoring "noauto" option for root device
	[  +0.064706] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:46:10 up 20 min,  0 users,  load average: 0.00, 0.00, 0.03
	Linux old-k8s-version-032002 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 29 20:46:08 old-k8s-version-032002 kubelet[6834]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000a7db00)
	Aug 29 20:46:08 old-k8s-version-032002 kubelet[6834]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Aug 29 20:46:08 old-k8s-version-032002 kubelet[6834]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Aug 29 20:46:08 old-k8s-version-032002 kubelet[6834]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Aug 29 20:46:08 old-k8s-version-032002 kubelet[6834]: goroutine 158 [select]:
	Aug 29 20:46:08 old-k8s-version-032002 kubelet[6834]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c55ef0, 0x4f0ac20, 0xc000b94c80, 0x1, 0xc00009e0c0)
	Aug 29 20:46:08 old-k8s-version-032002 kubelet[6834]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Aug 29 20:46:08 old-k8s-version-032002 kubelet[6834]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001de700, 0xc00009e0c0)
	Aug 29 20:46:08 old-k8s-version-032002 kubelet[6834]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 29 20:46:08 old-k8s-version-032002 kubelet[6834]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 29 20:46:08 old-k8s-version-032002 kubelet[6834]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 29 20:46:08 old-k8s-version-032002 kubelet[6834]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a7f3e0, 0xc000c460c0)
	Aug 29 20:46:08 old-k8s-version-032002 kubelet[6834]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 29 20:46:08 old-k8s-version-032002 kubelet[6834]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 29 20:46:08 old-k8s-version-032002 kubelet[6834]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 29 20:46:08 old-k8s-version-032002 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 29 20:46:08 old-k8s-version-032002 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 29 20:46:09 old-k8s-version-032002 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 142.
	Aug 29 20:46:09 old-k8s-version-032002 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 29 20:46:09 old-k8s-version-032002 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 29 20:46:09 old-k8s-version-032002 kubelet[6860]: I0829 20:46:09.275474    6860 server.go:416] Version: v1.20.0
	Aug 29 20:46:09 old-k8s-version-032002 kubelet[6860]: I0829 20:46:09.276233    6860 server.go:837] Client rotation is on, will bootstrap in background
	Aug 29 20:46:09 old-k8s-version-032002 kubelet[6860]: I0829 20:46:09.279457    6860 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 29 20:46:09 old-k8s-version-032002 kubelet[6860]: I0829 20:46:09.280749    6860 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 29 20:46:09 old-k8s-version-032002 kubelet[6860]: W0829 20:46:09.281041    6860 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-032002 -n old-k8s-version-032002
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-032002 -n old-k8s-version-032002: exit status 2 (227.579095ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-032002" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (157.92s)
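
The kubelet journal above shows a crash loop (status=255, restart counter at 142) ending in "Cannot detect current cgroup on cgroup v2", which lines up with minikube's own suggestion to set the kubelet cgroup driver. A minimal workaround sketch, reusing the profile, driver, and Kubernetes version from this run; treat it as a hypothesis, since kubelet v1.20.0 predates GA cgroup v2 support and may simply not run on this guest image:

	# Confirm the guest is on cgroup v2 ("cgroup2fs" means v2, "tmpfs" means v1)
	minikube ssh -p old-k8s-version-032002 "stat -fc %T /sys/fs/cgroup/"
	# Retry with the cgroup driver the log suggests (see minikube issue 4172 linked above)
	minikube start -p old-k8s-version-032002 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd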

                                                
                                    

Test pass (254/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.91
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 4.11
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.05
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.58
22 TestOffline 104.74
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 175.3
31 TestAddons/serial/GCPAuth/Namespaces 0.15
35 TestAddons/parallel/InspektorGadget 12.08
37 TestAddons/parallel/HelmTiller 9.8
39 TestAddons/parallel/CSI 50.32
40 TestAddons/parallel/Headlamp 19.66
41 TestAddons/parallel/CloudSpanner 5.57
42 TestAddons/parallel/LocalPath 55.43
43 TestAddons/parallel/NvidiaDevicePlugin 6.49
44 TestAddons/parallel/Yakd 10.71
45 TestAddons/StoppedEnableDisable 7.53
46 TestCertOptions 49.7
47 TestCertExpiration 263.53
49 TestForceSystemdFlag 54.51
50 TestForceSystemdEnv 68.99
52 TestKVMDriverInstallOrUpdate 3.53
56 TestErrorSpam/setup 41.47
57 TestErrorSpam/start 0.32
58 TestErrorSpam/status 0.69
59 TestErrorSpam/pause 1.53
60 TestErrorSpam/unpause 1.84
61 TestErrorSpam/stop 5.87
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 86.6
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.41
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 5.87
73 TestFunctional/serial/CacheCmd/cache/add_local 1.56
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 34.07
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.38
84 TestFunctional/serial/LogsFileCmd 1.41
85 TestFunctional/serial/InvalidService 4.04
87 TestFunctional/parallel/ConfigCmd 0.31
88 TestFunctional/parallel/DashboardCmd 23.43
89 TestFunctional/parallel/DryRun 0.26
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 1.29
95 TestFunctional/parallel/ServiceCmdConnect 8.56
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 35.13
99 TestFunctional/parallel/SSHCmd 0.42
100 TestFunctional/parallel/CpCmd 1.37
101 TestFunctional/parallel/MySQL 24.52
102 TestFunctional/parallel/FileSync 0.24
103 TestFunctional/parallel/CertSync 1.45
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
111 TestFunctional/parallel/License 0.22
112 TestFunctional/parallel/ServiceCmd/DeployApp 11.22
113 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
117 TestFunctional/parallel/ProfileCmd/profile_list 0.32
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
119 TestFunctional/parallel/MountCmd/any-port 21.02
120 TestFunctional/parallel/ServiceCmd/List 0.85
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.83
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
123 TestFunctional/parallel/ServiceCmd/Format 0.32
124 TestFunctional/parallel/ServiceCmd/URL 0.29
125 TestFunctional/parallel/MountCmd/specific-port 1.62
126 TestFunctional/parallel/MountCmd/VerifyCleanup 2.56
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
140 TestFunctional/parallel/ImageCommands/ImageBuild 2.22
141 TestFunctional/parallel/ImageCommands/Setup 1.11
142 TestFunctional/parallel/Version/short 0.05
143 TestFunctional/parallel/Version/components 0.52
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.83
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.74
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.29
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 206.78
158 TestMultiControlPlane/serial/DeployApp 4.45
159 TestMultiControlPlane/serial/PingHostFromPods 1.16
160 TestMultiControlPlane/serial/AddWorkerNode 56.46
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
163 TestMultiControlPlane/serial/CopyFile 12.47
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.37
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.51
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
172 TestMultiControlPlane/serial/RestartCluster 463.37
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
174 TestMultiControlPlane/serial/AddSecondaryNode 79.53
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestJSONOutput/start/Command 83.87
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.7
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.62
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.35
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.18
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 89.14
211 TestMountStart/serial/StartWithMountFirst 23.94
212 TestMountStart/serial/VerifyMountFirst 0.36
213 TestMountStart/serial/StartWithMountSecond 27.63
214 TestMountStart/serial/VerifyMountSecond 0.36
215 TestMountStart/serial/DeleteFirst 0.65
216 TestMountStart/serial/VerifyMountPostDelete 0.36
217 TestMountStart/serial/Stop 1.28
218 TestMountStart/serial/RestartStopped 21.91
219 TestMountStart/serial/VerifyMountPostStop 0.35
222 TestMultiNode/serial/FreshStart2Nodes 114.72
223 TestMultiNode/serial/DeployApp2Nodes 3.96
224 TestMultiNode/serial/PingHostFrom2Pods 0.76
225 TestMultiNode/serial/AddNode 48.95
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 6.87
229 TestMultiNode/serial/StopNode 2.34
230 TestMultiNode/serial/StartAfterStop 38.54
232 TestMultiNode/serial/DeleteNode 2.21
234 TestMultiNode/serial/RestartMultiNode 184.88
235 TestMultiNode/serial/ValidateNameConflict 45.11
242 TestScheduledStopUnix 117
246 TestRunningBinaryUpgrade 158.87
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
252 TestNoKubernetes/serial/StartWithK8s 119.83
253 TestStoppedBinaryUpgrade/Setup 0.54
254 TestStoppedBinaryUpgrade/Upgrade 127.41
255 TestNoKubernetes/serial/StartWithStopK8s 39.45
256 TestNoKubernetes/serial/Start 38.75
257 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.74
259 TestNoKubernetes/serial/ProfileList 1.36
268 TestPause/serial/Start 83.98
269 TestNoKubernetes/serial/Stop 1.29
270 TestNoKubernetes/serial/StartNoArgs 44.04
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
272 TestPause/serial/SecondStartNoReconfiguration 72.01
273 TestPause/serial/Pause 0.78
274 TestPause/serial/VerifyStatus 0.25
275 TestPause/serial/Unpause 0.76
276 TestPause/serial/PauseAgain 0.92
277 TestPause/serial/DeletePaused 1.4
278 TestPause/serial/VerifyDeletedResources 5.22
286 TestNetworkPlugins/group/false 3
293 TestStartStop/group/no-preload/serial/FirstStart 133.56
295 TestStartStop/group/embed-certs/serial/FirstStart 141.22
296 TestStartStop/group/no-preload/serial/DeployApp 8.3
298 TestStartStop/group/newest-cni/serial/FirstStart 47.89
299 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
301 TestStartStop/group/embed-certs/serial/DeployApp 8.28
302 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.91
304 TestStartStop/group/newest-cni/serial/DeployApp 0
305 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.19
306 TestStartStop/group/newest-cni/serial/Stop 11.33
307 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
308 TestStartStop/group/newest-cni/serial/SecondStart 70.61
309 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
310 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
311 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
312 TestStartStop/group/newest-cni/serial/Pause 2.17
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.32
319 TestStartStop/group/no-preload/serial/SecondStart 679.09
320 TestStartStop/group/embed-certs/serial/SecondStart 569.98
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
324 TestStartStop/group/old-k8s-version/serial/Stop 4.29
325 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 459.7
337 TestNetworkPlugins/group/auto/Start 83.63
338 TestNetworkPlugins/group/kindnet/Start 85.99
339 TestNetworkPlugins/group/calico/Start 124.81
340 TestNetworkPlugins/group/auto/KubeletFlags 0.31
341 TestNetworkPlugins/group/auto/NetCatPod 12.93
342 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
343 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
344 TestNetworkPlugins/group/kindnet/NetCatPod 12.25
345 TestNetworkPlugins/group/auto/DNS 0.17
346 TestNetworkPlugins/group/auto/Localhost 0.14
347 TestNetworkPlugins/group/auto/HairPin 0.14
348 TestNetworkPlugins/group/kindnet/DNS 0.18
349 TestNetworkPlugins/group/kindnet/Localhost 0.15
350 TestNetworkPlugins/group/kindnet/HairPin 0.21
351 TestNetworkPlugins/group/custom-flannel/Start 70.12
352 TestNetworkPlugins/group/enable-default-cni/Start 71.89
353 TestNetworkPlugins/group/calico/ControllerPod 6.01
354 TestNetworkPlugins/group/calico/KubeletFlags 0.19
355 TestNetworkPlugins/group/calico/NetCatPod 12.22
356 TestNetworkPlugins/group/calico/DNS 0.19
357 TestNetworkPlugins/group/calico/Localhost 0.13
358 TestNetworkPlugins/group/calico/HairPin 0.15
359 TestNetworkPlugins/group/flannel/Start 70.19
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.21
362 TestNetworkPlugins/group/custom-flannel/DNS 0.18
363 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
364 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
365 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
366 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.6
367 TestNetworkPlugins/group/enable-default-cni/DNS 16.11
368 TestNetworkPlugins/group/bridge/Start 57.16
369 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
370 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
371 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
372 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.56
373 TestNetworkPlugins/group/flannel/ControllerPod 6.01
374 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
375 TestNetworkPlugins/group/flannel/NetCatPod 10.24
376 TestNetworkPlugins/group/flannel/DNS 0.15
377 TestNetworkPlugins/group/flannel/Localhost 0.13
378 TestNetworkPlugins/group/flannel/HairPin 0.14
379 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
380 TestNetworkPlugins/group/bridge/NetCatPod 10.21
381 TestNetworkPlugins/group/bridge/DNS 25.98
382 TestNetworkPlugins/group/bridge/Localhost 0.13
383 TestNetworkPlugins/group/bridge/HairPin 0.13

TestDownloadOnly/v1.20.0/json-events (8.91s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-800504 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-800504 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.906740612s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.91s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-800504
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-800504: exit status 85 (53.408015ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-800504 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |          |
	|         | -p download-only-800504        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:55:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:55:35.900758   18373 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:55:35.900885   18373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:35.900895   18373 out.go:358] Setting ErrFile to fd 2...
	I0829 18:55:35.900901   18373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:35.901086   18373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	W0829 18:55:35.901204   18373 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19530-11185/.minikube/config/config.json: open /home/jenkins/minikube-integration/19530-11185/.minikube/config/config.json: no such file or directory
	I0829 18:55:35.901734   18373 out.go:352] Setting JSON to true
	I0829 18:55:35.902648   18373 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2283,"bootTime":1724955453,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:55:35.902699   18373 start.go:139] virtualization: kvm guest
	I0829 18:55:35.904950   18373 out.go:97] [download-only-800504] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0829 18:55:35.905045   18373 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball: no such file or directory
	I0829 18:55:35.905083   18373 notify.go:220] Checking for updates...
	I0829 18:55:35.906334   18373 out.go:169] MINIKUBE_LOCATION=19530
	I0829 18:55:35.907656   18373 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:55:35.908922   18373 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 18:55:35.910291   18373 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 18:55:35.911722   18373 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0829 18:55:35.913950   18373 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0829 18:55:35.914156   18373 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:55:36.015691   18373 out.go:97] Using the kvm2 driver based on user configuration
	I0829 18:55:36.015732   18373 start.go:297] selected driver: kvm2
	I0829 18:55:36.015739   18373 start.go:901] validating driver "kvm2" against <nil>
	I0829 18:55:36.016048   18373 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:55:36.016162   18373 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19530-11185/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:55:36.030035   18373 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:55:36.030075   18373 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:55:36.030573   18373 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0829 18:55:36.030731   18373 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 18:55:36.030789   18373 cni.go:84] Creating CNI manager for ""
	I0829 18:55:36.030801   18373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:55:36.030808   18373 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:55:36.030852   18373 start.go:340] cluster config:
	{Name:download-only-800504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-800504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:55:36.031008   18373 iso.go:125] acquiring lock: {Name:mk1c9d3ac7f423dd4657884e37bdf4359f6328d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:55:36.032653   18373 out.go:97] Downloading VM boot image ...
	I0829 18:55:36.032678   18373 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19530-11185/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0829 18:55:39.037797   18373 out.go:97] Starting "download-only-800504" primary control-plane node in "download-only-800504" cluster
	I0829 18:55:39.037825   18373 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 18:55:39.069452   18373 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:55:39.069484   18373 cache.go:56] Caching tarball of preloaded images
	I0829 18:55:39.069691   18373 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 18:55:39.071371   18373 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0829 18:55:39.071391   18373 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0829 18:55:39.110643   18373 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19530-11185/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-800504 host does not exist
	  To start a cluster, run: "minikube start -p download-only-800504"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)
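
Both downloads above pin integrity in the URL's checksum query: the ISO points at a published .sha256 file (checksum=file:...), while the preload tarball carries an inline md5 (checksum=md5:f93b07cde9c3289306cbaeb7a1803c19) that the downloader verifies after fetching. A manual re-check of the preload artifact, sketched with the exact URL and digest from this log:

	curl -LO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	echo "f93b07cde9c3289306cbaeb7a1803c19  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -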

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-800504
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (4.11s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-273933 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-273933 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.107789951s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (4.11s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-273933
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-273933: exit status 85 (53.360419ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-800504 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | -p download-only-800504        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| delete  | -p download-only-800504        | download-only-800504 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| start   | -o=json --download-only        | download-only-273933 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | -p download-only-273933        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:55:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:55:45.110585   18583 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:55:45.110842   18583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:45.110851   18583 out.go:358] Setting ErrFile to fd 2...
	I0829 18:55:45.110856   18583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:45.111039   18583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 18:55:45.111587   18583 out.go:352] Setting JSON to true
	I0829 18:55:45.112425   18583 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2292,"bootTime":1724955453,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:55:45.112481   18583 start.go:139] virtualization: kvm guest
	I0829 18:55:45.114484   18583 out.go:97] [download-only-273933] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:55:45.114669   18583 notify.go:220] Checking for updates...
	I0829 18:55:45.115971   18583 out.go:169] MINIKUBE_LOCATION=19530
	I0829 18:55:45.117364   18583 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:55:45.118665   18583 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 18:55:45.119953   18583 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 18:55:45.121120   18583 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-273933 host does not exist
	  To start a cluster, run: "minikube start -p download-only-273933"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.05s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-273933
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-124601 --alsologtostderr --binary-mirror http://127.0.0.1:41153 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-124601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-124601
--- PASS: TestBinaryMirror (0.58s)
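
The test points minikube's kubectl/kubelet/kubeadm downloads at a local HTTP endpoint (127.0.0.1:41153) that the test harness stands up itself. A hand-rolled equivalent, assuming (not confirmed by this log) the mirror follows the upstream release path scheme <version>/bin/linux/amd64/<binary>; the profile name below is arbitrary:

	# Stage pre-downloaded release binaries under the assumed layout, then serve them
	mkdir -p mirror/v1.31.0/bin/linux/amd64
	cp kubectl kubelet kubeadm mirror/v1.31.0/bin/linux/amd64/
	(cd mirror && python3 -m http.server 41153) &
	minikube start --download-only -p binary-mirror-test --binary-mirror http://127.0.0.1:41153 --driver=kvm2 --container-runtime=crio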

                                                
                                    
TestOffline (104.74s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-442770 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-442770 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m43.288492793s)
helpers_test.go:175: Cleaning up "offline-crio-442770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-442770
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-442770: (1.455635928s)
--- PASS: TestOffline (104.74s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-344587
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-344587: exit status 85 (47.374415ms)
-- stdout --
	* Profile "addons-344587" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-344587"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-344587
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-344587: exit status 85 (55.4455ms)
-- stdout --
	* Profile "addons-344587" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-344587"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (175.3s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-344587 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-344587 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m55.296397506s)
--- PASS: TestAddons/Setup (175.30s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-344587 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-344587 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/parallel/InspektorGadget (12.08s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jhzqm" [73958074-8dc1-464e-83fe-88616e2a3ba3] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004430655s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-344587
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-344587: (6.073350178s)
--- PASS: TestAddons/parallel/InspektorGadget (12.08s)

TestAddons/parallel/HelmTiller (9.8s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.25884ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-bxws5" [d2380d68-348a-4dc1-8c40-1a4e9fa6ab04] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004392649s
addons_test.go:475: (dbg) Run:  kubectl --context addons-344587 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-344587 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.154314658s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.80s)

TestAddons/parallel/CSI (50.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.60524ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-344587 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-344587 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2ce4469a-59d3-41db-ad8d-134e8c5dbdbd] Pending
helpers_test.go:344: "task-pv-pod" [2ce4469a-59d3-41db-ad8d-134e8c5dbdbd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2ce4469a-59d3-41db-ad8d-134e8c5dbdbd] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003605648s
addons_test.go:590: (dbg) Run:  kubectl --context addons-344587 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-344587 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-344587 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-344587 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-344587 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-344587 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-344587 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2b8f4242-8b74-4e63-826e-3f4585bab03f] Pending
helpers_test.go:344: "task-pv-pod-restore" [2b8f4242-8b74-4e63-826e-3f4585bab03f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2b8f4242-8b74-4e63-826e-3f4585bab03f] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004474334s
addons_test.go:632: (dbg) Run:  kubectl --context addons-344587 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-344587 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-344587 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-344587 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.859385995s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.32s)
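
The repeated helpers_test.go:394 lines above are a readiness poll: the helper re-runs `kubectl get pvc <name> -o jsonpath={.status.phase}` until the claim reports the phase it is waiting for (Bound) or the 6m0s budget runs out. A minimal Go sketch of that pattern, assuming a hypothetical pollPVCPhase helper; the 2-second interval and error handling are guesses, not the actual helpers_test.go implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pollPVCPhase re-runs `kubectl get pvc` until the claim reports the wanted
// phase or the deadline passes, mirroring the repeated jsonpath queries above.
func pollPVCPhase(kubeContext, namespace, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed interval; the real helper's backoff may differ
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", namespace, name, want, timeout)
}

func main() {
	// Values taken from the log: context addons-344587, claim "hpvc", 6m0s wait.
	if err := pollPVCPhase("addons-344587", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}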

TestAddons/parallel/Headlamp (19.66s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-344587 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-8kp2k" [5858e660-705d-4449-9523-2c3b39c58625] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-8kp2k" [5858e660-705d-4449-9523-2c3b39c58625] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-8kp2k" [5858e660-705d-4449-9523-2c3b39c58625] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.006279031s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-344587 addons disable headlamp --alsologtostderr -v=1: (5.689979474s)
--- PASS: TestAddons/parallel/Headlamp (19.66s)

TestAddons/parallel/CloudSpanner (5.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-mk2lt" [0aeb7893-5a6a-4410-8e62-2b9cc89c2e1d] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004808334s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-344587
--- PASS: TestAddons/parallel/CloudSpanner (5.57s)

TestAddons/parallel/LocalPath (55.43s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-344587 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-344587 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-344587 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b2aee8f3-91eb-4f76-8271-9f9c888a8ccf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b2aee8f3-91eb-4f76-8271-9f9c888a8ccf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b2aee8f3-91eb-4f76-8271-9f9c888a8ccf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003293444s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-344587 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 ssh "cat /opt/local-path-provisioner/pvc-d653ba56-6232-4797-9e26-74b3f827dc87_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-344587 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-344587 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-344587 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.6227029s)
--- PASS: TestAddons/parallel/LocalPath (55.43s)

TestAddons/parallel/NvidiaDevicePlugin (6.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-z559z" [f30c9660-ea3d-40c2-9842-bcf8bb18c0b6] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004438914s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-344587
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.49s)

TestAddons/parallel/Yakd (10.71s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-k8bsv" [65762b52-589a-4074-9adf-a868b9cee3eb] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004356776s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-344587 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-344587 addons disable yakd --alsologtostderr -v=1: (5.707305811s)
--- PASS: TestAddons/parallel/Yakd (10.71s)

TestAddons/StoppedEnableDisable (7.53s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-344587
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-344587: (7.276431625s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-344587
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-344587
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-344587
--- PASS: TestAddons/StoppedEnableDisable (7.53s)

TestCertOptions (49.7s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-323073 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-323073 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (48.045392867s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-323073 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-323073 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-323073 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-323073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-323073
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-323073: (1.130210404s)
--- PASS: TestCertOptions (49.70s)

TestCertExpiration (263.53s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-621378 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-621378 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (43.570390788s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-621378 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-621378 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.1426935s)
helpers_test.go:175: Cleaning up "cert-expiration-621378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-621378
--- PASS: TestCertExpiration (263.53s)

TestForceSystemdFlag (54.51s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-298108 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-298108 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (53.285236171s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-298108 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-298108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-298108
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-298108: (1.019636226s)
--- PASS: TestForceSystemdFlag (54.51s)

TestForceSystemdEnv (68.99s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-610471 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-610471 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m8.028517816s)
helpers_test.go:175: Cleaning up "force-systemd-env-610471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-610471
--- PASS: TestForceSystemdEnv (68.99s)

TestKVMDriverInstallOrUpdate (3.53s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.53s)

TestErrorSpam/setup (41.47s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-332455 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-332455 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-332455 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-332455 --driver=kvm2  --container-runtime=crio: (41.472190919s)
--- PASS: TestErrorSpam/setup (41.47s)

TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.69s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 status
--- PASS: TestErrorSpam/status (0.69s)

TestErrorSpam/pause (1.53s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 pause
--- PASS: TestErrorSpam/pause (1.53s)

TestErrorSpam/unpause (1.84s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

TestErrorSpam/stop (5.87s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 stop: (2.349204872s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 stop: (2.001419636s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-332455 --log_dir /tmp/nospam-332455 stop: (1.514579535s)
--- PASS: TestErrorSpam/stop (5.87s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19530-11185/.minikube/files/etc/test/nested/copy/18361/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (86.6s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-936043 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0829 19:13:45.975707   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:13:45.982467   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:13:45.993785   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:13:46.015124   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:13:46.056514   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:13:46.138018   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:13:46.299540   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:13:46.621184   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:13:47.263198   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:13:48.544805   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:13:51.106731   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:13:56.228660   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:06.470383   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:26.951917   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:15:07.914336   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-936043 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m26.596560396s)
--- PASS: TestFunctional/serial/StartWithProxy (86.60s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.41s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-936043 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-936043 --alsologtostderr -v=8: (38.409244472s)
functional_test.go:663: soft start took 38.409952061s for "functional-936043" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.41s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-936043 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-936043 cache add registry.k8s.io/pause:3.1: (2.168423682s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-936043 cache add registry.k8s.io/pause:3.3: (2.568031351s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-936043 cache add registry.k8s.io/pause:latest: (1.136213119s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.87s)

TestFunctional/serial/CacheCmd/cache/add_local (1.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-936043 /tmp/TestFunctionalserialCacheCmdcacheadd_local3614901479/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 cache add minikube-local-cache-test:functional-936043
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-936043 cache add minikube-local-cache-test:functional-936043: (1.237236798s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 cache delete minikube-local-cache-test:functional-936043
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-936043
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.56s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936043 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (203.070028ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
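
The sequence above is the cache-reload round trip: the image is deleted on the node with `crictl rmi`, `crictl inspecti` is shown to fail (exit status 1, FATA "no such image"), and after `minikube cache reload` the same inspecti succeeds. A rough Go sketch of that verify/reload/verify loop using the commands exactly as logged; the inspecti wrapper and the error handling are assumptions, not the test's actual code:

package main

import (
	"fmt"
	"os/exec"
)

// inspecti returns nil only when the image is present in the node's CRI store.
func inspecti(profile, image string) error {
	return exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo", "crictl", "inspecti", image).Run()
}

func main() {
	const profile, image = "functional-936043", "registry.k8s.io/pause:latest"

	// Remove the image from the node, as in functional_test.go:1147.
	exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo", "crictl", "rmi", image).Run()

	if inspecti(profile, image) == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}

	// Push the locally cached image back onto the node.
	exec.Command("out/minikube-linux-amd64", "-p", profile, "cache", "reload").Run()

	if err := inspecti(profile, image); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}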

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 kubectl -- --context functional-936043 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-936043 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (34.07s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-936043 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0829 19:16:29.836050   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-936043 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.066542576s)
functional_test.go:761: restart took 34.066688218s for "functional-936043" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.07s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-936043 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.38s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-936043 logs: (1.382301613s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

TestFunctional/serial/LogsFileCmd (1.41s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 logs --file /tmp/TestFunctionalserialLogsFileCmd3382008058/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-936043 logs --file /tmp/TestFunctionalserialLogsFileCmd3382008058/001/logs.txt: (1.404327603s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.41s)

TestFunctional/serial/InvalidService (4.04s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-936043 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-936043
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-936043: exit status 115 (261.382834ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.111:31280 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-936043 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.04s)

TestFunctional/parallel/ConfigCmd (0.31s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936043 config get cpus: exit status 14 (47.422573ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936043 config get cpus: exit status 14 (48.483517ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)
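
The exchanges above pin down minikube's config exit codes: `config get` on an unset key exits with status 14 and prints "Error: specified key could not be found in config", while set/get/unset round-trips cleanly. A hedged Go sketch of reading that exit code; the configGet helper is hypothetical, not part of the test suite:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// configGet runs `minikube config get` and surfaces the exit code, since the
// CLI signals "key not set" with exit status 14 rather than on stdout.
func configGet(profile, key string) (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", profile, "config", "get", key).Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return "", exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), 0
}

func main() {
	const profile = "functional-936043"
	if _, code := configGet(profile, "cpus"); code != 14 {
		fmt.Println("expected exit status 14 for an unset key, got", code)
	}
	exec.Command("out/minikube-linux-amd64", "-p", profile, "config", "set", "cpus", "2").Run()
	if val, code := configGet(profile, "cpus"); code == 0 {
		fmt.Println("cpus =", val)
	}
	exec.Command("out/minikube-linux-amd64", "-p", profile, "config", "unset", "cpus").Run()
}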

TestFunctional/parallel/DashboardCmd (23.43s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-936043 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-936043 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28046: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (23.43s)

TestFunctional/parallel/DryRun (0.26s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-936043 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-936043 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (132.128204ms)
-- stdout --
	* [functional-936043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0829 19:16:40.814706   27954 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:16:40.814859   27954 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:16:40.814870   27954 out.go:358] Setting ErrFile to fd 2...
	I0829 19:16:40.814876   27954 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:16:40.815057   27954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:16:40.815580   27954 out.go:352] Setting JSON to false
	I0829 19:16:40.816560   27954 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3548,"bootTime":1724955453,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:16:40.816621   27954 start.go:139] virtualization: kvm guest
	I0829 19:16:40.818828   27954 out.go:177] * [functional-936043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:16:40.820937   27954 notify.go:220] Checking for updates...
	I0829 19:16:40.820945   27954 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 19:16:40.822335   27954 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:16:40.823752   27954 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:16:40.825257   27954 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:16:40.826592   27954 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:16:40.827983   27954 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:16:40.829633   27954 config.go:182] Loaded profile config "functional-936043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:16:40.830017   27954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:16:40.830063   27954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:16:40.844938   27954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43879
	I0829 19:16:40.845354   27954 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:16:40.845950   27954 main.go:141] libmachine: Using API Version  1
	I0829 19:16:40.845976   27954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:16:40.846377   27954 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:16:40.846565   27954 main.go:141] libmachine: (functional-936043) Calling .DriverName
	I0829 19:16:40.846811   27954 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:16:40.847094   27954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:16:40.847126   27954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:16:40.862171   27954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0829 19:16:40.862602   27954 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:16:40.863103   27954 main.go:141] libmachine: Using API Version  1
	I0829 19:16:40.863128   27954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:16:40.863449   27954 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:16:40.863643   27954 main.go:141] libmachine: (functional-936043) Calling .DriverName
	I0829 19:16:40.897268   27954 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:16:40.898634   27954 start.go:297] selected driver: kvm2
	I0829 19:16:40.898647   27954 start.go:901] validating driver "kvm2" against &{Name:functional-936043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-936043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.111 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:16:40.898757   27954 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:16:40.900874   27954 out.go:201] 
	W0829 19:16:40.902097   27954 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0829 19:16:40.903341   27954 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-936043 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)
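
Note that --dry-run exercises the full flag-validation path against the existing profile without touching the VM, which is why the undersized --memory request fails in ~130ms with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) rather than partway through a start. The same check by hand (a sketch):

  $ out/minikube-linux-amd64 start -p functional-936043 --dry-run --memory 250MB \
      --driver=kvm2 --container-runtime=crio
  $ echo $?   # 23: requested 250MiB is below the 1800MB usable minimum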

TestFunctional/parallel/InternationalLanguage (0.13s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-936043 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-936043 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (133.444721ms)

-- stdout --
	* [functional-936043] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0829 19:16:40.682128   27926 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:16:40.682242   27926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:16:40.682250   27926 out.go:358] Setting ErrFile to fd 2...
	I0829 19:16:40.682254   27926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:16:40.682503   27926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:16:40.683038   27926 out.go:352] Setting JSON to false
	I0829 19:16:40.683960   27926 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3548,"bootTime":1724955453,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:16:40.684018   27926 start.go:139] virtualization: kvm guest
	I0829 19:16:40.685892   27926 out.go:177] * [functional-936043] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0829 19:16:40.687189   27926 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 19:16:40.687212   27926 notify.go:220] Checking for updates...
	I0829 19:16:40.689971   27926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:16:40.691212   27926 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 19:16:40.692565   27926 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 19:16:40.693907   27926 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:16:40.695284   27926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:16:40.696939   27926 config.go:182] Loaded profile config "functional-936043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:16:40.697299   27926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:16:40.697373   27926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:16:40.712269   27926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0829 19:16:40.712654   27926 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:16:40.713194   27926 main.go:141] libmachine: Using API Version  1
	I0829 19:16:40.713216   27926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:16:40.713516   27926 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:16:40.713715   27926 main.go:141] libmachine: (functional-936043) Calling .DriverName
	I0829 19:16:40.713999   27926 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:16:40.714289   27926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:16:40.714335   27926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:16:40.728946   27926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44349
	I0829 19:16:40.729379   27926 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:16:40.730042   27926 main.go:141] libmachine: Using API Version  1
	I0829 19:16:40.730067   27926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:16:40.730413   27926 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:16:40.730623   27926 main.go:141] libmachine: (functional-936043) Calling .DriverName
	I0829 19:16:40.762789   27926 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0829 19:16:40.764300   27926 start.go:297] selected driver: kvm2
	I0829 19:16:40.764310   27926 start.go:901] validating driver "kvm2" against &{Name:functional-936043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-936043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.111 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:16:40.764428   27926 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:16:40.766735   27926 out.go:201] 
	W0829 19:16:40.768108   27926 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0829 19:16:40.769433   27926 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (1.29s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.29s)
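
The -f flag used above takes a Go template over the status struct, so individual fields can be pulled out for scripting; the "kublet:" label in the recorded command is just literal text in the template, while the field itself is .Kubelet. A sketch of the two machine-friendly forms:

  $ out/minikube-linux-amd64 -p functional-936043 status -f '{{.Host}}'   # e.g. Running
  $ out/minikube-linux-amd64 -p functional-936043 status -o json          # full status as JSON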

TestFunctional/parallel/ServiceCmdConnect (8.56s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-936043 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-936043 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4vl6r" [c8a90ea2-50a2-4111-928a-a2e2f5c600a4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4vl6r" [c8a90ea2-50a2-4111-928a-a2e2f5c600a4] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004905223s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.111:31631
functional_test.go:1675: http://192.168.39.111:31631: success! body:

Hostname: hello-node-connect-67bdd5bbb4-4vl6r

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.111:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.111:31631
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.56s)
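
The echoserver body above is the whole verification: the pod reports the request exactly as it arrived through the NodePort. The same path can be exercised by hand (a sketch; curl stands in for the Go HTTP client the test uses):

  $ kubectl --context functional-936043 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver:1.8
  $ kubectl --context functional-936043 expose deployment hello-node-connect \
      --type=NodePort --port=8080
  $ URL=$(out/minikube-linux-amd64 -p functional-936043 service hello-node-connect --url)
  $ curl -s "$URL"   # prints the Hostname/Request blocks shown above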

TestFunctional/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 addons list
2024/08/29 19:17:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (35.13s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7cdd03ad-4b0c-4779-a221-c6b257fdfdc8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004129515s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-936043 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-936043 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-936043 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-936043 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-936043 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [649a7712-b831-428d-80df-f7eb57320057] Pending
helpers_test.go:344: "sp-pod" [649a7712-b831-428d-80df-f7eb57320057] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [649a7712-b831-428d-80df-f7eb57320057] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.005103106s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-936043 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-936043 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-936043 delete -f testdata/storage-provisioner/pod.yaml: (1.082927157s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-936043 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bfabcf9a-b180-49a4-bef4-d5d5662e2991] Pending
helpers_test.go:344: "sp-pod" [bfabcf9a-b180-49a4-bef4-d5d5662e2991] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bfabcf9a-b180-49a4-bef4-d5d5662e2991] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004299599s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-936043 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (35.13s)
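
The second sp-pod is the point of the test: /tmp/mount is backed by the PVC, so a file written by the first pod must survive that pod's deletion. Condensed, the claim being checked is:

  $ kubectl --context functional-936043 exec sp-pod -- touch /tmp/mount/foo
  $ kubectl --context functional-936043 delete -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-936043 apply -f testdata/storage-provisioner/pod.yaml   # fresh pod, same claim
  $ kubectl --context functional-936043 exec sp-pod -- ls /tmp/mount                     # foo is still there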

TestFunctional/parallel/SSHCmd (0.42s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

TestFunctional/parallel/CpCmd (1.37s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh -n functional-936043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 cp functional-936043:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2504795259/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh -n functional-936043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh -n functional-936043 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.37s)
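
minikube cp copies in both directions: a bare target path lands inside the VM, while the <profile>:<path> form names the node explicitly, and the third case above shows that missing parent directories are created on the guest. The three cases condensed (the host-side destination here is illustrative):

  $ out/minikube-linux-amd64 -p functional-936043 cp testdata/cp-test.txt /home/docker/cp-test.txt             # host -> VM
  $ out/minikube-linux-amd64 -p functional-936043 cp functional-936043:/home/docker/cp-test.txt ./cp-test.txt  # VM -> host
  $ out/minikube-linux-amd64 -p functional-936043 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt      # parents created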

TestFunctional/parallel/MySQL (24.52s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-936043 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-fp26b" [eecb3a3d-86ba-4b63-ad54-a48bce12cde7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-fp26b" [eecb3a3d-86ba-4b63-ad54-a48bce12cde7] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.603208049s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-936043 exec mysql-6cdb49bbb-fp26b -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-936043 exec mysql-6cdb49bbb-fp26b -- mysql -ppassword -e "show databases;": exit status 1 (204.651965ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-936043 exec mysql-6cdb49bbb-fp26b -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.52s)
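
The first exec failing with ERROR 2002 only means mysqld was still initializing inside the container; the socket was not up yet, so the test retries and the second attempt passes. A sketch of that wait loop (the pod name changes per run):

  $ until kubectl --context functional-936043 exec mysql-6cdb49bbb-fp26b -- \
      mysql -ppassword -e "show databases;"; do sleep 2; done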

TestFunctional/parallel/FileSync (0.24s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/18361/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "sudo cat /etc/test/nested/copy/18361/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.45s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/18361.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "sudo cat /etc/ssl/certs/18361.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/18361.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "sudo cat /usr/share/ca-certificates/18361.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/183612.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "sudo cat /etc/ssl/certs/183612.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/183612.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "sudo cat /usr/share/ca-certificates/183612.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.45s)
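
The 51391683.0 and 3ec20f2e.0 entries checked last are OpenSSL subject-hash names for the synced certs, so their presence confirms the CA store was actually rebuilt rather than files merely copied. Assuming openssl is available inside the VM, the expected name can be recomputed (a sketch):

  $ out/minikube-linux-amd64 -p functional-936043 ssh \
      "openssl x509 -noout -subject_hash -in /etc/ssl/certs/18361.pem"   # should match the hash-named entry above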

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-936043 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936043 ssh "sudo systemctl is-active docker": exit status 1 (269.758428ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936043 ssh "sudo systemctl is-active containerd": exit status 1 (230.716417ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
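
The ssh exit status 3 in both cases is systemctl's, not a transport failure: systemctl is-active exits 0 only when the unit is active, so "inactive" for docker and containerd is exactly the expected result on a crio cluster. For contrast (a sketch):

  $ out/minikube-linux-amd64 -p functional-936043 ssh "sudo systemctl is-active crio"   # active, exit 0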

TestFunctional/parallel/License (0.22s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-936043 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-936043 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-zkx49" [6adc3760-5b6b-468b-9cf4-fef3ed6dcab0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-zkx49" [6adc3760-5b6b-468b-9cf4-fef3ed6dcab0] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.005188806s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "269.30232ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "48.54246ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "237.810563ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.578876ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

TestFunctional/parallel/MountCmd/any-port (21.02s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-936043 /tmp/TestFunctionalparallelMountCmdany-port3159099742/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724958998867523962" to /tmp/TestFunctionalparallelMountCmdany-port3159099742/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724958998867523962" to /tmp/TestFunctionalparallelMountCmdany-port3159099742/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724958998867523962" to /tmp/TestFunctionalparallelMountCmdany-port3159099742/001/test-1724958998867523962
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936043 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (233.574095ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 29 19:16 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 29 19:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 29 19:16 test-1724958998867523962
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh cat /mount-9p/test-1724958998867523962
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-936043 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d1e57541-a426-4e24-8fda-214115783892] Pending
helpers_test.go:344: "busybox-mount" [d1e57541-a426-4e24-8fda-214115783892] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d1e57541-a426-4e24-8fda-214115783892] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d1e57541-a426-4e24-8fda-214115783892] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.192298254s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-936043 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-936043 /tmp/TestFunctionalparallelMountCmdany-port3159099742/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (21.02s)
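
The first failed findmnt is just a race with the 9p server coming up, which is why the test immediately retries and then succeeds. The lifecycle being exercised, by hand (the host directory here is illustrative):

  $ out/minikube-linux-amd64 mount -p functional-936043 /tmp/demo:/mount-9p --alsologtostderr -v=1 &
  $ out/minikube-linux-amd64 -p functional-936043 ssh "findmnt -T /mount-9p | grep 9p"   # retry once if it races the mount
  $ out/minikube-linux-amd64 -p functional-936043 ssh "ls -la /mount-9p"
  $ out/minikube-linux-amd64 mount -p functional-936043 --kill=true                      # tear down, as VerifyCleanup does below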

TestFunctional/parallel/ServiceCmd/List (0.85s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.85s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.83s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 service list -o json
functional_test.go:1494: Took "829.445288ms" to run "out/minikube-linux-amd64 -p functional-936043 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.83s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.111:30253
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.111:30253
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)
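
List, JSONOutput, HTTPS, Format and URL all resolve the same hello-node NodePort service; --format takes a Go template, so the {{.IP}} call above prints only the node address. Fetching through the returned URL (a sketch; curl is not part of the test itself):

  $ URL=$(out/minikube-linux-amd64 -p functional-936043 service hello-node --url)
  $ curl -s "$URL" | head -n 1   # Hostname: hello-node-...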

TestFunctional/parallel/MountCmd/specific-port (1.62s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-936043 /tmp/TestFunctionalparallelMountCmdspecific-port1231705637/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936043 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (239.218726ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-936043 /tmp/TestFunctionalparallelMountCmdspecific-port1231705637/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936043 ssh "sudo umount -f /mount-9p": exit status 1 (224.693539ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-936043 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-936043 /tmp/TestFunctionalparallelMountCmdspecific-port1231705637/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.62s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.56s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-936043 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2779488043/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-936043 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2779488043/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-936043 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2779488043/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936043 ssh "findmnt -T" /mount1: exit status 1 (214.2387ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936043 ssh "findmnt -T" /mount1: exit status 1 (185.832136ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-936043 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-936043 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2779488043/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-936043 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2779488043/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-936043 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2779488043/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.56s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-936043 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-936043
localhost/kicbase/echo-server:functional-936043
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-936043 image ls --format short --alsologtostderr:
I0829 19:17:14.611972   29559 out.go:345] Setting OutFile to fd 1 ...
I0829 19:17:14.612208   29559 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:14.612216   29559 out.go:358] Setting ErrFile to fd 2...
I0829 19:17:14.612223   29559 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:14.612515   29559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
I0829 19:17:14.613074   29559 config.go:182] Loaded profile config "functional-936043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 19:17:14.613165   29559 config.go:182] Loaded profile config "functional-936043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 19:17:14.613541   29559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 19:17:14.613592   29559 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 19:17:14.627363   29559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45117
I0829 19:17:14.627816   29559 main.go:141] libmachine: () Calling .GetVersion
I0829 19:17:14.628391   29559 main.go:141] libmachine: Using API Version  1
I0829 19:17:14.628414   29559 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 19:17:14.628813   29559 main.go:141] libmachine: () Calling .GetMachineName
I0829 19:17:14.628996   29559 main.go:141] libmachine: (functional-936043) Calling .GetState
I0829 19:17:14.631240   29559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 19:17:14.631282   29559 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 19:17:14.646902   29559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37233
I0829 19:17:14.647284   29559 main.go:141] libmachine: () Calling .GetVersion
I0829 19:17:14.647705   29559 main.go:141] libmachine: Using API Version  1
I0829 19:17:14.647737   29559 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 19:17:14.648135   29559 main.go:141] libmachine: () Calling .GetMachineName
I0829 19:17:14.648321   29559 main.go:141] libmachine: (functional-936043) Calling .DriverName
I0829 19:17:14.648483   29559 ssh_runner.go:195] Run: systemctl --version
I0829 19:17:14.648514   29559 main.go:141] libmachine: (functional-936043) Calling .GetSSHHostname
I0829 19:17:14.651637   29559 main.go:141] libmachine: (functional-936043) DBG | domain functional-936043 has defined MAC address 52:54:00:a4:2f:2a in network mk-functional-936043
I0829 19:17:14.652161   29559 main.go:141] libmachine: (functional-936043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:2f:2a", ip: ""} in network mk-functional-936043: {Iface:virbr1 ExpiryTime:2024-08-29 20:13:56 +0000 UTC Type:0 Mac:52:54:00:a4:2f:2a Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:functional-936043 Clientid:01:52:54:00:a4:2f:2a}
I0829 19:17:14.652247   29559 main.go:141] libmachine: (functional-936043) DBG | domain functional-936043 has defined IP address 192.168.39.111 and MAC address 52:54:00:a4:2f:2a in network mk-functional-936043
I0829 19:17:14.652470   29559 main.go:141] libmachine: (functional-936043) Calling .GetSSHPort
I0829 19:17:14.652712   29559 main.go:141] libmachine: (functional-936043) Calling .GetSSHKeyPath
I0829 19:17:14.652844   29559 main.go:141] libmachine: (functional-936043) Calling .GetSSHUsername
I0829 19:17:14.652937   29559 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/functional-936043/id_rsa Username:docker}
I0829 19:17:14.739144   29559 ssh_runner.go:195] Run: sudo crictl images --output json
I0829 19:17:14.825703   29559 main.go:141] libmachine: Making call to close driver server
I0829 19:17:14.825717   29559 main.go:141] libmachine: (functional-936043) Calling .Close
I0829 19:17:14.826041   29559 main.go:141] libmachine: Successfully made call to close driver server
I0829 19:17:14.826060   29559 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 19:17:14.826068   29559 main.go:141] libmachine: Making call to close driver server
I0829 19:17:14.826074   29559 main.go:141] libmachine: (functional-936043) Calling .Close
I0829 19:17:14.826639   29559 main.go:141] libmachine: (functional-936043) DBG | Closing plugin on server side
I0829 19:17:14.826674   29559 main.go:141] libmachine: Successfully made call to close driver server
I0829 19:17:14.826687   29559 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-936043 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-936043  | 98e828f1d2e97 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/kicbase/echo-server           | functional-936043  | 9056ab77afb8e | 4.94MB |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-936043 image ls --format table --alsologtostderr:
I0829 19:17:14.850394   29627 out.go:345] Setting OutFile to fd 1 ...
I0829 19:17:14.850649   29627 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:14.850662   29627 out.go:358] Setting ErrFile to fd 2...
I0829 19:17:14.850667   29627 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:14.850859   29627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
I0829 19:17:14.851398   29627 config.go:182] Loaded profile config "functional-936043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 19:17:14.851490   29627 config.go:182] Loaded profile config "functional-936043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 19:17:14.851842   29627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 19:17:14.851892   29627 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 19:17:14.867510   29627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
I0829 19:17:14.867931   29627 main.go:141] libmachine: () Calling .GetVersion
I0829 19:17:14.868495   29627 main.go:141] libmachine: Using API Version  1
I0829 19:17:14.868519   29627 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 19:17:14.868847   29627 main.go:141] libmachine: () Calling .GetMachineName
I0829 19:17:14.869059   29627 main.go:141] libmachine: (functional-936043) Calling .GetState
I0829 19:17:14.871092   29627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 19:17:14.871138   29627 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 19:17:14.886356   29627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34219
I0829 19:17:14.886769   29627 main.go:141] libmachine: () Calling .GetVersion
I0829 19:17:14.887242   29627 main.go:141] libmachine: Using API Version  1
I0829 19:17:14.887269   29627 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 19:17:14.887602   29627 main.go:141] libmachine: () Calling .GetMachineName
I0829 19:17:14.887754   29627 main.go:141] libmachine: (functional-936043) Calling .DriverName
I0829 19:17:14.887908   29627 ssh_runner.go:195] Run: systemctl --version
I0829 19:17:14.887925   29627 main.go:141] libmachine: (functional-936043) Calling .GetSSHHostname
I0829 19:17:14.890958   29627 main.go:141] libmachine: (functional-936043) DBG | domain functional-936043 has defined MAC address 52:54:00:a4:2f:2a in network mk-functional-936043
I0829 19:17:14.891304   29627 main.go:141] libmachine: (functional-936043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:2f:2a", ip: ""} in network mk-functional-936043: {Iface:virbr1 ExpiryTime:2024-08-29 20:13:56 +0000 UTC Type:0 Mac:52:54:00:a4:2f:2a Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:functional-936043 Clientid:01:52:54:00:a4:2f:2a}
I0829 19:17:14.891329   29627 main.go:141] libmachine: (functional-936043) DBG | domain functional-936043 has defined IP address 192.168.39.111 and MAC address 52:54:00:a4:2f:2a in network mk-functional-936043
I0829 19:17:14.891448   29627 main.go:141] libmachine: (functional-936043) Calling .GetSSHPort
I0829 19:17:14.891590   29627 main.go:141] libmachine: (functional-936043) Calling .GetSSHKeyPath
I0829 19:17:14.891731   29627 main.go:141] libmachine: (functional-936043) Calling .GetSSHUsername
I0829 19:17:14.891863   29627 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/functional-936043/id_rsa Username:docker}
I0829 19:17:14.968845   29627 ssh_runner.go:195] Run: sudo crictl images --output json
I0829 19:17:15.015761   29627 main.go:141] libmachine: Making call to close driver server
I0829 19:17:15.015776   29627 main.go:141] libmachine: (functional-936043) Calling .Close
I0829 19:17:15.016043   29627 main.go:141] libmachine: Successfully made call to close driver server
I0829 19:17:15.016071   29627 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 19:17:15.016080   29627 main.go:141] libmachine: Making call to close driver server
I0829 19:17:15.016088   29627 main.go:141] libmachine: (functional-936043) Calling .Close
I0829 19:17:15.016317   29627 main.go:141] libmachine: Successfully made call to close driver server
I0829 19:17:15.016335   29627 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 19:17:15.016409   29627 main.go:141] libmachine: (functional-936043) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-936043 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-936043"],"size":"4943877"},
{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},
{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},
{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},
{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},
{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},
{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},
{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},
{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},
{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"98e828f1d2e9745e9af18172b580c36d2ef24a0e1b96594ecfb2d9800b397339","repoDigests":["localhost/minikube-local-cache-test@sha256:42a913c8c62ed6d6da40ac6cd0403637780940864c9465177dd42ea5105fbd5f"],"repoTags":["localhost/minikube-local-cache-test:functional-936043"],"size":"3330"},
{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-936043 image ls --format json --alsologtostderr:
I0829 19:17:14.614176   29558 out.go:345] Setting OutFile to fd 1 ...
I0829 19:17:14.614264   29558 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:14.614272   29558 out.go:358] Setting ErrFile to fd 2...
I0829 19:17:14.614276   29558 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:14.614443   29558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
I0829 19:17:14.615004   29558 config.go:182] Loaded profile config "functional-936043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 19:17:14.615090   29558 config.go:182] Loaded profile config "functional-936043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 19:17:14.615426   29558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 19:17:14.615462   29558 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 19:17:14.629199   29558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
I0829 19:17:14.629633   29558 main.go:141] libmachine: () Calling .GetVersion
I0829 19:17:14.630224   29558 main.go:141] libmachine: Using API Version  1
I0829 19:17:14.630249   29558 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 19:17:14.630610   29558 main.go:141] libmachine: () Calling .GetMachineName
I0829 19:17:14.630925   29558 main.go:141] libmachine: (functional-936043) Calling .GetState
I0829 19:17:14.632613   29558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 19:17:14.632642   29558 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 19:17:14.645590   29558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
I0829 19:17:14.645911   29558 main.go:141] libmachine: () Calling .GetVersion
I0829 19:17:14.646430   29558 main.go:141] libmachine: Using API Version  1
I0829 19:17:14.646454   29558 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 19:17:14.646751   29558 main.go:141] libmachine: () Calling .GetMachineName
I0829 19:17:14.646891   29558 main.go:141] libmachine: (functional-936043) Calling .DriverName
I0829 19:17:14.647097   29558 ssh_runner.go:195] Run: systemctl --version
I0829 19:17:14.647121   29558 main.go:141] libmachine: (functional-936043) Calling .GetSSHHostname
I0829 19:17:14.650369   29558 main.go:141] libmachine: (functional-936043) DBG | domain functional-936043 has defined MAC address 52:54:00:a4:2f:2a in network mk-functional-936043
I0829 19:17:14.650757   29558 main.go:141] libmachine: (functional-936043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:2f:2a", ip: ""} in network mk-functional-936043: {Iface:virbr1 ExpiryTime:2024-08-29 20:13:56 +0000 UTC Type:0 Mac:52:54:00:a4:2f:2a Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:functional-936043 Clientid:01:52:54:00:a4:2f:2a}
I0829 19:17:14.650788   29558 main.go:141] libmachine: (functional-936043) DBG | domain functional-936043 has defined IP address 192.168.39.111 and MAC address 52:54:00:a4:2f:2a in network mk-functional-936043
I0829 19:17:14.651200   29558 main.go:141] libmachine: (functional-936043) Calling .GetSSHPort
I0829 19:17:14.651585   29558 main.go:141] libmachine: (functional-936043) Calling .GetSSHKeyPath
I0829 19:17:14.651746   29558 main.go:141] libmachine: (functional-936043) Calling .GetSSHUsername
I0829 19:17:14.651893   29558 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/functional-936043/id_rsa Username:docker}
I0829 19:17:14.734187   29558 ssh_runner.go:195] Run: sudo crictl images --output json
I0829 19:17:14.794018   29558 main.go:141] libmachine: Making call to close driver server
I0829 19:17:14.794031   29558 main.go:141] libmachine: (functional-936043) Calling .Close
I0829 19:17:14.794327   29558 main.go:141] libmachine: (functional-936043) DBG | Closing plugin on server side
I0829 19:17:14.794323   29558 main.go:141] libmachine: Successfully made call to close driver server
I0829 19:17:14.794372   29558 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 19:17:14.794390   29558 main.go:141] libmachine: Making call to close driver server
I0829 19:17:14.794403   29558 main.go:141] libmachine: (functional-936043) Calling .Close
I0829 19:17:14.794659   29558 main.go:141] libmachine: Successfully made call to close driver server
I0829 19:17:14.794664   29558 main.go:141] libmachine: (functional-936043) DBG | Closing plugin on server side
I0829 19:17:14.794677   29558 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
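
Note: the JSON printed by "image ls --format json" above is a flat array of image records with the four fields visible in this run (id, repoDigests, repoTags, size; size is a string of bytes). A minimal Go sketch for consuming that output follows; the binary path and profile name are simply the ones from this run, and the field set is assumed to be exactly what is shown above.

    // list_images.go - a sketch, assuming the output shape shown in this log.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // image mirrors the four fields visible in the JSON above; size is a
    // string of bytes in that output, so it is kept as a string here.
    type image struct {
    	ID          string   `json:"id"`
    	RepoDigests []string `json:"repoDigests"`
    	RepoTags    []string `json:"repoTags"`
    	Size        string   `json:"size"`
    }

    func main() {
    	// Binary path and profile name are the ones used throughout this run.
    	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-936043",
    		"image", "ls", "--format", "json").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	var images []image
    	if err := json.Unmarshal(out, &images); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for _, img := range images {
    		id := img.ID
    		if len(id) > 13 {
    			id = id[:13] // same truncated IDs as the table output above
    		}
    		for _, tag := range img.RepoTags {
    			fmt.Println(id, tag)
    		}
    	}
    }

Images with empty repoTags (the dashboard and metrics-scraper entries above) print nothing here, which matches their blank Tag cells in the table format.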

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-936043 image ls --format yaml --alsologtostderr:
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 98e828f1d2e9745e9af18172b580c36d2ef24a0e1b96594ecfb2d9800b397339
repoDigests:
- localhost/minikube-local-cache-test@sha256:42a913c8c62ed6d6da40ac6cd0403637780940864c9465177dd42ea5105fbd5f
repoTags:
- localhost/minikube-local-cache-test:functional-936043
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-936043
size: "4943877"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-936043 image ls --format yaml --alsologtostderr:
I0829 19:17:14.610423   29557 out.go:345] Setting OutFile to fd 1 ...
I0829 19:17:14.610525   29557 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:14.610549   29557 out.go:358] Setting ErrFile to fd 2...
I0829 19:17:14.610557   29557 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:14.610765   29557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
I0829 19:17:14.611502   29557 config.go:182] Loaded profile config "functional-936043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 19:17:14.611640   29557 config.go:182] Loaded profile config "functional-936043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 19:17:14.612214   29557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 19:17:14.612263   29557 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 19:17:14.627326   29557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34759
I0829 19:17:14.627813   29557 main.go:141] libmachine: () Calling .GetVersion
I0829 19:17:14.628469   29557 main.go:141] libmachine: Using API Version  1
I0829 19:17:14.628493   29557 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 19:17:14.628803   29557 main.go:141] libmachine: () Calling .GetMachineName
I0829 19:17:14.628996   29557 main.go:141] libmachine: (functional-936043) Calling .GetState
I0829 19:17:14.631105   29557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 19:17:14.631138   29557 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 19:17:14.644691   29557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37877
I0829 19:17:14.645126   29557 main.go:141] libmachine: () Calling .GetVersion
I0829 19:17:14.645584   29557 main.go:141] libmachine: Using API Version  1
I0829 19:17:14.645609   29557 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 19:17:14.646027   29557 main.go:141] libmachine: () Calling .GetMachineName
I0829 19:17:14.646252   29557 main.go:141] libmachine: (functional-936043) Calling .DriverName
I0829 19:17:14.646680   29557 ssh_runner.go:195] Run: systemctl --version
I0829 19:17:14.646731   29557 main.go:141] libmachine: (functional-936043) Calling .GetSSHHostname
I0829 19:17:14.650270   29557 main.go:141] libmachine: (functional-936043) DBG | domain functional-936043 has defined MAC address 52:54:00:a4:2f:2a in network mk-functional-936043
I0829 19:17:14.650708   29557 main.go:141] libmachine: (functional-936043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:2f:2a", ip: ""} in network mk-functional-936043: {Iface:virbr1 ExpiryTime:2024-08-29 20:13:56 +0000 UTC Type:0 Mac:52:54:00:a4:2f:2a Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:functional-936043 Clientid:01:52:54:00:a4:2f:2a}
I0829 19:17:14.650729   29557 main.go:141] libmachine: (functional-936043) DBG | domain functional-936043 has defined IP address 192.168.39.111 and MAC address 52:54:00:a4:2f:2a in network mk-functional-936043
I0829 19:17:14.650805   29557 main.go:141] libmachine: (functional-936043) Calling .GetSSHPort
I0829 19:17:14.651020   29557 main.go:141] libmachine: (functional-936043) Calling .GetSSHKeyPath
I0829 19:17:14.651320   29557 main.go:141] libmachine: (functional-936043) Calling .GetSSHUsername
I0829 19:17:14.651480   29557 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/functional-936043/id_rsa Username:docker}
I0829 19:17:14.733739   29557 ssh_runner.go:195] Run: sudo crictl images --output json
I0829 19:17:14.801640   29557 main.go:141] libmachine: Making call to close driver server
I0829 19:17:14.801654   29557 main.go:141] libmachine: (functional-936043) Calling .Close
I0829 19:17:14.801908   29557 main.go:141] libmachine: (functional-936043) DBG | Closing plugin on server side
I0829 19:17:14.801933   29557 main.go:141] libmachine: Successfully made call to close driver server
I0829 19:17:14.801946   29557 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 19:17:14.801961   29557 main.go:141] libmachine: Making call to close driver server
I0829 19:17:14.801974   29557 main.go:141] libmachine: (functional-936043) Calling .Close
I0829 19:17:14.802173   29557 main.go:141] libmachine: Successfully made call to close driver server
I0829 19:17:14.802187   29557 main.go:141] libmachine: (functional-936043) DBG | Closing plugin on server side
I0829 19:17:14.802196   29557 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-936043 ssh pgrep buildkitd: exit status 1 (195.083036ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image build -t localhost/my-image:functional-936043 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-936043 image build -t localhost/my-image:functional-936043 testdata/build --alsologtostderr: (1.818320894s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-936043 image build -t localhost/my-image:functional-936043 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1b0e9d565f8
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-936043
--> 77805bac547
Successfully tagged localhost/my-image:functional-936043
77805bac547a331c0de16c83cbbe9fd1682f2fe1163e1d224f4694026337c149
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-936043 image build -t localhost/my-image:functional-936043 testdata/build --alsologtostderr:
I0829 19:17:15.044088   29679 out.go:345] Setting OutFile to fd 1 ...
I0829 19:17:15.044442   29679 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:15.044464   29679 out.go:358] Setting ErrFile to fd 2...
I0829 19:17:15.044472   29679 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:15.044898   29679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
I0829 19:17:15.045917   29679 config.go:182] Loaded profile config "functional-936043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 19:17:15.046445   29679 config.go:182] Loaded profile config "functional-936043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 19:17:15.046866   29679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 19:17:15.046934   29679 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 19:17:15.061466   29679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34435
I0829 19:17:15.061913   29679 main.go:141] libmachine: () Calling .GetVersion
I0829 19:17:15.062410   29679 main.go:141] libmachine: Using API Version  1
I0829 19:17:15.062433   29679 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 19:17:15.062818   29679 main.go:141] libmachine: () Calling .GetMachineName
I0829 19:17:15.063007   29679 main.go:141] libmachine: (functional-936043) Calling .GetState
I0829 19:17:15.064691   29679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 19:17:15.064728   29679 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 19:17:15.079569   29679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35513
I0829 19:17:15.079935   29679 main.go:141] libmachine: () Calling .GetVersion
I0829 19:17:15.080374   29679 main.go:141] libmachine: Using API Version  1
I0829 19:17:15.080393   29679 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 19:17:15.080676   29679 main.go:141] libmachine: () Calling .GetMachineName
I0829 19:17:15.080867   29679 main.go:141] libmachine: (functional-936043) Calling .DriverName
I0829 19:17:15.081052   29679 ssh_runner.go:195] Run: systemctl --version
I0829 19:17:15.081076   29679 main.go:141] libmachine: (functional-936043) Calling .GetSSHHostname
I0829 19:17:15.083487   29679 main.go:141] libmachine: (functional-936043) DBG | domain functional-936043 has defined MAC address 52:54:00:a4:2f:2a in network mk-functional-936043
I0829 19:17:15.083798   29679 main.go:141] libmachine: (functional-936043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:2f:2a", ip: ""} in network mk-functional-936043: {Iface:virbr1 ExpiryTime:2024-08-29 20:13:56 +0000 UTC Type:0 Mac:52:54:00:a4:2f:2a Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:functional-936043 Clientid:01:52:54:00:a4:2f:2a}
I0829 19:17:15.083823   29679 main.go:141] libmachine: (functional-936043) DBG | domain functional-936043 has defined IP address 192.168.39.111 and MAC address 52:54:00:a4:2f:2a in network mk-functional-936043
I0829 19:17:15.083980   29679 main.go:141] libmachine: (functional-936043) Calling .GetSSHPort
I0829 19:17:15.084205   29679 main.go:141] libmachine: (functional-936043) Calling .GetSSHKeyPath
I0829 19:17:15.084367   29679 main.go:141] libmachine: (functional-936043) Calling .GetSSHUsername
I0829 19:17:15.084498   29679 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/functional-936043/id_rsa Username:docker}
I0829 19:17:15.161098   29679 build_images.go:161] Building image from path: /tmp/build.1327566774.tar
I0829 19:17:15.161148   29679 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0829 19:17:15.172577   29679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1327566774.tar
I0829 19:17:15.177343   29679 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1327566774.tar: stat -c "%s %y" /var/lib/minikube/build/build.1327566774.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1327566774.tar': No such file or directory
I0829 19:17:15.177372   29679 ssh_runner.go:362] scp /tmp/build.1327566774.tar --> /var/lib/minikube/build/build.1327566774.tar (3072 bytes)
I0829 19:17:15.201788   29679 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1327566774
I0829 19:17:15.211684   29679 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1327566774 -xf /var/lib/minikube/build/build.1327566774.tar
I0829 19:17:15.222205   29679 crio.go:315] Building image: /var/lib/minikube/build/build.1327566774
I0829 19:17:15.222263   29679 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-936043 /var/lib/minikube/build/build.1327566774 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0829 19:17:16.793890   29679 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-936043 /var/lib/minikube/build/build.1327566774 --cgroup-manager=cgroupfs: (1.571601122s)
I0829 19:17:16.793957   29679 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1327566774
I0829 19:17:16.804893   29679 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1327566774.tar
I0829 19:17:16.815216   29679 build_images.go:217] Built localhost/my-image:functional-936043 from /tmp/build.1327566774.tar
I0829 19:17:16.815246   29679 build_images.go:133] succeeded building to: functional-936043
I0829 19:17:16.815250   29679 build_images.go:134] failed building to: 
I0829 19:17:16.815272   29679 main.go:141] libmachine: Making call to close driver server
I0829 19:17:16.815286   29679 main.go:141] libmachine: (functional-936043) Calling .Close
I0829 19:17:16.815597   29679 main.go:141] libmachine: (functional-936043) DBG | Closing plugin on server side
I0829 19:17:16.815631   29679 main.go:141] libmachine: Successfully made call to close driver server
I0829 19:17:16.815648   29679 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 19:17:16.815665   29679 main.go:141] libmachine: Making call to close driver server
I0829 19:17:16.815677   29679 main.go:141] libmachine: (functional-936043) Calling .Close
I0829 19:17:16.815892   29679 main.go:141] libmachine: Successfully made call to close driver server
I0829 19:17:16.815907   29679 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 19:17:16.815918   29679 main.go:141] libmachine: (functional-936043) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.22s)
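
Note: the stderr above records the whole remote build flow: buildkitd is probed first (pgrep fails, since this is the crio runtime), then the packed context is copied up to /var/lib/minikube/build, unpacked, built with podman, and the staging files removed. A condensed Go sketch of that sequence follows; it shells out locally where minikube uses its ssh_runner, and build.example is a hypothetical staging name (real runs use a random suffix such as build.1327566774, as in the log).

    // build_flow.go - a local sketch of the staging/build/cleanup steps above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run is a local stand-in for minikube's ssh_runner: it executes one
    // command and echoes its combined output, as the log above does.
    func run(args []string) error {
    	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	// Assumes the context tarball has already been copied to tarball,
    	// which is the scp step visible in the stderr above.
    	const dir = "/var/lib/minikube/build/build.example"
    	const tarball = dir + ".tar"
    	steps := [][]string{
    		{"sudo", "mkdir", "-p", dir},
    		{"sudo", "tar", "-C", dir, "-xf", tarball},
    		{"sudo", "podman", "build", "-t", "localhost/my-image:functional-936043",
    			dir, "--cgroup-manager=cgroupfs"},
    		{"sudo", "rm", "-rf", dir},
    		{"sudo", "rm", "-f", tarball},
    	}
    	for _, step := range steps {
    		if err := run(step); err != nil {
    			fmt.Println("step failed:", step, err)
    			return
    		}
    	}
    }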

TestFunctional/parallel/ImageCommands/Setup (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.083736675s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-936043
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.11s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.52s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image load --daemon kicbase/echo-server:functional-936043 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-936043 image load --daemon kicbase/echo-server:functional-936043 --alsologtostderr: (1.481375007s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.83s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image load --daemon kicbase/echo-server:functional-936043 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-936043 image load --daemon kicbase/echo-server:functional-936043 --alsologtostderr: (3.481024653s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.74s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-936043
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image load --daemon kicbase/echo-server:functional-936043 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image save kicbase/echo-server:functional-936043 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image rm kicbase/echo-server:functional-936043 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-936043
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-936043 image save --daemon kicbase/echo-server:functional-936043 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-936043
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
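
Note: the four image tests above form a save-to-file / remove / load-from-file / save-to-daemon round trip. A compressed Go sketch replaying the same subcommands follows; the binary, profile, image name, and tar path are all taken verbatim from these logs.

    // image_roundtrip.go - a sketch of the save/rm/load/save --daemon sequence.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// All values below appear verbatim in the test logs above.
    	bin := "out/minikube-linux-amd64"
    	img := "kicbase/echo-server:functional-936043"
    	tarball := "/home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar"
    	cmds := [][]string{
    		{bin, "-p", "functional-936043", "image", "save", img, tarball},    // ImageSaveToFile
    		{bin, "-p", "functional-936043", "image", "rm", img},               // ImageRemove
    		{bin, "-p", "functional-936043", "image", "load", tarball},         // ImageLoadFromFile
    		{bin, "-p", "functional-936043", "image", "save", "--daemon", img}, // ImageSaveDaemon
    	}
    	for _, c := range cmds {
    		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
    			fmt.Printf("%v failed: %v\n%s", c, err, out)
    			return
    		}
    	}
    }

The final step explains the docker image inspect of localhost/kicbase/echo-server:functional-936043 above: save --daemon pushes the image back into the host docker daemon, where the test verifies it landed.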

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-936043
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-936043
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-936043
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (206.78s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-505269 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0829 19:18:45.975660   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:13.677399   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-505269 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m26.129596479s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (206.78s)

TestMultiControlPlane/serial/DeployApp (4.45s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-505269 -- rollout status deployment/busybox: (2.337225509s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- exec busybox-7dff88458-2fh45 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- exec busybox-7dff88458-hcgzg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- exec busybox-7dff88458-psss7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- exec busybox-7dff88458-2fh45 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- exec busybox-7dff88458-hcgzg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- exec busybox-7dff88458-psss7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- exec busybox-7dff88458-2fh45 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- exec busybox-7dff88458-hcgzg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- exec busybox-7dff88458-psss7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.45s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.16s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- exec busybox-7dff88458-2fh45 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- exec busybox-7dff88458-2fh45 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- exec busybox-7dff88458-hcgzg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- exec busybox-7dff88458-hcgzg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- exec busybox-7dff88458-psss7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-505269 -- exec busybox-7dff88458-psss7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.16s)
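Note: the pipeline "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3" above keeps line 5 of nslookup's output and takes its third space-separated field, the resolved host IP that the subsequent "ping -c 1" targets. A minimal Go sketch of the same extraction, assuming the canned BusyBox-style nslookup output below (the sample text is illustrative, not captured from this run):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Canned nslookup output (assumption for illustration); line 5
		// carries the resolved address as its third space-separated field.
		out := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.39.1 host.minikube.internal"

		lines := strings.Split(out, "\n")
		fields := strings.Split(lines[4], " ") // awk 'NR==5' -> index 4
		fmt.Println(fields[2])                 // cut -d' ' -f3 -> index 2: 192.168.39.1
	}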

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.46s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-505269 -v=7 --alsologtostderr
E0829 19:21:37.942873   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:37.949252   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:37.960716   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:37.982096   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:38.023460   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:38.104860   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:38.266611   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:38.588349   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:39.230415   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:40.512390   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:43.074662   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:48.196139   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-505269 -v=7 --alsologtostderr: (55.633301726s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.46s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-505269 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.47s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp testdata/cp-test.txt ha-505269:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269 "sudo cat /home/docker/cp-test.txt"
E0829 19:21:58.438368   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3454359662/001/cp-test_ha-505269.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269:/home/docker/cp-test.txt ha-505269-m02:/home/docker/cp-test_ha-505269_ha-505269-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m02 "sudo cat /home/docker/cp-test_ha-505269_ha-505269-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269:/home/docker/cp-test.txt ha-505269-m03:/home/docker/cp-test_ha-505269_ha-505269-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m03 "sudo cat /home/docker/cp-test_ha-505269_ha-505269-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269:/home/docker/cp-test.txt ha-505269-m04:/home/docker/cp-test_ha-505269_ha-505269-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m04 "sudo cat /home/docker/cp-test_ha-505269_ha-505269-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp testdata/cp-test.txt ha-505269-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3454359662/001/cp-test_ha-505269-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269-m02:/home/docker/cp-test.txt ha-505269:/home/docker/cp-test_ha-505269-m02_ha-505269.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269 "sudo cat /home/docker/cp-test_ha-505269-m02_ha-505269.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269-m02:/home/docker/cp-test.txt ha-505269-m03:/home/docker/cp-test_ha-505269-m02_ha-505269-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m03 "sudo cat /home/docker/cp-test_ha-505269-m02_ha-505269-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269-m02:/home/docker/cp-test.txt ha-505269-m04:/home/docker/cp-test_ha-505269-m02_ha-505269-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m04 "sudo cat /home/docker/cp-test_ha-505269-m02_ha-505269-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp testdata/cp-test.txt ha-505269-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3454359662/001/cp-test_ha-505269-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269-m03:/home/docker/cp-test.txt ha-505269:/home/docker/cp-test_ha-505269-m03_ha-505269.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269 "sudo cat /home/docker/cp-test_ha-505269-m03_ha-505269.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269-m03:/home/docker/cp-test.txt ha-505269-m02:/home/docker/cp-test_ha-505269-m03_ha-505269-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m02 "sudo cat /home/docker/cp-test_ha-505269-m03_ha-505269-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269-m03:/home/docker/cp-test.txt ha-505269-m04:/home/docker/cp-test_ha-505269-m03_ha-505269-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m04 "sudo cat /home/docker/cp-test_ha-505269-m03_ha-505269-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp testdata/cp-test.txt ha-505269-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3454359662/001/cp-test_ha-505269-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt ha-505269:/home/docker/cp-test_ha-505269-m04_ha-505269.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269 "sudo cat /home/docker/cp-test_ha-505269-m04_ha-505269.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt ha-505269-m02:/home/docker/cp-test_ha-505269-m04_ha-505269-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m02 "sudo cat /home/docker/cp-test_ha-505269-m04_ha-505269-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 cp ha-505269-m04:/home/docker/cp-test.txt ha-505269-m03:/home/docker/cp-test_ha-505269-m04_ha-505269-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 ssh -n ha-505269-m03 "sudo cat /home/docker/cp-test_ha-505269-m04_ha-505269-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.47s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.476721777s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.51s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-505269 node delete m03 -v=7 --alsologtostderr: (15.791902144s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.51s)
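Note: the go-template passed to "kubectl get nodes -o go-template=..." above walks each node's status conditions and prints the status of the condition whose type is "Ready", so a healthy cluster yields one " True" line per remaining node. kubectl evaluates the template over the nodes' JSON, where keys are lower-case (.items, .status.conditions); the sketch below is a hedged stand-in, not the test's own code: it runs the same logic with text/template over hand-rolled structs, so field names become their exported Go equivalents and the data is made up.

	package main

	import (
		"os"
		"text/template"
	)

	// Minimal stand-ins for the fields the template touches.
	type cond struct{ Type, Status string }
	type nodeStatus struct{ Conditions []cond }
	type node struct{ Status nodeStatus }
	type nodeList struct{ Items []node }

	func main() {
		// Same shape as the kubectl template, with exported field names.
		const tpl = `{{range .Items}}{{range .Status.Conditions}}` +
			`{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

		list := nodeList{Items: []node{
			{Status: nodeStatus{Conditions: []cond{{Type: "Ready", Status: "True"}}}},
			{Status: nodeStatus{Conditions: []cond{{Type: "Ready", Status: "True"}}}},
		}}
		t := template.Must(template.New("ready").Parse(tpl))
		_ = t.Execute(os.Stdout, list) // prints " True" once per Ready node
	}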

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (463.37s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-505269 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0829 19:36:37.943460   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:38:01.008586   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:38:45.975311   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:41:37.943382   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-505269 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m42.503423458s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (463.37s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.53s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-505269 --control-plane -v=7 --alsologtostderr
E0829 19:43:45.974799   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-505269 --control-plane -v=7 --alsologtostderr: (1m18.715029465s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-505269 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.53s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
TestJSONOutput/start/Command (83.87s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-311435 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-311435 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.872139849s)
--- PASS: TestJSONOutput/start/Command (83.87s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-311435 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-311435 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-311435 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-311435 --output=json --user=testUser: (7.353018426s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-313211 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-313211 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.98909ms)

-- stdout --
	{"specversion":"1.0","id":"53b8fa9e-2284-44ea-a557-6b54a3ce10b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-313211] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1de25a1-ea82-4312-ac36-cdb043932a26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19530"}}
	{"specversion":"1.0","id":"58bdc213-b4d2-469b-856b-954a7249996a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"28a93b74-723a-4790-9a89-d60831a0b06c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig"}}
	{"specversion":"1.0","id":"735f0130-542e-4a67-b44d-3e5df8d4bffc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube"}}
	{"specversion":"1.0","id":"2d041dfd-262c-43a3-a120-c2eee14c5a60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a7163187-8201-4d6a-a85d-0a934cb0d6e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f004b22d-8075-4e98-b6b5-0ca8ff11d5f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-313211" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-313211
--- PASS: TestErrorJSONOutput (0.18s)
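Note: each line minikube emits under --output=json is a CloudEvents-style envelope like those in the stdout block above (specversion, id, source, type, and a data payload; error events add fields such as exitcode and advice). A minimal sketch, assuming only the fields visible in this report, that decodes such a stream from stdin and prints each event's type and message:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the envelope fields shown above; data holds the
	// type-specific payload, which is all strings in this report.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate interleaved non-JSON lines
			}
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}

One could pipe a command such as the start invocation above through this program to watch the step and error events as they stream; that usage is an illustration, not part of the test.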

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (89.14s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-518989 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-518989 --driver=kvm2  --container-runtime=crio: (43.576044989s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-525456 --driver=kvm2  --container-runtime=crio
E0829 19:46:37.943347   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:46:49.041280   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-525456 --driver=kvm2  --container-runtime=crio: (42.762210767s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-518989
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-525456
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-525456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-525456
helpers_test.go:175: Cleaning up "first-518989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-518989
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-518989: (1.001157478s)
--- PASS: TestMinikubeProfile (89.14s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (23.94s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-620528 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-620528 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.939010516s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.94s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-620528 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-620528 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.63s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-638331 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-638331 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.626635069s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.63s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638331 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638331 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.65s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-620528 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638331 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638331 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-638331
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-638331: (1.276195027s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.91s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-638331
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-638331: (20.912501995s)
--- PASS: TestMountStart/serial/RestartStopped (21.91s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638331 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638331 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (114.72s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-197790 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0829 19:48:45.974944   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-197790 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.33132139s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.72s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.96s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-197790 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-197790 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-197790 -- rollout status deployment/busybox: (2.530309482s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-197790 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-197790 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-197790 -- exec busybox-7dff88458-v4fv8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-197790 -- exec busybox-7dff88458-zglxg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-197790 -- exec busybox-7dff88458-v4fv8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-197790 -- exec busybox-7dff88458-zglxg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-197790 -- exec busybox-7dff88458-v4fv8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-197790 -- exec busybox-7dff88458-zglxg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.96s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-197790 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-197790 -- exec busybox-7dff88458-v4fv8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-197790 -- exec busybox-7dff88458-v4fv8 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-197790 -- exec busybox-7dff88458-zglxg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-197790 -- exec busybox-7dff88458-zglxg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                    
TestMultiNode/serial/AddNode (48.95s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-197790 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-197790 -v 3 --alsologtostderr: (48.410076629s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.95s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-197790 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.87s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 cp testdata/cp-test.txt multinode-197790:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 cp multinode-197790:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1846508817/001/cp-test_multinode-197790.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 cp multinode-197790:/home/docker/cp-test.txt multinode-197790-m02:/home/docker/cp-test_multinode-197790_multinode-197790-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790-m02 "sudo cat /home/docker/cp-test_multinode-197790_multinode-197790-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 cp multinode-197790:/home/docker/cp-test.txt multinode-197790-m03:/home/docker/cp-test_multinode-197790_multinode-197790-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790-m03 "sudo cat /home/docker/cp-test_multinode-197790_multinode-197790-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 cp testdata/cp-test.txt multinode-197790-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 cp multinode-197790-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1846508817/001/cp-test_multinode-197790-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 cp multinode-197790-m02:/home/docker/cp-test.txt multinode-197790:/home/docker/cp-test_multinode-197790-m02_multinode-197790.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790 "sudo cat /home/docker/cp-test_multinode-197790-m02_multinode-197790.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 cp multinode-197790-m02:/home/docker/cp-test.txt multinode-197790-m03:/home/docker/cp-test_multinode-197790-m02_multinode-197790-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790-m03 "sudo cat /home/docker/cp-test_multinode-197790-m02_multinode-197790-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 cp testdata/cp-test.txt multinode-197790-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 cp multinode-197790-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1846508817/001/cp-test_multinode-197790-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 cp multinode-197790-m03:/home/docker/cp-test.txt multinode-197790:/home/docker/cp-test_multinode-197790-m03_multinode-197790.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790 "sudo cat /home/docker/cp-test_multinode-197790-m03_multinode-197790.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 cp multinode-197790-m03:/home/docker/cp-test.txt multinode-197790-m02:/home/docker/cp-test_multinode-197790-m03_multinode-197790-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 ssh -n multinode-197790-m02 "sudo cat /home/docker/cp-test_multinode-197790-m03_multinode-197790-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.87s)

                                                
                                    
TestMultiNode/serial/StopNode (2.34s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-197790 node stop m03: (1.530474527s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-197790 status: exit status 7 (405.194491ms)

-- stdout --
	multinode-197790
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-197790-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-197790-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-197790 status --alsologtostderr: exit status 7 (406.075496ms)

-- stdout --
	multinode-197790
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-197790-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-197790-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0829 19:51:12.758943   47862 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:51:12.759060   47862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:51:12.759071   47862 out.go:358] Setting ErrFile to fd 2...
	I0829 19:51:12.759077   47862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:51:12.759364   47862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 19:51:12.759584   47862 out.go:352] Setting JSON to false
	I0829 19:51:12.759618   47862 mustload.go:65] Loading cluster: multinode-197790
	I0829 19:51:12.759743   47862 notify.go:220] Checking for updates...
	I0829 19:51:12.760013   47862 config.go:182] Loaded profile config "multinode-197790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:51:12.760027   47862 status.go:255] checking status of multinode-197790 ...
	I0829 19:51:12.760383   47862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:51:12.760438   47862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:51:12.775476   47862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I0829 19:51:12.775907   47862 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:51:12.776536   47862 main.go:141] libmachine: Using API Version  1
	I0829 19:51:12.776564   47862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:51:12.776890   47862 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:51:12.777077   47862 main.go:141] libmachine: (multinode-197790) Calling .GetState
	I0829 19:51:12.778707   47862 status.go:330] multinode-197790 host status = "Running" (err=<nil>)
	I0829 19:51:12.778724   47862 host.go:66] Checking if "multinode-197790" exists ...
	I0829 19:51:12.779029   47862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:51:12.779068   47862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:51:12.793984   47862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35089
	I0829 19:51:12.794436   47862 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:51:12.794976   47862 main.go:141] libmachine: Using API Version  1
	I0829 19:51:12.795004   47862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:51:12.795301   47862 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:51:12.795468   47862 main.go:141] libmachine: (multinode-197790) Calling .GetIP
	I0829 19:51:12.798230   47862 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:51:12.798650   47862 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:51:12.798670   47862 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:51:12.798770   47862 host.go:66] Checking if "multinode-197790" exists ...
	I0829 19:51:12.799067   47862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:51:12.799105   47862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:51:12.814957   47862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38665
	I0829 19:51:12.815365   47862 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:51:12.815807   47862 main.go:141] libmachine: Using API Version  1
	I0829 19:51:12.815826   47862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:51:12.816081   47862 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:51:12.816264   47862 main.go:141] libmachine: (multinode-197790) Calling .DriverName
	I0829 19:51:12.816447   47862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:51:12.816466   47862 main.go:141] libmachine: (multinode-197790) Calling .GetSSHHostname
	I0829 19:51:12.819329   47862 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:51:12.819639   47862 main.go:141] libmachine: (multinode-197790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:87:d9", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:48:29 +0000 UTC Type:0 Mac:52:54:00:97:87:d9 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-197790 Clientid:01:52:54:00:97:87:d9}
	I0829 19:51:12.819666   47862 main.go:141] libmachine: (multinode-197790) DBG | domain multinode-197790 has defined IP address 192.168.39.245 and MAC address 52:54:00:97:87:d9 in network mk-multinode-197790
	I0829 19:51:12.819791   47862 main.go:141] libmachine: (multinode-197790) Calling .GetSSHPort
	I0829 19:51:12.819940   47862 main.go:141] libmachine: (multinode-197790) Calling .GetSSHKeyPath
	I0829 19:51:12.820073   47862 main.go:141] libmachine: (multinode-197790) Calling .GetSSHUsername
	I0829 19:51:12.820206   47862 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/multinode-197790/id_rsa Username:docker}
	I0829 19:51:12.901994   47862 ssh_runner.go:195] Run: systemctl --version
	I0829 19:51:12.907695   47862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:51:12.921990   47862 kubeconfig.go:125] found "multinode-197790" server: "https://192.168.39.245:8443"
	I0829 19:51:12.922023   47862 api_server.go:166] Checking apiserver status ...
	I0829 19:51:12.922054   47862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:51:12.936735   47862 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1036/cgroup
	W0829 19:51:12.946221   47862 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1036/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:51:12.946286   47862 ssh_runner.go:195] Run: ls
	I0829 19:51:12.950337   47862 api_server.go:253] Checking apiserver healthz at https://192.168.39.245:8443/healthz ...
	I0829 19:51:12.955161   47862 api_server.go:279] https://192.168.39.245:8443/healthz returned 200:
	ok
	I0829 19:51:12.955180   47862 status.go:422] multinode-197790 apiserver status = Running (err=<nil>)
	I0829 19:51:12.955190   47862 status.go:257] multinode-197790 status: &{Name:multinode-197790 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:51:12.955217   47862 status.go:255] checking status of multinode-197790-m02 ...
	I0829 19:51:12.955533   47862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:51:12.955569   47862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:51:12.970619   47862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44371
	I0829 19:51:12.970979   47862 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:51:12.971437   47862 main.go:141] libmachine: Using API Version  1
	I0829 19:51:12.971460   47862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:51:12.971794   47862 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:51:12.971968   47862 main.go:141] libmachine: (multinode-197790-m02) Calling .GetState
	I0829 19:51:12.973402   47862 status.go:330] multinode-197790-m02 host status = "Running" (err=<nil>)
	I0829 19:51:12.973419   47862 host.go:66] Checking if "multinode-197790-m02" exists ...
	I0829 19:51:12.973706   47862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:51:12.973741   47862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:51:12.988293   47862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I0829 19:51:12.988606   47862 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:51:12.989021   47862 main.go:141] libmachine: Using API Version  1
	I0829 19:51:12.989037   47862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:51:12.989309   47862 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:51:12.989507   47862 main.go:141] libmachine: (multinode-197790-m02) Calling .GetIP
	I0829 19:51:12.991964   47862 main.go:141] libmachine: (multinode-197790-m02) DBG | domain multinode-197790-m02 has defined MAC address 52:54:00:62:6b:64 in network mk-multinode-197790
	I0829 19:51:12.992332   47862 main.go:141] libmachine: (multinode-197790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:6b:64", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:49:34 +0000 UTC Type:0 Mac:52:54:00:62:6b:64 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-197790-m02 Clientid:01:52:54:00:62:6b:64}
	I0829 19:51:12.992362   47862 main.go:141] libmachine: (multinode-197790-m02) DBG | domain multinode-197790-m02 has defined IP address 192.168.39.247 and MAC address 52:54:00:62:6b:64 in network mk-multinode-197790
	I0829 19:51:12.992495   47862 host.go:66] Checking if "multinode-197790-m02" exists ...
	I0829 19:51:12.992786   47862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:51:12.992844   47862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:51:13.007905   47862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34241
	I0829 19:51:13.008322   47862 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:51:13.008747   47862 main.go:141] libmachine: Using API Version  1
	I0829 19:51:13.008772   47862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:51:13.009061   47862 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:51:13.009219   47862 main.go:141] libmachine: (multinode-197790-m02) Calling .DriverName
	I0829 19:51:13.009401   47862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:51:13.009422   47862 main.go:141] libmachine: (multinode-197790-m02) Calling .GetSSHHostname
	I0829 19:51:13.012083   47862 main.go:141] libmachine: (multinode-197790-m02) DBG | domain multinode-197790-m02 has defined MAC address 52:54:00:62:6b:64 in network mk-multinode-197790
	I0829 19:51:13.012492   47862 main.go:141] libmachine: (multinode-197790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:6b:64", ip: ""} in network mk-multinode-197790: {Iface:virbr1 ExpiryTime:2024-08-29 20:49:34 +0000 UTC Type:0 Mac:52:54:00:62:6b:64 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-197790-m02 Clientid:01:52:54:00:62:6b:64}
	I0829 19:51:13.012520   47862 main.go:141] libmachine: (multinode-197790-m02) DBG | domain multinode-197790-m02 has defined IP address 192.168.39.247 and MAC address 52:54:00:62:6b:64 in network mk-multinode-197790
	I0829 19:51:13.012647   47862 main.go:141] libmachine: (multinode-197790-m02) Calling .GetSSHPort
	I0829 19:51:13.012790   47862 main.go:141] libmachine: (multinode-197790-m02) Calling .GetSSHKeyPath
	I0829 19:51:13.012930   47862 main.go:141] libmachine: (multinode-197790-m02) Calling .GetSSHUsername
	I0829 19:51:13.013070   47862 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19530-11185/.minikube/machines/multinode-197790-m02/id_rsa Username:docker}
	I0829 19:51:13.089733   47862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:51:13.104291   47862 status.go:257] multinode-197790-m02 status: &{Name:multinode-197790-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:51:13.104323   47862 status.go:255] checking status of multinode-197790-m03 ...
	I0829 19:51:13.104660   47862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:51:13.104698   47862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:51:13.120596   47862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40409
	I0829 19:51:13.120987   47862 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:51:13.121387   47862 main.go:141] libmachine: Using API Version  1
	I0829 19:51:13.121408   47862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:51:13.121694   47862 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:51:13.121876   47862 main.go:141] libmachine: (multinode-197790-m03) Calling .GetState
	I0829 19:51:13.123433   47862 status.go:330] multinode-197790-m03 host status = "Stopped" (err=<nil>)
	I0829 19:51:13.123445   47862 status.go:343] host is not running, skipping remaining checks
	I0829 19:51:13.123451   47862 status.go:257] multinode-197790-m03 status: &{Name:multinode-197790-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)
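For reference, the per-node status check exercised above can be rerun by hand; a minimal sketch, assuming the built out/minikube-linux-amd64 binary and the multinode-197790 profile from this run:

    # Exit status 7 indicates at least one node is not Running
    # (here, the stopped worker m03).
    out/minikube-linux-amd64 -p multinode-197790 status --alsologtostderr
    echo "status exit code: $?"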

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 node start m03 -v=7 --alsologtostderr
E0829 19:51:37.943710   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-197790 node start m03 -v=7 --alsologtostderr: (37.928706924s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.54s)
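The restart flow above reduces to three commands; a sketch under the same assumptions (profile and node names taken from the log):

    # Bring the stopped worker back, then confirm that both minikube and
    # the Kubernetes API agree on node state.
    out/minikube-linux-amd64 -p multinode-197790 node start m03 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p multinode-197790 status -v=7 --alsologtostderr
    kubectl get nodes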

                                                
                                    
TestMultiNode/serial/DeleteNode (2.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-197790 node delete m03: (1.69321378s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.21s)
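The go-template query used above is worth unpacking: it walks every node's status.conditions and prints the Ready condition's status, one line per node. A standalone version (same template, quoting adjusted for an interactive shell):

    # A healthy cluster prints only "True" lines, one per node.
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'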

                                                
                                    
TestMultiNode/serial/RestartMultiNode (184.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-197790 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0829 20:01:37.943132   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-197790 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m4.369119312s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-197790 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (184.88s)
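Restarting the whole cluster is a plain start against the existing profile; a sketch using the flags from the run above (--wait=true asks start to verify the default component set, the apiserver and system pods, before returning):

    # Re-running start on an existing profile restarts all of its nodes.
    out/minikube-linux-amd64 start -p multinode-197790 --wait=true --driver=kvm2 --container-runtime=crio
    kubectl get nodes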

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-197790
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-197790-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-197790-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (58.213581ms)

                                                
                                                
-- stdout --
	* [multinode-197790-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-197790-m02' is duplicated with machine name 'multinode-197790-m02' in profile 'multinode-197790'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-197790-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-197790-m03 --driver=kvm2  --container-runtime=crio: (43.864313626s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-197790
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-197790: exit status 80 (207.257692ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-197790 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-197790-m03 already exists in multinode-197790-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-197790-m03
E0829 20:03:29.043044   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.11s)
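The two failures above illustrate the naming rules: a new profile may not reuse a machine name that an existing multinode profile already owns, and node add refuses to create a node whose generated name collides with an existing profile. A sketch of the paths taken in this run, names from the log:

    out/minikube-linux-amd64 node list -p multinode-197790
    out/minikube-linux-amd64 start -p multinode-197790-m02 --driver=kvm2 --container-runtime=crio   # exit 14: name owned by a node in multinode-197790
    out/minikube-linux-amd64 start -p multinode-197790-m03 --driver=kvm2 --container-runtime=crio   # succeeds: no live node owns this name
    out/minikube-linux-amd64 node add -p multinode-197790                                           # exit 80: generated node name now collides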

                                                
                                    
TestScheduledStopUnix (117.00s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-051586 --memory=2048 --driver=kvm2  --container-runtime=crio
E0829 20:08:45.975197   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-051586 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.471217014s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-051586 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-051586 -n scheduled-stop-051586
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-051586 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-051586 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-051586 -n scheduled-stop-051586
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-051586
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-051586 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-051586
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-051586: exit status 7 (64.020047ms)

                                                
                                                
-- stdout --
	scheduled-stop-051586
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-051586 -n scheduled-stop-051586
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-051586 -n scheduled-stop-051586: exit status 7 (64.530542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-051586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-051586
--- PASS: TestScheduledStopUnix (117.00s)
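The scheduled-stop lifecycle above, as a sketch (profile name from the run; --schedule arms a background stop after the given duration, --cancel-scheduled disarms a pending one):

    out/minikube-linux-amd64 stop -p scheduled-stop-051586 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-051586 --cancel-scheduled       # nothing stops
    out/minikube-linux-amd64 stop -p scheduled-stop-051586 --schedule 15s
    sleep 20
    out/minikube-linux-amd64 status -p scheduled-stop-051586 --format='{{.Host}}'   # "Stopped", exit 7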

                                                
                                    
TestRunningBinaryUpgrade (158.87s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3275856127 start -p running-upgrade-892203 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3275856127 start -p running-upgrade-892203 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m21.539177831s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-892203 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-892203 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.681462408s)
helpers_test.go:175: Cleaning up "running-upgrade-892203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-892203
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-892203: (1.146332833s)
--- PASS: TestRunningBinaryUpgrade (158.87s)
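The upgrade path being validated is: create a cluster with an old release, then point the new binary at the same profile. A sketch using the binaries from this run:

    # v1.26.0 creates the cluster; the freshly built binary upgrades it
    # in place without deleting the profile.
    /tmp/minikube-v1.26.0.3275856127 start -p running-upgrade-892203 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-892203 --memory=2200 --driver=kvm2 --container-runtime=crio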

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-468350 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-468350 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (82.522842ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-468350] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
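As the MK_USAGE error states, --no-kubernetes and --kubernetes-version are mutually exclusive; if a version is pinned in the global config, unset it first. A sketch:

    # Clear any globally pinned version, then start a VM without Kubernetes.
    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-468350 --no-kubernetes --driver=kvm2 --container-runtime=crio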

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (119.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-468350 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-468350 --driver=kvm2  --container-runtime=crio: (1m59.578673468s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-468350 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (119.83s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (127.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1548364908 start -p stopped-upgrade-453495 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0829 20:11:21.012233   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:11:37.943487   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1548364908 start -p stopped-upgrade-453495 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m18.506952267s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1548364908 -p stopped-upgrade-453495 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1548364908 -p stopped-upgrade-453495 stop: (1.429036689s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-453495 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-453495 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.472058118s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (127.41s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (39.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-468350 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-468350 --no-kubernetes --driver=kvm2  --container-runtime=crio: (38.183098731s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-468350 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-468350 status -o json: exit status 2 (222.382375ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-468350","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-468350
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-468350: (1.042156102s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.45s)
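Note the JSON status above: the host stays Running while Kubelet and APIServer read Stopped, and the command exits 2 rather than 0, so scripts can branch on it. A sketch:

    # Exit status 2 here means "host up, Kubernetes down", the expected
    # shape after --no-kubernetes on an existing profile.
    out/minikube-linux-amd64 -p NoKubernetes-468350 status -o json || echo "kubernetes stopped (exit $?)"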

                                                
                                    
TestNoKubernetes/serial/Start (38.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-468350 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-468350 --no-kubernetes --driver=kvm2  --container-runtime=crio: (38.747650045s)
--- PASS: TestNoKubernetes/serial/Start (38.75s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-453495
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-468350 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-468350 "sudo systemctl is-active --quiet service kubelet": exit status 1 (737.54487ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.74s)
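The verification above rides on systemd exit codes: systemctl is-active --quiet exits 0 for an active unit and non-zero (3 for inactive) otherwise, which ssh then propagates (the "Process exited with status 3" in stderr). A sketch, mirroring the test's command:

    # Succeeds (exit 0) only if the kubelet unit is active in the guest.
    out/minikube-linux-amd64 ssh -p NoKubernetes-468350 "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet is not running"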

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.36s)

                                                
                                    
TestPause/serial/Start (83.98s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-427304 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-427304 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m23.980938047s)
--- PASS: TestPause/serial/Start (83.98s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-468350
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-468350: (1.293893654s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (44.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-468350 --driver=kvm2  --container-runtime=crio
E0829 20:13:45.975613   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-468350 --driver=kvm2  --container-runtime=crio: (44.039750253s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.04s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-468350 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-468350 "sudo systemctl is-active --quiet service kubelet": exit status 1 (194.516484ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (72.01s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-427304 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-427304 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.989153833s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (72.01s)

                                                
                                    
TestPause/serial/Pause (0.78s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-427304 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

                                                
                                    
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-427304 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-427304 --output=json --layout=cluster: exit status 2 (247.62137ms)

                                                
                                                
-- stdout --
	{"Name":"pause-427304","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-427304","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
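The paused state is visible in the JSON above: StatusCode 418 ("Paused") at cluster level, the apiserver Paused, the kubelet Stopped, and a non-zero exit (2) while paused. The pause/unpause cycle as a sketch:

    out/minikube-linux-amd64 pause -p pause-427304 --alsologtostderr -v=5
    out/minikube-linux-amd64 status -p pause-427304 --output=json --layout=cluster   # exit 2 while paused
    out/minikube-linux-amd64 unpause -p pause-427304 --alsologtostderr -v=5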

                                                
                                    
TestPause/serial/Unpause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-427304 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

                                                
                                    
TestPause/serial/PauseAgain (0.92s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-427304 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.92s)

                                                
                                    
TestPause/serial/DeletePaused (1.40s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-427304 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-427304 --alsologtostderr -v=5: (1.39505422s)
--- PASS: TestPause/serial/DeletePaused (1.40s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (5.22s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (5.219914432s)
--- PASS: TestPause/serial/VerifyDeletedResources (5.22s)

                                                
                                    
TestNetworkPlugins/group/false (3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-801672 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-801672 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (107.360009ms)

                                                
                                                
-- stdout --
	* [false-801672] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 20:16:06.357373   61208 out.go:345] Setting OutFile to fd 1 ...
	I0829 20:16:06.357480   61208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:16:06.357491   61208 out.go:358] Setting ErrFile to fd 2...
	I0829 20:16:06.357497   61208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 20:16:06.357780   61208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-11185/.minikube/bin
	I0829 20:16:06.358443   61208 out.go:352] Setting JSON to false
	I0829 20:16:06.359677   61208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7113,"bootTime":1724955453,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 20:16:06.359757   61208 start.go:139] virtualization: kvm guest
	I0829 20:16:06.361928   61208 out.go:177] * [false-801672] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 20:16:06.363361   61208 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 20:16:06.363398   61208 notify.go:220] Checking for updates...
	I0829 20:16:06.366002   61208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 20:16:06.367364   61208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-11185/kubeconfig
	I0829 20:16:06.368742   61208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-11185/.minikube
	I0829 20:16:06.370168   61208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 20:16:06.371592   61208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 20:16:06.373573   61208 config.go:182] Loaded profile config "cert-expiration-621378": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:16:06.373736   61208 config.go:182] Loaded profile config "kubernetes-upgrade-714305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 20:16:06.373852   61208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 20:16:06.411659   61208 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 20:16:06.412913   61208 start.go:297] selected driver: kvm2
	I0829 20:16:06.412928   61208 start.go:901] validating driver "kvm2" against <nil>
	I0829 20:16:06.412941   61208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 20:16:06.414839   61208 out.go:201] 
	W0829 20:16:06.416108   61208 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0829 20:16:06.417251   61208 out.go:201] 

                                                
                                                
** /stderr **
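The rejection above is the point of the test: with --container-runtime=crio, minikube requires some CNI, so --cni=false is refused before any VM is created (exit 14), and the debug log below therefore runs against a profile that was never started. A sketch of the failing call plus one alternative value, the latter an assumption rather than something this run exercised:

    out/minikube-linux-amd64 start -p false-801672 --cni=false --driver=kvm2 --container-runtime=crio    # exit 14: crio requires CNI
    out/minikube-linux-amd64 start -p false-801672 --cni=bridge --driver=kvm2 --container-runtime=crio   # assumed-valid CNI choice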
net_test.go:88: 
----------------------- debugLogs start: false-801672 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-801672

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-801672

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-801672

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-801672

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-801672

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-801672

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-801672

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-801672

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-801672

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-801672

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-801672

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-801672" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-801672" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 20:14:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.51:8443
  name: cert-expiration-621378
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 20:16:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.140:8443
  name: kubernetes-upgrade-714305
contexts:
- context:
    cluster: cert-expiration-621378
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 20:14:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-621378
  name: cert-expiration-621378
- context:
    cluster: kubernetes-upgrade-714305
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 20:16:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: kubernetes-upgrade-714305
  name: kubernetes-upgrade-714305
current-context: kubernetes-upgrade-714305
kind: Config
preferences: {}
users:
- name: cert-expiration-621378
  user:
    client-certificate: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-expiration-621378/client.crt
    client-key: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-expiration-621378/client.key
- name: kubernetes-upgrade-714305
  user:
    client-certificate: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/client.crt
    client-key: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-801672

>>> host: docker daemon status:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: docker daemon config:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: /etc/docker/daemon.json:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: docker system info:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: cri-docker daemon status:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: cri-docker daemon config:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: cri-dockerd version:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: containerd daemon status:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: containerd daemon config:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: /etc/containerd/config.toml:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: containerd config dump:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: crio daemon status:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: crio daemon config:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: /etc/crio:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

>>> host: crio config:
* Profile "false-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-801672"

----------------------- debugLogs end: false-801672 [took: 2.752623333s] --------------------------------
helpers_test.go:175: Cleaning up "false-801672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-801672
--- PASS: TestNetworkPlugins/group/false (3.00s)

TestStartStop/group/no-preload/serial/FirstStart (133.56s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-397724 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-397724 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (2m13.559683899s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (133.56s)

TestStartStop/group/embed-certs/serial/FirstStart (141.22s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-388383 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0829 20:16:37.943194   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-388383 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (2m21.221118568s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (141.22s)

TestStartStop/group/no-preload/serial/DeployApp (8.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-397724 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [11d26be1-7021-4b87-9b36-f70b69e0dd43] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [11d26be1-7021-4b87-9b36-f70b69e0dd43] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004736954s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-397724 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.30s)

TestStartStop/group/newest-cni/serial/FirstStart (47.89s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-695305 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-695305 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (47.894329534s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.89s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-397724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-397724 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-388383 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bfe9fc37-9a64-407f-a902-5c1930185329] Pending
helpers_test.go:344: "busybox" [bfe9fc37-9a64-407f-a902-5c1930185329] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bfe9fc37-9a64-407f-a902-5c1930185329] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003596768s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-388383 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-388383 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-388383 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-695305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-695305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.194716514s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/newest-cni/serial/Stop (11.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-695305 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-695305 --alsologtostderr -v=3: (11.333506153s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.33s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-695305 -n newest-cni-695305
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-695305 -n newest-cni-695305: exit status 7 (63.721642ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-695305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (70.61s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-695305 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0829 20:20:09.045424   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-695305 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m10.350175378s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-695305 -n newest-cni-695305
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (70.61s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-695305 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.17s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-695305 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-695305 -n newest-cni-695305
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-695305 -n newest-cni-695305: exit status 2 (232.943317ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-695305 -n newest-cni-695305
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-695305 -n newest-cni-695305: exit status 2 (226.783526ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-695305 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-695305 -n newest-cni-695305
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-695305 -n newest-cni-695305
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.17s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-145096 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-145096 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (52.323226942s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.32s)

TestStartStop/group/no-preload/serial/SecondStart (679.09s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-397724 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-397724 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (11m18.843911064s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397724 -n no-preload-397724
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (679.09s)

TestStartStop/group/embed-certs/serial/SecondStart (569.98s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-388383 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-388383 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m29.735597474s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-388383 -n embed-certs-388383
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (569.98s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-145096 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [90790e9d-2eb6-41f7-b252-73e8a484bbe7] Pending
helpers_test.go:344: "busybox" [90790e9d-2eb6-41f7-b252-73e8a484bbe7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0829 20:21:37.943601   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [90790e9d-2eb6-41f7-b252-73e8a484bbe7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003802958s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-145096 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-145096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-145096 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/old-k8s-version/serial/Stop (4.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-032002 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-032002 --alsologtostderr -v=3: (4.284978365s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.29s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-032002 -n old-k8s-version-032002: exit status 7 (63.845354ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-032002 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (459.7s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-145096 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0829 20:26:37.942801   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:28:01.013971   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:28:45.975751   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/addons-344587/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-145096 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (7m39.423455448s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (459.70s)

TestNetworkPlugins/group/auto/Start (83.63s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-801672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-801672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m23.628057209s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.63s)

TestNetworkPlugins/group/kindnet/Start (85.99s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-801672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-801672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m25.993803989s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.99s)

TestNetworkPlugins/group/calico/Start (124.81s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-801672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0829 20:46:37.943590   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/functional-936043/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-801672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m4.813402908s)
--- PASS: TestNetworkPlugins/group/calico/Start (124.81s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-801672 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (12.93s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-801672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6dj9s" [4ceb8088-41da-4a92-a9a3-30615481b693] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6dj9s" [4ceb8088-41da-4a92-a9a3-30615481b693] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004482089s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.93s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-227qv" [1335cb47-566a-47ff-b0eb-cfcd0a40c46b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005334651s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-801672 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-801672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9xtl8" [95c57fa4-4a15-4749-877a-fee86d8d4a38] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9xtl8" [95c57fa4-4a15-4749-877a-fee86d8d4a38] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005738612s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-801672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-801672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-801672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-801672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-801672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-801672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/Start (70.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-801672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-801672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m10.121965179s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.12s)

TestNetworkPlugins/group/enable-default-cni/Start (71.89s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-801672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-801672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m11.892720577s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.89s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-6676z" [ea40ba40-00b8-4ebb-8a5a-fc518e32cd82] Running
E0829 20:48:23.562864   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:48:23.569239   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:48:23.580639   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:48:23.602004   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:48:23.643370   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:48:23.725028   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:48:23.886522   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:48:24.208213   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:48:24.849729   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:48:26.131581   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005126605s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-801672 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

TestNetworkPlugins/group/calico/NetCatPod (12.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-801672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-htp99" [657a1f75-65f7-4edb-bdf8-afcb34632496] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0829 20:48:28.693706   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:48:33.815354   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-htp99" [657a1f75-65f7-4edb-bdf8-afcb34632496] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003964066s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.22s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-801672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-801672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-801672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (70.19s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-801672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0829 20:49:04.539882   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-801672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m10.192454693s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.19s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-801672 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-801672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6wq9x" [0f755a12-d2f0-467a-86fc-62edda59a963] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6wq9x" [0f755a12-d2f0-467a-86fc-62edda59a963] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004262322s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.21s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-801672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-801672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-801672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-801672 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.6s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-801672 replace --force -f testdata/netcat-deployment.yaml
E0829 20:49:30.841652   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:49:30.848047   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:49:30.859414   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:49:30.880781   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:49:30.922265   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:49:31.003770   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-801672 replace --force -f testdata/netcat-deployment.yaml: (1.187204036s)
E0829 20:49:31.166048   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:49:31.489978   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6mc7x" [eee8f434-e68d-4942-94ca-8ac31d87b08b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0829 20:49:32.131410   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:49:33.412817   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:49:35.975145   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-6mc7x" [eee8f434-e68d-4942-94ca-8ac31d87b08b] Running
E0829 20:49:41.097059   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004111426s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.60s)

TestNetworkPlugins/group/enable-default-cni/DNS (16.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-801672 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-801672 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.162443445s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-801672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (16.11s)
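Note: the first nslookup above timed out against the in-cluster DNS and the immediate retry succeeded, which is why the test still passes. The probe can be re-run by hand against the same profile; this is a minimal sketch, assuming the netcat deployment from testdata/netcat-deployment.yaml is still present:

	# Resolve the kubernetes.default service name from inside the netcat pod;
	# a working CNI + CoreDNS path should answer without timing out.
	kubectl --context enable-default-cni-801672 exec deployment/netcat -- nslookup kubernetes.default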

                                                
                                    
TestNetworkPlugins/group/bridge/Start (57.16s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-801672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0829 20:49:45.502202   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:49:51.338412   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-801672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (57.157387778s)
--- PASS: TestNetworkPlugins/group/bridge/Start (57.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-801672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-801672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-145096 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-145096 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096: exit status 2 (231.57024ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096: exit status 2 (242.810127ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-145096 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-145096 -n default-k8s-diff-port-145096
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)
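Note: the two exit-status-2 results above are expected while the profile is paused: status reports the API server as Paused and the kubelet as Stopped, and signals the non-running components through its exit code. A minimal sketch of the same pause/unpause cycle, assuming the default-k8s-diff-port-145096 profile still exists:

	# Pause the cluster, inspect the component states, then resume.
	out/minikube-linux-amd64 pause -p default-k8s-diff-port-145096
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-145096  # "Paused", exit status 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-145096    # "Stopped", exit status 2
	out/minikube-linux-amd64 unpause -p default-k8s-diff-port-145096
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-145096  # expected "Running", exit status 0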

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-knpnf" [04bcf6b1-5a80-43a3-8500-61b3c7f2c068] Running
E0829 20:50:11.820387   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004595103s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-801672 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-801672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wrnbh" [36664307-526a-4e09-b6b1-e0661c865776] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wrnbh" [36664307-526a-4e09-b6b1-e0661c865776] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.016933114s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-801672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-801672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-801672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-801672 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-801672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4rdtt" [9254f594-ab0c-45ed-a39e-11565df02231] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4rdtt" [9254f594-ab0c-45ed-a39e-11565df02231] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005542572s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

TestNetworkPlugins/group/bridge/DNS (25.98s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-801672 exec deployment/netcat -- nslookup kubernetes.default
E0829 20:50:52.782412   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/old-k8s-version-032002/client.crt: no such file or directory" logger="UnhandledError"
E0829 20:51:07.423795   18361 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/no-preload-397724/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-801672 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137538663s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-801672 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-801672 exec deployment/netcat -- nslookup kubernetes.default: (10.145682695s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (25.98s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-801672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-801672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

Test skip (37/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
265 TestStartStop/group/disable-driver-mounts 0.14
281 TestNetworkPlugins/group/kubenet 3.3
291 TestNetworkPlugins/group/cilium 3.16

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-962462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-962462
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestNetworkPlugins/group/kubenet (3.3s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-801672 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-801672

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-801672

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-801672

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-801672

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-801672

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-801672

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-801672

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-801672

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-801672

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-801672

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: /etc/hosts:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: /etc/resolv.conf:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-801672

>>> host: crictl pods:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: crictl containers:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> k8s: describe netcat deployment:
error: context "kubenet-801672" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-801672" does not exist

>>> k8s: netcat logs:
error: context "kubenet-801672" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-801672" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-801672" does not exist

>>> k8s: coredns logs:
error: context "kubenet-801672" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-801672" does not exist

>>> k8s: api server logs:
error: context "kubenet-801672" does not exist

>>> host: /etc/cni:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: ip a s:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: ip r s:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: iptables-save:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: iptables table nat:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-801672" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-801672" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-801672" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: kubelet daemon config:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> k8s: kubelet logs:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 20:14:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.51:8443
  name: cert-expiration-621378
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 20:16:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.83.47:8555
  name: cert-options-323073
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 20:16:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.140:8443
  name: kubernetes-upgrade-714305
contexts:
- context:
    cluster: cert-expiration-621378
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 20:14:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-621378
  name: cert-expiration-621378
- context:
    cluster: cert-options-323073
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 20:16:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-options-323073
  name: cert-options-323073
- context:
    cluster: kubernetes-upgrade-714305
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 20:16:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: kubernetes-upgrade-714305
  name: kubernetes-upgrade-714305
current-context: kubernetes-upgrade-714305
kind: Config
preferences: {}
users:
- name: cert-expiration-621378
  user:
    client-certificate: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-expiration-621378/client.crt
    client-key: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-expiration-621378/client.key
- name: cert-options-323073
  user:
    client-certificate: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/client.crt
    client-key: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-options-323073/client.key
- name: kubernetes-upgrade-714305
  user:
    client-certificate: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/client.crt
    client-key: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/kubernetes-upgrade-714305/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-801672

>>> host: docker daemon status:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: docker daemon config:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: docker system info:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: cri-docker daemon status:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: cri-docker daemon config:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: cri-dockerd version:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: containerd daemon status:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: containerd daemon config:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: containerd config dump:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: crio daemon status:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: crio daemon config:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: /etc/crio:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

>>> host: crio config:
* Profile "kubenet-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-801672"

----------------------- debugLogs end: kubenet-801672 [took: 3.150417228s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-801672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-801672
--- SKIP: TestNetworkPlugins/group/kubenet (3.30s)

TestNetworkPlugins/group/cilium (3.16s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-801672 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-801672

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-801672

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-801672

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-801672

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-801672

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-801672

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-801672

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-801672

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-801672

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-801672

>>> host: /etc/nsswitch.conf:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

>>> host: /etc/hosts:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

>>> host: /etc/resolv.conf:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-801672

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-801672" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-801672

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-801672

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-801672

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-801672

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-801672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-801672" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19530-11185/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 20:14:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.51:8443
  name: cert-expiration-621378
contexts:
- context:
    cluster: cert-expiration-621378
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 20:14:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-621378
  name: cert-expiration-621378
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-621378
  user:
    client-certificate: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-expiration-621378/client.crt
    client-key: /home/jenkins/minikube-integration/19530-11185/.minikube/profiles/cert-expiration-621378/client.key
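Note: this kubeconfig shows why every kubectl probe above fails: the only context present is cert-expiration-621378 (left over from a concurrent test) and current-context is empty, so "kubectl --context cilium-801672" has nothing to resolve. A quick local check is "kubectl config get-contexts", a standard kubectl subcommand, which would list cert-expiration-621378 as the sole entry.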

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-801672

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-801672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-801672"

                                                
                                                
----------------------- debugLogs end: cilium-801672 [took: 3.002942429s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-801672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-801672
--- SKIP: TestNetworkPlugins/group/cilium (3.16s)
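For context, the skip recorded at net_test.go:102 is an in-test guard. The sketch below shows the general shape of such a guard in a table-driven Go test; it is a minimal illustration reusing the logged skip message, not minikube's actual net_test.go code (the case table and test name here are assumed):

    package net_test

    import "testing"

    // TestNetworkPluginsSketch illustrates skipping one case of a
    // table-driven test, the way the cilium case is skipped above.
    func TestNetworkPluginsSketch(t *testing.T) {
            for _, name := range []string{"kubenet", "cilium"} {
                    t.Run(name, func(t *testing.T) {
                            if name == "cilium" {
                                    // Message copied from the log at net_test.go:102.
                                    t.Skip("Skipping the test as it's interfering with other tests and is outdated")
                            }
                            // Real per-plugin connectivity checks would run here.
                    })
            }
    }

Because the guard fires before any cluster is started, the debugLogs collector that runs on test exit inevitably produces the "context was not found" and "Profile not found" output seen throughout this section.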

                                                
                                    